[Binary artifact: tar archive listing var/home/core/zuul-output/, var/home/core/zuul-output/logs/, and the gzip-compressed log var/home/core/zuul-output/logs/kubelet.log.gz; the compressed payload is not recoverable as text.]
')yVԂ XE'8ߔ;#E` ɣ u+HMN{\|YսC2D܌y\W48A8swa}/{~~?Js0y'l}H3𙯿Îey7\QIJ pբ\N{sϡI uʪ[ţ◔YIVYُr0k%t:+ }<d?دD=T-/k8zf򥷰p0zH>]!Kw|MУ|NڮU(iG4nr,!6+NПϸ%!tiཁerI|qC 5U&߇5ҷ31GeT.*VQaP>T|G0Zqs{śgX --+7n`BAo.Fƅxk ۬-w7>.c{w˺h:Qpᢰwm$%:Ǎ#IBO3# ,f=3x 00L`%Y<}쮖ʰ(*WE|Ow-t5ݦ"pb@(ln=B[R$c k1]X"nRpzŷTYN55<>|(HfItޮqf ],]-j3Joj3;SJMsqV^8Wb oێ!ԭW&EdNEfThcoiW>bHhJ$AKP 6$I;!JxBe 0T caV @ӱhy|~.!"GɼDS 30v0gQx \DxV MEJ Iѻx%H&S0ƭĪ'aRJx|$$FH%i\1beF6̺*"$H#}TrwJd>L[ wJ8D.jk{{z*i"-yy.%v"Wi"|RCmx@(Qet #EHNusTI, Ih*3e)ڜ5sgXm:RV2B] ՀgWE2ޒmq ^x q1x1n_8b T QdqaIFJf""PΣ{ mR* ,){> &CRؔڠ!hƤ1O\3kMg?b$n}Ajq,jʨm$]I2QIft,KYd0R"RƂ&B̑&kUCUs^J}J&Wq1KDU݁]yzخ+=`bG*{I%M:7bN1 M 0 Y`)AiiWHZHL&@G@0J+H RTJx7[nk0В$oˆK=[bw7˂[fDwSLyKZQ{\ʞexBs ˍbH(#DzB,%?Oo=[g@/ yrTFZfMr}9 zx,sQ3LF#qE7gM= O_H7s4 iQfPD;[{eBU`e9Kʁq};f-ZlCCM계ښ8 5,$/FM;wiø1f1A8V K5ALJxTk3KzZrNqz3tY`7<j:/(ZL't2^g}UOŅoh_fS;Z5^g>IxƐ߭|6~J½ ?4ziVo!ϵncԿR>5L{oՅYfuosտo;Ugcnσ0.{i?7?OǾߔ.nxW>|bt-E.}o,?ᨹjf?.=ѧifw[?y`1pYP%!Ч{%K*T^~ickfʌbt'8ǒC*JE"Zn[ksd|'GyqؘҷgRS(1zlvtEy`SZF*lв~Øwc hdbapw <,lکkڰKI0 c M˝[y-Fi>$@Rk*eEtGhR"DQimeL`T`~yNN*x*|yY$GVCdIJ.=Jm@C,TN"TN^@o^F=ap]X#(J%~DV^گ~-"Dw^p: e>_VE/~+L[Q#ήOgSbw?Wtt}o>m԰=i˛diN|L(#zwDw|ֽoMgS"7_҅"(|Cҽ8tv'&չ3[K#zAuذDuu'hZˠ`VwGl. 0,WEL7wu/5WT/g˳RezN6mg/W EQWHAPFa"G"&eX?8S w)(tn5)9WRoҼ-:'v`xyQVy$d *8bMUʗ`=) G)poq@ˬ)7N7ܔHO0' dEm[7p"hBh ,Q7 ,6L m_im%ٻ3Oљ'±͕"+'青`Y:$tp@*y>;st>%V }$.2љeNmЕMM@Ӳbd_ ~Y-Y|疯=]/‡w]:$wNm̮V?xmi ;K*Fi,ZWLrX6y/޽vqC7ZtZt͓dF_tOѧFf3_X[4W_5=f=3z'6OnYL*ȫ:|yBHFF7\AuƫGR~TMРd$TT5 tc:I=yel4h` 1;RlJ'hi3sK-]~/Dt-t3d ݧ>(opng@%C709mL?;/M&o^|CZ_/bZo7z!BQYcƩpùsٻW7GC\FCUח\3&}xF&y9m̒&Tl:jjÜ]Zu & 鬎1x(c\c_:Wj~m86;'Y޽jG3 ;Ԅچ}wg˟F'؇^-cCG-L6a!4KEsY/Wkҥ~7]]0ק9)/\-n\DCg`-:x#ٚ)\5S=)`p0UCN1{)e)gŧ?F@Dw%eLmΨrۤzqRxa;~Ɲdz+ljŖN.8oWwU!ed)}17MUQځlٚYzT\u~TѴ|B}((o22f5,e}lş5Jy`>bajVI61_ɱ%eB%ilMk,]iZ[_:GBF@-.#6hگ/0v"Tj'=:fdyJƜ 9X\<s9YU-wTMYƓPJJM seNJ}D:U2)Di;z7Eעȝ$S&mI!a+V 3_' H/"E!-NDz^,tROZ.E#OAQ ׄOQdO])i f;dp+@]+(?׆RC.ಬT ++5f AXG @C9eS26XYt0wD `L ӌ/%RBbYLu>A% `-. `I. pq$VZ!hަV0J892jhP5Ho&Y#<;"KQ @;dj@!OM JA]ReFVݓ.PVKtbϨ`3/oPH&9%J,"q 2Z*ȁ  v/PUTy Tg Кe$M'2Й`'8#U罘H+fD' G1͋BIJ!`B6}o *ܹ[`GK_}͉ұkW f=fAw  mz@-$:hɖ@ƛԁq@xCaӖtdUt4كҕjIJ%UA 0 CMZe& :#/2@\sHV **e^ʴJǃ5c4-zOx H}辬(kIv u< o 7`Lt+ѠKP-"W1hFyҶAMVzk$@Tdx*FI׷Ajz1΁6tt Zc`0B5Ԙ Q{CHUf}JPӟtIBwH ԁ *-fݱ` &KOi-;ݑl,4~3@ .njvHqAkڏj:O CIiXYvcF:&gو.PB 9QbզX5 JZ!gѝ8Mh% Pf%BJMKKުh" j ߤrVD 4y:hIUT&ƐZ*m&ǃܝW6{ ]Nr{|M{z8jH$KNɪ` ֣;V$L'Kic+q0M˿ Nd2QGmC^k Q^y(Y=yp4vMf cNaBϟo9)͘#BD{=D+YLw]Pz@ Aj0FQ-@OOdPL;PqCoڪGbb G+ EIC펱vi2 ܨ3O贚P1dZ8R$ 4FDf,tXLz?t |ՠaH?xDmDVc- Ωm X{uŒ35i&z`IQ{/AFL(X-E~\뜼Y%kP׮BTjoZ 05^%!֖*Qe3|y=#E6zn7sw32.o^ao68G&W#]a@Xoƥ@'Bb'qY N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@zN +A9(` D-N ʧ(`'st`d`'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v:0&'&p hwQ@  N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@zN m d O p@'BA9:0c'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v8>Zuz~iWEw貟\o{7}i9FDwcRwPˀxsXl] h"샆\dU&mWy: 8a,2l(Uh@ mg%vg_ Z%=Aᆆ[@zɨ/^~bQ>vDsuq>o4>טWZیEmziԈ5N2lsW-J_8/.L_z5)4n|:jew66ikëJxIq|r|a_{:OMa4vQ^₽9uXuct}~UgK*| j?%WoٱlmY3 $x9,`'q F?Fh't>vq/q~F*=B0c%r/Wu/CUEVq Fkͬ&mc0{(fafү稩8w~md)M݋!_I=!aЮ*2q-ڄgվlU ~w?(Iv4K>$Nw-ew\oLdzlgm?,֡o඿Vo\XU]-jQN]Zut{ ]˓m ir<2(gkV7O`+&v*V9̇ߢ[[vS'0pH\r /@DAPr6(]LtZHY :%7OTuu62URxK]$]wrm>RZVR.O/#]X;šw}< Ʉx#U iB4B_z?:ޛ_r,4KʍίR6 o>~qI~s; "ը?iN-W Wba:<*D?t~u^lr/oV1;fJ/\5g>y. KKͣF =<~!. 
`pc>dSmb/جύ8ߨVQ&Mem)N!,(񴗦{;h+~=?{N4jw5j/nh}L^5LwUdin:dќa}p^,HaV1l,͓O/^/ɵűiҢ}h,jS y5ޢQd^_:l5gCW7w+Z:o EnIJfn״i1|*8R9Swymӓ~#Vܿ1e3bV6wɆ7pH1Dڊ#@'F5 rr]DCV "p yq ; Hy Њ/iS@US\y|Opp<@,xm 3CA4B.ϨZ 2vWQO)筇.%L֝{ WA@:u]w}`ֳy<:ʆƉ*VQF7DkWUYY45R^x%)WPhs|= ,JO.(=4h4^bf.E,ʳHJC e,q-آB4)D;MMq dsr-o&(@mrր5I, UgOJdR Dbx?E"(8'2H !w*p"U*jDQ"iͿh*b M)E/Ū[Q|ƿ",E쐑 s돧!*y4{L`x^ݰm^29O[?9Ky%t00:ᅯSxE˸H|FxpNS E'HQ4*#ZXtH@Bjl :%nU J nҖXuv[rXX)8¶Pp^ܶ$# QΚu;_#?* n{3J4K!Q4.=$ /=^E+.6j砣%!qLl`cN##ڠ=1|L+]bnD1OjC*lUk[M%r/%$ c00޹E E\ un%I]Q{T#2"Ü3,z m ꗗ,2%ǘ]u!3l Ū֤~u4SP [DZ"n]i+! ɍ%ܻ"=<#le4{$sQp&IE!lg\p^( > 3K{r/#C8uRk@HIahZŭIdLʲYp<ИGURGNd+D6 \vvda)8w6Y[qȽp6KNh\Lޏ, ^=FD`3+烬&Br.uCNANl䲩#"ҋEAy'Of]NT2Yq!HZJ0$(;.9'h PtJ 6ӏ^m6ByϞ~PayE tt+k`"YtQD:8UY1E~,$l`[CB#HZX,w󷆡5 Oh%B${@ycVZf@$(%m#w`w68נ3B$/[v?3͋FxҲ>1q)Hɒ$ ]ch+`Cẽ&11<2gi-fyRD8aJHq}`'HyD$%1HKrnyL8d8Ξr|(n^qH`< ;r%J2kSJj}1?B :/(2U̮vWJK%xVC?7܌{K ,⓳H&i"w 2#SBQZ3^(#!$-}.2M8r.E0gs%RIg<0#B4UҧwYIg5WQL^XN~0[4.d߾C%EA'3> '07zinJo$0зkzՙl|u?v<~(c2l XF@j0i4k0Gpb9Ttr&sP!]8 #?_]k_tLatLW" /Ar5%|Jm\\zgʿ20Q+!aBD" t8)2 d:TgSӿIR-gORO|Tc%Ǔ^FZּ094CLFa>==_۟8}*|6݀>.$/`p\p Cg =lgdv Cy ǓqpkZItEeFJp}~=0PޓV[ι8@8xխh6x^efi?Wf2_[^\ ]H3u(+Q&k UD7Y&% pԘ/&iJ46*gUs7.c m4 kU~9K._K0g:qӟfjݸqo[yi-s(޼8^xϐ*ڝpMUw.޷/^oęRLT$ U+vvD Dm}Y-APO6OMkUQ+7>Ey!BٴЊ|s.+|zY)KJA_}$}f-Vjzϒ* &3z8UNaRrG-Ϗ^/7M1~ &ww&0?-gnp%~E[GFF@&dl4ΊY>[ JYE6!@6M#rwȫfyVk+/)> {UT<~_L^qiER^c )UBBT \sl&r:iTfCQ{0rHRIp:%Jmc}J!4Gpu0ܲ-Re7R3cB2tPS0iх5˷__|q4c GSxaxLSŔ\`5xccҹj~t).е봭_a1{8SJQˁeڨ*{-,G PwC|dN(lw r8dD-h-hN)3\XTcne(%d\$ +({`3nB<UX-zg՘IDk1hFs+%lUVI:tІv%ww6etfoyu5 o^ S=Rt.еgm;/nѯz;WߵyֶͣmvS^{mܸs<& 﫯]2?‹ [^M47f9B Օhկo ^RcyG Ǒ5+>dr;d)m6;e9(9?[Lw* \3xn23Ń`dom5eSН;׬}4.D**E%+Lr.j5)aj) :@ KRB0sC!fH |sVk/;A,} *۲}-@RV7hoi l|!d&هgE5+|,n#iG0JZ׎x^:=4R.F\"켖~9SE G93ك tOsClt&DHF\RjVGTz91Zn7B(QmGMudXDcZ)"?vܮCO&of8ϓC^[oxSeb>)[a~UAK\qXtg-h, f1gTQ˭#:F ꠅe!n*W8k^YUp![ An`_1~ezyx}ݳ2نg`yPHx3jg>sWL#,8ZIZ=otPySv_|م+EQJlv!ܗ&߇"vq]~>x˾y;8Lؿ_.qաG`<70jHi;wV9~otCY_˾f_NpÕ^X-U5#za{{%/Q\Z.% FqaX3ugRSTt{67co'_V3.EjAz <&ΉX ?l}y1ĠTJ%" U)4eɗ4/Yl UpݸTRŸT^۟z~D#B irAyd{ʐ\ĴĴOQ`ڗ2&َΟ %G Ο!I*~(mU]Z;OW %=m|t(iO@%٩|]/  <_~ n̎˂5BX˕`_,rv<2mrGvOQ&+/Z*ۤŧ ^f 1SsR)r¢]w mMlל]эOU!ŃR%)٪J"J ZTHѷR iiITRт\Gz$X4gJ[$JFEŪ*Zayeo߼>= CZS y~5@x,1Ύ_i:lڕHffY?f)WO"*0ꑪsU-YMh캪(%fU jiN.Fcypr>(' {8+69 D/fgl(oК#cc!59\t01]oG4 :ui:%鄖uJqHE>W`!Yo*%/thT]+:dmѕ޲1߯ 0f_\tZvB]Jj`X^ٍ&gE,SϦf6Ob  Gə*,:K}z=a) T9BBn93FJ[DةIE| /q3V~tt7TZ n~LRO$ֈ\BW -<]% %b#J@l\BW "]R)is=~hƓb.}ga񚸣Ti@#]*FZcR ɘcb6Zt,jDg} r^&7Nz\ucg-j 9Zr;AfBɗUϦ@RGSJmΌ6>fC#;L>Dkzd$? 00c -LBy D#UP_S`uJ( >+AQGt)vUBR]=At#UKt_*?ؙP'IWLm1Fca"uL1Қ0z7n_o(%}:ެ%TvJ~8iZ3IdJL ]%7g<- UB٘bts"h'q$8~l $ݸ#\g?{WƑ lgJ}}llV>Y3\iYk9H!#r49gկPkr:D`<ɵ|᝶螌 GJ1m "ZA,#H!kX{bd(=Ccɭ 3ze&4ԈvGE;8$2فvcaR"zfޚQyr慙\ \9DU :M"xt%ba F7XZތA]mz`Ӗwgrr5.uPk%v6mKII)W"Eg[)^ /a $I RFGAQqTBJh{EFd>ZRXhS\l ڦW12#XP1g&H5g;2'􆅽Q}P|MgI2>\ݤ&1O6[]Un:~a`FrW/%K܂("qTb-PN8x e!d\gVU]U^#>cEƍ JI-3:]kbԙZp00ۍࠖeAbV:Iì5sf4J֔^AgCX_lc1ͧWvdua,(b:Wᡖ+0acTj4PSxgHe0Mx z֚|^i֌rab;&4{ }B=%g'hdO{ 5W>LhLjJFѲ[nO=yWn6ڷmwoѾWF-Y#]ӋlVHr0;ygz+çuQچ?1&ho4x0RDc {#&lr#&aR/eD26)V2""&ZH0<)c"=or֜[ڤOqXH]ՋYUwc5m q@<1pf0J:LpwQDj`@u覴 *-^Grw{y\a.bH3Xj$g3c0TV;DfɔBljZcIKnoI+֟3}: X?ʪi9Ckݳv[w.cJZ]h|acp79 zy⁛/lhQo{{鐭y=#$ 7یܤ6Df\{0oǼm 3LT$탏Ն}º;Ng"K}CY{Ҏ\ }yi2m]ۺPDTy%y9'AZ qAPtAc?.Ի1^d҃ޟxv BH4\ %GQeS2%3paQI`icx^K xg""úP6mA?`Y}:J. 
|z]kB^^<*TSܻ1ZbJ`?M OZ54ՊF Vg=H۾R'h.{=WO/`Sz]xkZw/i[B.B<Gq ͆xL)b *"ZHDHXGyj&LI8FҎY@ZZn}_[sn=kN"t/^K6% #_ keWZUCۄOekZf[Ňv!emFYo-)hZڎ;P3X1Hs @KG&v`R4#)9D)"Q* JeT$8"⭂RA.{F##о9Lz,Y Y%YY*deI\>/~:>/_x3Lߎ~>; _2 `tjwߗvp>YUC}]TT-:ۣTm}vyM/o bZݜgRͰ'w/ ]^durۀiqg͆hH^i}D:*sm[%)]tT5氩9$]}&G3 ['?O3jpq>Z+ALpV^k5~G:ըtfԞ \KNZN Sj&?~ AD/2]w]}YG9_۫}dĺ b)Eyz#5/2pl~擝B\5Dxcҏ 0>{VM(6CQYq2i%{tQ7w\[]M6˨,g4 ˆ2˜o8aQjGW`px:Rh\6PPĸgA4f& Ӯu1HbC#׬૪hѝI=vB)] %%96Uk:}pH,-KYuji~_3 U{u+uSWwq _S,o8v}ZL{+)Ϋy'n,PNaPgECx4 6M*вa GeFi8;n7lבYRPAF=iR;7k& [h/1Ggh;+I0$P&KKf* r%r00YYﰣP400 A?J$KĄ33l jqW/];7Mo"obQޝO9mYZ:VȀ{ˮcٕP] Ewm &شݏculf 觬X&R,"%Qȡ8#g8sztUM=Y \yscw<Y" RF Q>$ Жs СD-6$cA,,^KČTe+3*BЮwG]L 50R,{PQ+oL\ gWsղ-m97w) aK`ϫ߸&_.4Os-fmCт[\:zAu5KC>pfғM͠;nڼ}AG.\QP " r/:T&[9]OVe{|}ruW^9~yN&\oqUo!q)\Cptf|%҇^P۳Xfi/V'|?~>h5h~B ;]⺨sXߥnpkzC\w-=o~PW&蓫ݵBSםyϝG|cK}4E(Jb.&n_Y7 81Ғ^Jܲt$@@UKoId--Uo?A-i)v+jqЎZLJ nG63F?ϧIlk vдi)PZLZw~J[PR HN/#Qde u\7\tSԞ <ȠDe/\ݨmF*gxn(! b2, ƒ:,5HteʍgR g̅Z 9q:X,׮K eWqT_gr55%9(^2=lK"Y*ΥS")x +0tE.qcR p E6 o+ޞM}}د ʕŮ7w跃}s$FS*i 07I}I2,hdXP9$$CeP4/p>f 2>fZʞBT"TʽA48dC{7pic,1Y(IY!jzRQ'Kzbv.jr^D@ ) X0Dmp;HD5-N۲-+L _df//gW îpHb 9HwsV٧Q?}zO ōBu JGR)r gt C}(/m0ݠ`xWnΘ18 8|6Fe((pQ@]5Sۂ SzJJ0XoP\wjvG-xeTZ#@U%$=FdXFY%w&-mdlK֡Lw¶f:CDJEDL2H40Gi幰F|%Sۖjrײ9_՝yDqμZf)1PV0icP-:XH42f&Debh%bu5դ;f`ż07&^~W.m6Fuf v=j%㖛/f7ފEx \)ʼn(JwJ,$ 0:xx2UU Kad6gPfsߟr)f]jޓ?H?XlF/Y/Vla۵bېhchD9D[re`h #t^S.%؆$=K፩mu9 é8িbpfЊHE}(z _1XuGW?~rRr*6f q_[0~&5vG8gR#a_) Xu @N=@FGw~cD9]{ 1N#ᧈ JRr|V\|TKs9%Q)Z_oց5+UälO^TŸ,B_bW @LW_ooeMYrׯ!+O,vcҚ631_f[2SMqZge۞Zp>ZpD-7xQ3N`>* SW', mK͔(6K7LA\auqc/]Mzt %|b/S1s]gGiy&8}G2$ZY~ŮIM%M֓x5xĽ)NK-j:QpwXѡ)=q"+ŴVv= T#l1K9yU-j,xO2O@0I0;,'%y3[ :jV!gCy[iܰTI-=ddSZ֗J6pJ6% dcaL`zCWq*h:]t&]!`%toXju*(r+j@38:]=##5Ў{]vzf ӬGtU+T_誠=18 t[j `zCW׬Dt Z]@W/I*5 ]!\%X_誠tUP0+z9}<;`i=̍|^7.QLyq\]!JTzt9d=+,) ].] NW]@\a{DW1+k ]v JHW;4[FqVe(j_ުK)dH'#?LҲ4Zbf$QUd1[#b33QHx̂F9=P 4RSAhovWپV ]v_/.qs=\T_]~_7uy|٨ ftk7\ʨ׺}]zJf0E\8!d iR2c}o@z)q-t̐-`o/%Mq_Fg]>ׇ~onL\xϜKWU?IPn(ۗI Iy">DE9CMxʽ,׿"L׊Cֆއs%_o};QYxK#5q2!$4-M6w ؟B4f[am>)t-5<͢9g{SRF]koG+~ce:7 dq|H &R>%IKCGe>Ji.hl1;&mr2m6 唋!X!o; Yv聳h+qgG퍓 UeLZF430K`QP|B;YuԌ:z}<ؚeּ&KMZ4l{ .ۋho;8WNwzbT\ƼVE@8)-1cTČV.\*t{;!8lYkT>7Y~{Ƶ}F0_A&Ft,"YɽtrGk//ۃHhk$6Ui ch)>3gl:˹|;p~UOjv>){VV.G]ԸG'(-mݽYgH)+JYg[9|~“RdG~۪͇8F WVWdE{GR.r]@ *$뫠˭2=uFfxkRнh[=?^oJ傱3ʁg,' rߒșꔟ&rĸYd@пbp5(1BV#An0 l蘟vVB:%QZGƔ7a#d[ܚهFS;Cִ+@hأGBGl!>![Ra#Rbc-K-w5*5g\NHc CAd1W ,C%qTE =c;a8V~r[I<} 39PJrDₐ\;- 2̖\!Q:eU|sΐ5E,Gm̢c2YuYՆBla;*KC3\?rVY}V@ӺǷGψ^7hwzozHHܠKk>! F ,,W&` Il0|:}m蛱e~<+u3n 쬷\Ny wW~x<ۚ,R~6,Xp}MxxsT֧Rx F]!^ÅTs13V=3|$u~muCocwfV{WBҩb7&zכuf'jă)hA$5:+p,dDهc;HDAZT%,'h3N OV&nȬw;΁fcu>|>}F>#ϧÏJnvRzhK_^5i֮x?F{1 $w8M7nMr ڠө n *,rEkW9Kk=D4Ѫ8h'rsȉ4w&:H -d92[O`ԑI!G6AuU I$#$RdǮuVBD]sw' \c Sx*k?Ï?VޔIW\nr[z8U3nwX{3CLHLU+XTUZ Q ǽ#w,]wsw\cvip\mԿSꩰublz}"Pvى TN`HCB2j .z]OA'S}]0Dab|~ /ԝU~DFyNWZ1*@w(}7=RFZSE ] eVI)CyU3F`zE9ڜH˷Փž꾪e?FmO jG=[>N#tRV IoWn*H+͙0ToG{b9a1H4K M`3*jQLGcH[K*-:#8vqװY-̚F\ۭ&tLS=*gZ!`Q )AX&Fa]F!'Nw޾ W[Uד/0Z{6x[ 0{W3ػ4&fkbz8x0i.% Ǫs;J˄ iYA51*i! 
A w/޵~ǃtF7nήF~ Xgg>GZhݝж!k@m2և&׬'S7{\JA03pg}[L(U>%.C :d.B`T;LB9C{(l?cїΪs˔W Uy贄U>2e Rex8YR :*\tދ~?]c4=t )uYG#Ae- 2Z!Q+A,Jţ,9 RvAp㲳PjeϾ>Y3Lo-ڄȽ2KI A?O{?w ^I b#M>umgHܻh*oQpHѺK_U)r]Mn)%,F6øR2obqjq㒋 Tx'yp6,HYsoTG˵ߛ8Hi0Q??/)֗oj<`UEJ gTF'Ѳ7RlFs楼i6P_71me j6|Gƴfq߬YG/?*ۗMY1X$k O^n;Uo]-Mg9e-ķk[ͳfn}cL^m?M`4Π۴E ?g<[k7wUbcYFٴ'5ia3ʍ=bxKǣ^xL[V-ۻ)_Lz`nR./pr5 fVbLz ),s#aǖ~jRPlūu"8,yүoڔ"K֚I)J[ b{ٰO& ՍXT.[Ai9~H\[ڴ[)Be#n0N7ӣmܯnk+Q;}|~{sD:<,/8n;m7 nJ6D9ZqcHoTHt!J†$L%(mA 5  U-\);w9yh"i`*E4`Ngvɑ4tHF&^zvvtCoVJR[r]dILRU L"aC\{c$Q--wUYvzL516S̝uy$p OY:lQZZQ׭.{8`ze jR5.ETn؄Ѡ6qQbC[B5!,*9>n_%QLTDb_BΙa0es(Ueځ:ìc[TfBӥk> O@S C9bBR(FYYW쭌1f3pAГBިcn]>:E8%L Uqzs`5F [\i7.`"zT86fJeT> [8+_UIiP(e J8HD3ef+OrT3灚 BC/$_o.\4%̖m>y?(^H|1ӫL^}>{L{ч=jg g'Tȵ8x"f]*8FCLGK)1ptb22k, L:Cc-樜6eDz<rMd_H"c `XJ:3zBO<_(!"+Ȋx}:d IQf? L$8RPQ82ͨGڑo7N,#+ ^:⤻ʹ}0f> p"d8 *!z҃V$41I T{N7JZ~"¬u`r:QD/quC =ia^cIىO#Bzɽed/q9k9?Ŋd h0\N-zϭlKN//Mf^7NCŷ,wdO`}yQܚoz;P*k0f͡~g)ݝܜ0nEVYYw4\Ti~B ckupnJ٩Ҫms xd|q`?!z`|x2xLxW{Αkzy|ѓ?PI43MsႱՅ&T[Agxt) \w=/qy`e61JR^o ClTspEk, |U5 VN|LGŁ is"~Jvەci= vѼ:;é! (DY~ /26GNQ=w\Z4lJRg+mVY:2.EٳAJ. `],K˨d3r1+D sk YSAfvvH@评N2O#so_[ ]QHq?mn|9ֆ_9HJۜ 4dx8QQ/m:>e"n=P5!kk 9T}u\R5) h S"<c>3!+  9yMs3s Ws<]]_ISSo8.@}:Oތ+cSQ?n~>~ݣ?FGO5b'#z%7GS! {a#'ezxsxSr^8w-f[D|Qz+ˏHEk4G*j~,:(0v:bN ꠸R7"y}+Ar/:Q?RJ4 #43klbL+`i!}E7VBV-5^Օz*GeUƣo V2mef١Ee ]z^J G,y2Lj&:'hihB.V:>ET68kHr|,qyAmNfo#݌ ͸%nLfuR#y^!Z: V5 cw/OOcGۅ1OYI璴sK$7Y]f>&G@簲 KϨeЌt"hӌy=&X2ǍT!BvJ{ӘGIcs&PfT w;Qnrr3)pA>wF+r<˃ Vgqyꃱ8;l۶}R=ѭ>T}ܵ=sZي`Kb``h`ȃpj9 R, ˎ`j#a) Oc*5(`:"`tq /[[kΆ{&b_=Ͼ>L-꘻FaقPs;fK7X#1^d%òGd쑛a $C伡,0b,EKh, @a+ktL|NŒ]zT5QH;с)Edṵ4N+!ʓH&ֳh](0u wKYS+&O¹ a@ <0cA4p9j4'rScMM(yKaR0^Ӟ T,H z[{6^H˒sC¹(GSrZQz*: <)`☌1\B8-ȝӾd8+31t+_^O?ʐ1 ۙ (ǻ_v}W^vmۗC#.%5s΃{W MmRh?7WK}u^,Lct}y\S 3.yK|f>ӯ@c1:N5U+SCҮCEM?e*\u JSNp|><``rۨԇpPt?ş)NIqp#{rR;P\ϾtWN9Dw /,%BBkX7W(!xwܗu#6- |3(.1; HUo[NsιZo ۴!g~{>.Y(ָw)dmbi2Kt|~#wfU#F~cFa]F$l9 CRsVtN:X -=W&<ȸ'|:5dM_Mƾ4yzwo{puHaK/TuɗU'<:27-nu-4uߟso B%YեfW/AC{n:t˂ VoJx"b9xLX[jܹگ`:/g/qs1tdP2OioI2[ _X^Y<>)w6^I 2AuX\b,~ y,P˕dq}S:?{84o; uJsET"\"& yUKH,!a<S|&[5Xpk͜Iw7ֆk2;-3g!i#֯FP]{Au]zh)0"uxOn|d:Fh#u٢"759`)˵,I gq ;:P#b*Nh"6( KGtN!-0K* !1eitb&(:o`k5f 췴@e4zl5ͭM֚!%Z ٻIno%'>/0Abs>`,($Sh7{+%ReEm95S%,#%]}6Urow5.q,^'Ls𭥏lyRTlylnl7$f_ŒAH vJc8z%:PBJhnP2zES\jM*Ffd j0DGl|`mIƶX(ZBYһ'2^Ymc]U^ɏv{r^9[r-"GMS Wo*3ɸ6xv`(K:A7$gED2"="qS9Z0-Ys6@5=,<Bp/ڵ)ʗ7QzPpz ",iPΥ-rLR#1sĺ#ZZk8j Xr픯-)1jRӞ3A"DiH,[q^#-CBQ ILq!pqg-Ix{,trΫ9r; Wau~Ap'~Wct>~HU[EΌQҁ6Sы}Wz^۹=]˷AAD{ځIBhy/DVB80 }āRj)Zn}&{5)B[~GgW|.AiEޟuBx;8?G@L8` <` ˰1CJ|x"2U,P d=0) *9极g.04^JEHI% VKޟInGΒ< >`ɻ2nZPbc/M휋G7 Z0 \z2 ਇ!QB0{ixg)KC9RyFv'Eu99NwB BPHX $kM;A|G h89?!D7b'=T; +{i TK혢0Qj$^{Z{wI+e]4-GC25NFQ̮g7 C#[MyYaø3bo$P\+ǵ L7m`V1n@$>{;o8-CQƕxfް[‹Qy'gQ1.\|w6;nKV݆{{f97,؁Rh⢄qYדq]!sLUbGdɤ糣siw 9bLG+pr_lu`w'sKj:.Sʒ,~TSF'72cuB( Ai=߸*1#sR(_?o:Ͽ;M$л)%y7lԫUOuDWF*xw(.@;sYTaH~L=H'(>wǿuY1{ycJ3 {Ja }@8Ϣ Nl1q4,=(!KI7$+4ګst˓>vwAWiΥ$2^v2H=P b<7Kaq0*Fx1{iJT!wf0|դ>|k$sSIw >O8:'O> wvaiCRg L@Qz*̱a:_j'=x[4=Ǜ7Y.\̪ 3҉ 44u:GltKy3b,M!]+52/E1Zk;bKl_/0 ʉwU'm#dX1%g܄·hQCi_t(6(5WG^x*nW+RWYjHQWT KIϪ){9Gm/:&5[rB.$CeTHnЦewɳnn꽇{3K - U0]RW'{S_}T/4w~R[L+nVg'AYo*ȎӉbk}Nxˠc;g޺à 1,&՟Qpy5?_ЃWj6GTL~//D !%<3Wқas Ef!(7|rC9k+@ p_-IAs fdBHʙkZ6" ,*e !wvuƻ{b{%A%YּV9,2ZOVYEfy:y  ȃ(q_V!C9prLۀ2iU5ug:/HSِ烃c7> 2M2/{cyŏW?f7ϱ`9{A̔^(ֱ{/~ƪ){'0֓*bk8ag ڑzAݑc ܈< :۞<*ɂFR>3p[GUZ+ZvD=q5khQ% ~qgRs@PI9t[lqDA\ pFђBw -7gϗXGX"ec%!9K"i%\_JeˆۏmuK#Խ.[EЪ\{ 8# XKEA!H>1Lh)?C^ *t54s$KPy05֓> ؽdx $&9G/b$f*P|ê w:Kksr zJbչl⅀90ʢ-vf2sX٢N>6"*{ke%[E[mŠi9@ˁVޗ'Z\l_'QڊA9j\}N6(=> 4qD 8ZS@܎{JѪf [Wl G4DYKDU~佮vvuTW>zRrD01#b]U*!h]v+"{cT:hDoX~͵g{1fj;Ж=+Z'!Ox@Vֶ˫XW[ݼ}ѝ.Go;o\y=\]^sO^6yQwK 6.s(o./6f?ss 
ITצy+&ۋWv-,g>N{vHE6Xٻ:n[FnԢ`v.ru.fU;'LavQ~]^2[z6]ۃ-=EĚx*d_=I0p%X> .Ē˵KiZ3$h1y}\l.DSfi0~$a^ #(֞k@Z=嚯ʿV=5+ZU|e${tv?ZSysL"ϫv'i-g9;h9{@ORs8)*=]kD#+9:o]BI#'8N{V `&XubYt0bنҘXeY#j~텮tPze!X] `~@/tВ;]5njErtH{[Xe0{ʔ狷3Ngl/ƗJ03EdN^f7_Ϯ.')̲(g5{ ۏ{%٪nh$Σsiࣤi#FnE셮ڍ#DWGHW@]5싮E ]5nJ]}1t;OAt5+ap0j<5.t]=VZ)] `Cjp-BW -JP"؉2#ZnզjhqUCii#+TwDWuCW CcUCDWHW\?vW;]xSnҺWۏ$ kJT좠uyI[kb6%?٠rV`?HykmG~ ?\^φVϟ c?-ب?j+ZBW -UCQ;Е:+LϻWw󪺡tj(7P&:""ÜKuji*mdbV^2@n9fk~DWKo$%oRkbF_Kzwsyu_V*X8鲻lx/;+9-{mvOk=^?Wf_*,?v]]rӆce|w/fܬ>}ܑ}@Ngǒnf O~Ŋ䧈pTED#9-UF_o\Zd*M/׋U㵓_.]WzsvӌQF6w:{ߓ9QA"$ɏ~W$"b6_̾]憛wm _{>}9^ܿx=u~@}w'I\CD -uCa(0~^7z+Aa/hGӉ%t5;T߂a(MtXշ;;+jpBW - ZJ'lM7tz ]i GOW u]!] [iv ]5Е5ZDWGHWR#ɅG}Hh],3SQ RS-Y Ҳ:iygS }P3e4X&iphZchSFs !}ֽUT/t;vj( ':BΦxx_tb7ZNW %DWGHW)0FBW c뉮HmDWuSOCyw=YuG4-مnv!4yZ;5-_]Uo[G 7%}CGW7 %@WvǪSQ7t2BW`;]5cOYJ; ;+y0=tAhQUC9f]= ]GO+:xApAQ/tКGW %M1ճ={ݱd!ũdgS 0HtG& ̟ .v7yOAi#?JkOz0n~J]!]Y5vD[ ]5WNW c+G؍u(Δ9E'!A뤿uٰiYvet74R7QeCy4-(WHӞ=ٞJο}OT)h~ͮ4z/܎w'Ji=6%sXP=䑵qMtXՃ_?jvCWh ]5NW J~m/?0ȽUC骡d5ҕ"J[㺡Zci#+oXwDWi\{vcj' t7r9SB sK?5fF}&0a8B9OaCj䑟';stF^ZKFz{$(7hx`#F\JuCWW녮-j;vD]']y+;+?]I8Z%6N a{Xif/gMbae_X|nՅ__Et땈rmPˋoqU?K޻%}U.Y(z_e 7!;!w/x޴חEp>cY_ako/GyxVxo[wx7~d>xKo)g]?c{+[_k5qϖbZ|믯XPrDc$T/HP?0ޏ-~8NBx16͗9S>[P3t#/$zZwV8Z|%uv3nA=\F_} FrL_ Xor :֨X  R>M G޻%?v34b^Wg2\:4˟VE Q"_obHJ1T nt*r b (2Z!djzC]zѱ!"bT@B2FdnӤɅNj X6>Tt~`i#BOLVcSJUY B\3pvNšCU\’PDoEAivh])P-Qdђ92 h#`jPc`uQۭheha68Ԑ7.$fKPT2IQqTCQ2Oi1lIYX ,1fdYĤkQ":LT\S AYm{,ĘQe ىAZ)n6:NeP&:b*a%@p(ZLȵzWWYRR~|<$vl  <0JhX1>߄"+|H& NȚXy-Fv^ 8Ѕb-Λ[!@NXR(*2YaBr! yGOШ@UF7-  |i]&~-~Z<QHM*lC˒ZvLlLj28)DU6$DNPhb$Ȭ2n+^dR2UvC1DEx 5T2'P!a袳bEʆE`# )DCIEfʼ0AdD4.IhfXOwf`E[x@](DtP$>p?1^bcJlgMqxjA D0L剭6y!ȐG9ioUeئ 8"d+K VtesA ԙy?/tV̈KQED$' GcEm*<`]]-Wi.]L%u-Uv`c{2]` mӇ֢IIJ%@uP< * YiV2Mt%ڇAhRU! 0阊'8$;ŗ9*XqAyjN$\Pd"U̫ FMʴL5}t\%$0AɣJA&ԭ.HGfp6GUS Y_"bΊf lZA]֊ANE"d}noD+Y w]zʠAUbF =>A(Az{ZT}zT [bcW,XB|oyE!8ij)K:)HrwWHG +U@ &ezBE5FHmr3u#; ?yT=&XW (]IڈJ5(Z54)hc#7 _HY 6)jK3e2 b2 D hs@:'/,9fB_{)Dlajl}5Z!@6(-VT@9 Z6i }BL iq#f0[JFDyStE8 8%-Cɵ+z(Э_7F<A4D;j̦rܛa4DX%d,z ]ñLRh~ 厞Z f'@;^w2aϭ>zM~ᛯ&PHn~#;i'"[B7~m(v@Pb;(v@Pb;(v@Pb;(v@Pb;(v@Pb;(v@P;n/wӛxŏsZjJw=,o7 o.z,G_Ïɰ&x [+h [fOް;1ˆc .JGDW ]R%b,tEh>u"2]=CX;yƘֆS+BX]q*|fׇX<=]'tuhBvr> U`zhKV#+h1"_Zk?r72]=RA9+GDWx5n,tEh>u"Q0]=CGDW ]ܠBWVGS+BX]=G22yVF1td5/ 1x/sx5/P'IC~q !pvDu[]N˲ 5wwAWBnƸ|?/4pWӋ ibq7kb6to UrΩVwMԮj# Ip.2!J1 lh'j,Оd*P*xF]0O0>4LWϑ\tU#+~< `0c+BTCLWχuΟF;7\^kڻv 3'{2XXІx, ^DfgQ+kFSZwSv2p ǡ]_"Qtu`zAap/(: SmxJ+N]tЮˏWY?"O ZS+B#3+E2#+%}>ŸJЕ_?"`CW׍$O5~೤+#CT /׾^\bU+v8/mսQ/LǬ>]>ZΤ躘j>E_<"LOKeih(ӭ./f"##Fnoʒs׫:E+!d[n&)xŬ]bnU]٘_GEy&G# _7C?vM3E.hoK6ihQch^ʠyUO  Uq,tEh?"狟%]Y!oS+khsc3.ۿ [ّ`[siqR^پ/hwGm}$Ex_~8dKuFo߷{bl6*,>!ƂUn6߸H:?v5yV O7\MvigB-.;cQ4o]N3ߜ8נ^"϶tf8xe{u|?,A^rq4oN˛+{p}uֻyCm{['߶N?ѿ~jwIrgw5|?lng r9gg}>yn:h/cA*!J#m*M[ʘdM״ g9}1=7Ǖ9%E\C ds$K:Ŕ>m6rٹ'C}On[xi cB#q+:oudRU^PS7l~=(Ki8uOnKW|7mZ,WmMݯ_PѴ[.݁M<-[ Ym|c9Fwlyة?4 *-rc/uyu_oE?ݓq/cVr|ZA ? 
hP]r~ \HY J^T#Dh.t (5I [0;]D*cwww=e% !=B9Z*m{: K;fS9މUHjMRimS1*zk64MpE"K(н::<OC0tCA!ɿ{Უ[z?Ǔ^_]̷mQ.ʽr{Q67s_@M]O_x8YW}"ׂ`D3j'H*sų }D~)7yїmw 4w1t^jxC+޹ \՘e9VStKkz]B:c ^uۧ{otڻ)²Vmë` cVm2[P|q"'Tz@_o?N~7?/wGOFގ;d,nݩoΛ?oFѮw[#ILNZ~ID?FQېхP,v'?#ó{xut:W#׏>߼w7e~k/dȦT\͙f׍5i4'>ӨOnќL9T.')J35;QtɹA)EW4٧T:ӨP77&ߕ륊N%LA7pEzـ}~9#Z'ǥvjj}&o_}rgpq}ߢ*kt-Ɔ"͘kJzSkknF/[7UjKjslqpH([Ocx)EiJ$p htK<C$r4ιqIVg%}6AZ8Rպs߉Ia?=FӼa׽Q7 6phn|&f)]Y>xpȲ&GrH 4ڨ *ah"w5yވ`d1ٱpg0Ď[[r}ܒc{q˽R[F$ L 3F (U㘬D0Yd F[~ܒ~/›7X~~<}7קB뜈&GeKǜZG"lbb`1{Z=3!*#JL 6 dEt,Xrԇ5IBI'HG<מKL1 CѪu~%^Մc3Gu_2XD&٨+71~!t-aSbVݯFYZºo3qW`O)Yd鰅U@UdRqru J/E,kUJ1jAcm*c@5uwr :g'zk-<.8>|ί崈qz/i`(W1QKFC41xeڣhcS%§NO mK͌$6K51niI4:YLb1.CTaqH[LH2/t [2M}?9 ?eL^;LG`gCЗla&Ki/G}Z%N7B`U %V%g_@:h"ѥ5XYFלH[R=2VY jPQ_TҪuAZ~3WUW:k bׁ ]miiJHkmR_|D LAAFq&4NvzTV!b)Eѣ N=5w *'&GG]y~űq9O 7~Ow|3%t 2DLkLBXR! CT.Ny(}!icY7V]鎪~-5Foy&~#aR$?gk> &j^Ȩ-oP$Out r d]GeWy"ܣKV;T"\upY%&Qĝz@ F4H,g%L+>rc焰(5MF:Knedzz2:K6%}`F9tJR$[(VIfvyM L2]ޞ6‡4+0y7f>9{KiR#=)MM3UZN(s uO&sI⧒֘c\B$ 2|K4Jc`)63 ~tTbA۫ƲMR|Vmcܞ1}޷cGN)O}lv f (=釆ֶ,s pTyc~ns>ob=TGo` Ӳ{EZR'K!TVXkkpYIǑ1Ԋ4A{t 5"2-8yu# %@PJh :ka6eFsI##EH d}ܣCH4C\RVxneET(I2=cR$>$FT rXH2JX;щz驖]I)у3BelT>GlE'i`)A$JCTk`RgJ˴}$Wr$ݵ(%K.ޛߤᆱ_nb$Oza{U&3c(۟g?EjhzXRa}d~Wk{n/5Dk5]yvC^k/t|lߕ#N6څޛy+og=gm4~f&Wio9DIM䟸֌)[zs1:*e⾜ j? G^?c9w 曛qdYNvY d&Mz^0__&~g$ѥwC&f Y|xmߍh։Yf,7le0̲[ 44<xsvkej'y7v!AXֵ^i}O[lYn3N\}O`8oHaikA+84;jHٜXY;3:kie4Q~x4S)-]x"rfL:,.FSzWÙxl@m8fz'k8[cr~F1\*Xy0 ccW4Yv} Pݴ1z zyBc`4 _L ?蕕 .Iv[HU&izuHݘnj2`9585>Op2Zjӫ!S z,<𯙣ޫc]zch:5 NȆ;=g8Q< ܊,[׸8 r& ,nGU#m 82 !XQ5gy>;:,[MNỷPɣ#*w_~ U2o{i{k-'&|ut;=RiCr$LȐ#I! J$a"Js>fY Rsao[RY XjE4-,bgz$ͥ mUh>g`9'{0"eȘ.Bf$n$Yi/;JkH&r4581jGT)'jOvq̕|dOYeh;ŭ_)Tډ Xy)@yԥ(G XbM$!Zұ=u!3&l;lC4]kN)=~x`YEC\LQdW>kE?>5 }2?L+=&j9Lxey09. zܫCzПA Oa#][\i7b Q*0R1 [8w_U7Ph7>& [@2V "9T`ƹleP;;y xu(}I1E&$}^~SJ<ɲӝO-wz*~?OG`5Rg'1BG-bND:(TpyҤ v \;0 R\F$ĐBd T4 D@-0":ό6 DV+TXGY-}gCP]-'2~Zk;폙#x'kxlfZk1G崷);)0p  ˥XRrcxFؙ<_Q2CD&W(tȐ0Kd )HHԱ<0(rnuL<b8yfqDGp̃FG`;q^dHBI D$ tZ_M돯1)}b?jԃ@Qmfwn,j>]h& 71iG'nJHnޠ) :fL{ݙ}BӪZh=4l0mų7пjG#Fr<ү/[7`x:kQh4`0RKpLLRX!ySډP>Nu)[/켭:KǓja*5҈+ADdi@ 6NNm &kQ#sD{oʹpm.U(ey,6W?bBN:`8b8"䤊;PRrɱX]U8nX1E&z)ÔqN!F aͥ 6J فfmc:h`3z\∅PH4|/JG+alZFFD,\vnpALL!Ї4Wu%)@)Sa 4YP}v5jKG @6Q xbjM`DYgD~/)Uh5gbuiVI-8"Emcܺ^z7^ /<-'؆yN{?58=St;͈p3Wtf|O z}ϷMv/"yCVdzgz-hyCIKLi R ҀJ %TFbY_x_xgnvd?oD)NT j-U$BfGLB'R p/-&Ύ sPN"J7WyV=sM>pvz< UYM}mmgzR!yBͮ.X[ѼW!.v޻%jw^r>LF-}t~Fwg{n,RžՖzK\mǟ_kns2அ:s/|j J%\=RJj:e\ͮ)pE\L1WY\%\,%\AsE,lxWY\A\ei캹R^4WLp+1W .'XU㮛,\As^e;kM0їŦCP0|Z Mo4"Von|o,KaҰr)Q_px knJ)/(OIXNFDP`+e\?*QY%*DeUWV*QgJeUJTV*QZeNgq  >4pr4iY\)e;-KkT׷@JDN{iR%N*QY%*DeUJTV*QY%*DeU_:86] Γ(Yi8O 'y^‹HxBcp%*:uk HlDzM@ s:z7ˋհ:Q~Vq~ypmw:#zKd%W4y#>XX8DL!œNX`"3F+@A&Qu]<E\ʌc+bo0ʫ@\aˋ !6 >?zw? >/OpC,_kF[o;ރl67f2vXFOפ;ٟ=2!;ѝe:.ąJ/3Uí3 _]øz+C~H19tPM8&1R{ēHsɐIbotgI< T2bA=x,2LOטle3`w*, ִ v'G7&#aV 3AWrQ|w=!ZHT$X[aR "e)bbLbHV} 3c)N$&A*Ko8&wsy+(VsƍJF f H 0,fN}Lg? HQ:{e)@[5S +5$K Lޜj=d/@plG+OЍuQ;W.s}p nn|%e!鶅gj`@9ƔdBIx$e*c p^2Vk^@.jk#5}Y{Ka'Eϊ='+Dtnk&cQpMsg(|&vH^\&QԩY's1AbX܃pDKavb<Ԭ.% awؒA7w>j\6ny`},I\n˅-_n^wt H.Wy!`% @>cv4NrD8 UhOddsdCdd qNy }F €W(9gH&VE\F3ccIB6eLD&$9G' _,a1qB;NQ.~i{% |TP.K.fT@`CO%or|*Y}kTT>7ȧ^(abow֐G],4e1j$*-*V. 
g)CǠ2H%$$ -w2QjZ>bjij Z(.R#c&ˉa1q`' 7Q~ ay0.Ƚ?4ԜORPsj,)V)gTʤ6G)r^ Qʸ{FpE&o 6_I }fd/_@a5/wNjƿc R狴ֈvFܦEk?,p\H8!A<0\a1uRybE8OIvOtH%I-904KRҔRˆ#Ա %gs Y11/M2uPq I-i5D,=\8S,&Ύ+g=kHt?V q}8k\7Bw|@bv͒wN':|9Q3@zQEÙ1+`B ;9xf{zpn/'Dc8sը%NV^d&UؖnWJgZ?pfJɴlqhWe3fS 7㊊,,%6D]K1@qJܣ6FPm4Ak8K d13  [mlIDװ%Ph:-_  (BJ<1YdRY0-XhUQf[rlIӱS%?yXT(==&%U*EZH4,)bn,!Fy#'aԹ)V`,Ơɼ7'vt}ȵ6"׻e n@ؕ?77󅿍V\%(iTJIzXQ`-xRb߿+S2K/|r&{CEqv5vq[]dqXP7nʢzvqi ^1Lϙ uTxifu sfApep[JzKrq ?ȝmsh+92K >0&(a7QX쐳/O{ Wr5Rjq 1*ʹs 2tĖ*B(vKV+HYH[MA3;B78:~H@v`uQ2s,#YSd)+țRhBߞaE`y8?&瞣xPx#9h3Q#5"ʭutRN. ǰ97 z+z >)7_^ZFļ.j)9K<^/3;2 #ƘC NG?j"->unU^V]uV~ &Gt-/:(TonyґiOiD ΕHȐ\QxecbVMRuLcP>ׄ`4d8!RkknVeJ/'ȸ_XMVypT,\%$C֦߷1DRFi t"b L1|!ًL#V+FF`~rR'#ä)9Oá}{*nFL8` <0a%5ZK2O3Yƹ *X?耡G.{B@s@x(qLrB"%H$X-˓JCYZ!ל4yw]TT͵%6WS}XC-#s-^iuEqH(+J~P4G t AZKC9RyFz'Eu99NwBDr!(p$P `= 'ϟ9!X7b'`{văA;^ܓH!ZjaW]B~ aZm(IFԝ-N L"[җ$uUwaø3bo$P\L8ׂpPݔYŸkTȂVv0~w_1(t-&;f%{fŻ_UZgRl ۙ~/_Ig37,ٱR=hYC4ʏ?gedYusu?.4~H2WVNN!g*I3m@Aތfa/{1&[}rUYs'5޼yW)m9Za%D&,7K?ΩկK/{^ Ќ\^-{<|<e$ީx<=n0We7{KY.ðr]#ul<{22 d`Bb~̳x6 ;A9GG)^hЂkm/14lŁ.z.E_U&*ea 0p.Wxz+2G3Pt3|x}%[*sS πFqtNU[|S\RIn<AJ7b6Lóֹ\=׸o:ӧ)2܄=܄?̬qqFEVqemD=&+#k}i<|-PGoHvyPNfQq {_L>i{4{6aW]}7PTFŖ:y7m`!bn:.Te턷1MHE&(H&\nhY,3ﴻ\`hc[u+:ᱷ( *'5do\Y2G^2_ e~Rke>G=VIΫZ'uOqM")|P̦˱q&0,>%KsF Vz3uLl@bσz졂^lP KO89kK$^40J&4H CJQhv:hvʄEm[0 ^ز9q/q 쟽4\]X٦K tԦ%V,ѿun9dS ZUb:}U._fZ!zUi\eWŭޙKӕ-ߵJ t7CXxSp@y},+̃cca#G1eQZ>}֊ֻ+xWUb{<Ȏ"[kY8ސTY{*+}&=r|]!!C]dk9!yZ&(6 Łf1C<f *"ZHD HXGy"Eb&y$tViǬQS @ش}˖ȹeBםW% =_^$nd6[!77qLܫCmU7ۄkJf[8 ǐ6%ˡ,0@9bHךιb3X1H"s@+y3mfh4DJNUHd 1f`g+!ʓH&oeqwǒFҁ0. +}ja Ӡaix0 pCz,(V9gXX!t$_\Hnjz6$aR5F9텍8I7 L:/ kN:$;AjWvqV^Y/IxD; 07)ǎHUǤ#]aYΊVPy>>!l0ngn0zw?^N;d¥le ++8KIVܻihi*vE[M>45}vuݗ5sn BfR0HߍA[eR.Uׅ-U !|HxX:tu7S%)]2lXTne9JыPOYU~If'͕TrY+AL Vޅ+- E^9J7 +7u+g?ӓΠy|g "y18/,%BBrjs, QΗi*@sĚb7񇓫u,Ǝhܼ,q`4juQ1AN\ȇ'要ƪK|6').BaI6L,FϵߵOl"(ï\_`(3 #(8a^o/3YKz`BAoϧƅ8 8tq1Q}XJxY75u^2QXƭJ/^,|vQ_8]5$olLKe(g ,qR6(e'hfK  ]U._J⍯[E7>ޗ+)΋K\Yx:UNl kd7/-dy{; uJsE`*Vh.yU-V*5%#n1cYNbbtrAĞKLpQ W?ufrFH$,;=ԧ(-e2a9遣Q W~4ūyHxzy?|p~,@+?bŽ聹qŹ5\yEIHe+ez#$g$xߥ[Qr{:zV{%.)LS"!ܻIBW~֗#U'/YXbC(j^RIWm8?jf6<3OGE=h ;M+lu j6ePs3jqܘ]5i5< ?>>QLZw|?՟9̰cRO/a[`zDn@3Ac3FW]Hw%dDK!Y1FD a'4F20/( KGtN!-0K* !1ehg<UX- /cƌFMFSѲ9\{cQIwza)Ŵ%xj auN8 q%.Xu j`Hj` O> ̂HB9[)r- jklp̨| V4v02܍'{*->9T!NZR$\i.'gqPg2ʭY˚4&EkR](Tʷ='f_ŒAH zJc8z%:C!0Q6*[N$Gk&#y=n#7 ,3Fَ*-ٷbh yxv6O\S2pa0Ȉ-XTɫ`o2x(861ԮI !{D%$ؤB:`>%)l;4LBDN5r#/wڲeԖjwƨI& >HFe+=6bFL7ʌ[m#8!1 4Cv4H/'H PCD=r`TiLx94T []QC7bqJ5pNLKcmdqŜ1P z&y9 PDR6v5Dp J``(QHL=_L8cu4(RD/Hmَ\.!{숋7r| [*B `r<18؃#CBQ Vk2Ca[ܱ+vd괭rU&?E} 'Yi  Y;+Y;QY+㬭%\+Ė}+aWC?=\ݍGpu'r O Ww>\ݍJN  wpc:$ !\ԡU:΍}+X\}=pE$\+rGCD-#WJ: jLLKr0pUP ;\%*iW_#\1v5 eVXT{V㏋_gF)m:- !µ,)Ѹ \Ve#pr.`9ϦaKpSs5[c0`dLs<ˊwZ.rkBaK5yH{ɱ6$Qs-9˝R 3>̫ }$./@.>▱'b)*\*0?z]rWE؈AH^d0<OYJЧ6Id=C{gxry ɡvF;ZuwUWU}w*TUuR+eR1EAZaL^`4GRȣx4 .oM8Uh%Ah^Gõ\`󶘫V4>P.͕DD}˥*5*խIoZeUBɺx4W\1L7Ӧ\=^jly֠r&_5>qap:@tsZWdQ<5G9w0tԀ`iJ뭐@Mi=nNp%k5 N(I7i%i4bmr`ݞISsвL(e*?sE*v=#L|9'~äNyIhs9d [GN0W3Wv=TS ᶘt%+%)&k\%\% P2D;s%6+,oJpYkV+@);s"Zk_3 |gހg~_Noj@}*M#B idDg2$r2u-bQZ4~&fk:?Z3?_ɉ X^r`c\bb7W K4Wd֘+{]^nT3W/\I*h֘+Ia4m1W`ts(\ sŵ(֤%<!6`lI+6QlV iZdT'pl 6U*F0hg dp~2NB@!@!9oa13/ kq\*wM_A< n\*l/*H~ܝF'9򴝂5y.z^ -@cܢ"77;߸Y*/9؁/aJ=Pȭ)3o{߯.Rtg!G$*C򪼚۷7N^φJL.'S2*lZJaM~ɘ"_Z P7(C޷()ZuX4]hWҶʶkuRn'G =)s{[5?I3P{S&5 Ѧsiw\F+v=#$nN{>4f5l:=\\c)IV \%\%J4\JݴF;s̕,o4[o$l G; roUvo*l)^ UΠX}ӊظj6enˊc3P Ol|sɏ*Nrܼbig:V cixxL()3NZݱ4JNmjy8vvm* UwXY%"i LpYkքVR@H%1;YOcz@Ko$zļtuv}<TL6nŮ{I>{ejO0 Gv;,_ʵxѺIv=\Ҭ{yI>Ju={_urh;z4 "h\0Mh<Σy9 6Z֘+.$VY"hs~JNK4WB"FiU/V6~]?,יc`4EO|>g5=L{_gmujoVM҇'*垒avY3,--pwPVCkaFŃnITf0. 
s=+42E>6k^fVzs7&MLJ+C#@r4|A#1eξ,*P,=(ySZk?+N~}0}X@+Fo7q0z8yJEjŇJ6?&g ʛ!X6u TkwhioZ~u^P(w߬qa6j4R)FTYBF+d D5FOa!'KKCW ``>92j!Mw2Pt?wp1p(joF pxaR[ER@4E N&x)$HRϟJK61zH4d4tJv]l%Ҽ0+0wJҐ1ЬͅKYW~1#'S|fq.g{\OHj8|]Gᬤ}'}zgֺ ( ΥȀ7Pȱ&Ztvq0H.-B2捡)y` Z ?IM)kZ]ZkJ kYͺR㺲tMh"9s {@= \ 4; lDli4 KrDSlK  PəJbGdTaXK"RkqQ25 D4V ,gU.Yu6K 1C#JB`xaxLSKD5@XtA:| 2[ *gP})}SzY=qr7]Pܝ:yVrB?~|slx 1swi;Ĺ,jb3Qn2W(BЇb_OeoLU?Pb1'"Bdx%Pa5FTb4/J8ʧ zpK=1x j!,zh5k:^hmzDh12~2xA8P TZ*ݰSpYsB0-SN i/^W)պ-9'@lw{IȜԸtNO]Y1__א"`2ħsh# [0 p vgIDBPuN)swJhNAy'VS$O0:õ Jc%C (R3s|\,c )y—JKOVw6u"6xHc> 6Ȉ~?=a6 +N?%6v|xZ4W_=Z5nXx&{aI Tlr1" 8TNmaƑ1QOkyb]ZQ-{n]BZik/Y| _i?΅W^O//\~~=sb){mٚ'm UiR9I6A0u>_yå O=cF@1gz4‚S/u]_  A,}27ڭ\FǻPZV7Q߅H4(1'J6!=~9}ྯ닾ix~\K{'--* n#v3/J ū3$Q(Eb T")GL ʛnh=/2p<:lNWн]O" hWi`=3YβRPx =S`Jr2,x(Qjq"p /1F X lP&qxwӧ/ʽxgEѹkVh4G5hLr.j5H0`f$ZIM ;ĬO1(M5Y@S5X0VٚIMmpmԤµjq`jەS+itg0~Yx3,ZJcǎG]bG~<;$93q)j@k<Θ$e'YF8bϙ!vJ+HWƭ # ZAF,΄h|`O %k`EP: d#̉"Y+;L8M(MjEҜZ)"H!fvX,tB&0ѿp۟zu,r |J>:LS-p:v쟿gQp30`hf)1H-h;ߞ7&[hL/-b"bf=R8r x)aR"(R+’E*LmղJ~zڒ8+[v lIK-p~y\n+\(j4h /z"pco4>omMn{%1a%$]=$%S*EZH4@Q 7IP͓AjC,YA# ̼P4#qY#יc:7^Ů]QQPUk3g֊?ܨ,x&drU=R# M(*|(*?=5V*&?l~pEŹ;c1vgXOXP7iEMq09RD6ƃa9=g21)"<`up 8xV&x$y|tx0{ۥж WҐ'덦t'qq JX,c,q'/ok [sZ~_ѵkEr@i * ʾBDƅVX`#@Y7u)\>/'H=JH7>}t:%ۼ >Rq S}BϞ pQ]rqg,l㞗D_ΕEVr 4Jkͬq+ZeiYi)h]WmA`9ےTFgOji U lkF[iR=JRe50!|4Y pRǒhΎ=Rf5 Wġi.DYDJp%FƆTfNg= طz(jAa oɮňǏ #n-0l=5-|}oQ]˳i~g,ʕե6JsEu U>lQE'`עMֻ1A&$)J&T<I!h)=7<$3^ҥ@89J`>Ds)#ɼ*0^i/dU7m\Ns@bd THXCv/Wqy3#x s7b?LeoTɎ:vggΡ/2^t1ܠ7kV3]^X[3=yp|7W+V/ ;K^C q]}+Eۿ}Yb8{exUCMyũGAEިCy3Q M#A읜ḋh88p[/ ,(zeޮ8;tM KxnEsۋA0˘a߯|YԵ!U7A˝~s|jEoTQWH&h^FRԯ/^Ir dЧ61 d޴rsL^ <|5m:eW-|3v M~YVS*(2]Y{􋛃|)ozLvyW~#[}vrp8GsJGdbA0 Mۑz(gX4yy?ƀ)% ^V=J;U-Imk~k 4q] {'ty6ޖWɕl ؚ:&2,⯈T%$n`[#>wC=ˣ(lId6Dc eFK@\l0CrFU\,vZD#Ŏ7hdBa"Xr}g'ZZn !566}+<磹*q}uh)ߖu6Oκ/$Ґf:Yx%szّ!`*5ΎXJwlMMm3w2sI 5J[縊\$@#XF- pĩ61٬!HA(jc\qpxFuG"F2M!ˉƆgB4=es)Wd|\q2#:'幗n-ts5QKļƷz~LeYh/pYrmd'9d)#7UΙ1r!`RNWj=Kn y.p]+0"SJ@B0ŽJJSSI8K+JrDD9GG1NJθ}WYᡙuhpyy(!(-R'QDJ@ANCe-?OHvkiF@C gmR`8#$ȕ'ef."hIz\$(=:y),$ UQ2% q\-&V)npR0o_'7/7:⟯|~x}^u)qYdW1 X.}䮱yנ>]˦O򛗹}5M>~/~+ͦ<{=8G"zM|vkz?w`_(VԆav0,.Tқ~vͣRK'h5d~װ< uեM4z1d֪>DXP y^E#gk7^ֆW /,'nr/Z%r;\;HE+kzw_,@QBCl@I\i`xbc!5/tlXlD?{a讘X'5XW!{A6-|?Sp81>h-dmbr1*~[w7Jow bmI=7VYmok9ō~*#mpfiv@YyۡJ`Rߜ_ηfCoY8yZ)*q]*nQ1|y/u𥳐__LOg\c+Rם^ Oo+O- I{K C{~t"!bɥ'^5YS,߈2?ryƯqP;,SԋAM̈́IVBvkZ}s;"c=f;"E4hŨ,``*!)uZq3-"o\.+yGo z 6B~=# pmDCӞ 5rtT.]\3]Sٰ#zеAԪIa4kIJjۯ낶2L.k[ȡEQS,契rE %LWO±CQ~_ _h2b Mgl{:;U/Mg̙'AfSblТ GG,]w.)MK;EY"/y3swΙ;V4rGMZ5[!,E-ջ,Nz_z}qz7 X\|6 6y3lϴ_yQ7 ¯oM{&˹_Rs9e/ qL2΅T3̅*<$HcuƏq\cN8" V\-s(lȨ@9T@jl2&n276e1E,U,Qt2 "ߌdm!,(ʕT.bܤS9UВP1J%6p4ۢo[ _؛Jɺ=//N9jfv~v9r/d#R-C/e6 Mq0t4<ؐC#CFdH,:bdd.f!檳HH ad&nIց S!hP"f"D(V#mEࣵް!Qh)ZG5i)#@0v}f;j|qѬWcIet8sL)*.B 8ɘƞ28i"x}):Dzr\3g.~ے3(*S SA#g*$l3eNHY<&lR%8e9_aw>db;w=bxIpTُ#~jz>Ti%8.WpbFzh'%_|WMEUz^^_=N/.K_o ެZ_($5|wy3̚{J)-72VFwC_jl>߇}''/6¿p/08% H{ꡛK?xڵc]WZ^=QF;q`)r.[4|ɟ/?ݼ6575|Oq7my -pmW/Ż7:.߂˫&< LG`T7jrWM7ST:23W+Ă 6\A! ZerMŌ#Õ ذWkWP+=xf\ۦssv>$f_\/Np65vyZ :]\^}xoG_Y^t],Ozu~Q xO.&%E60j ׼X\rvu6Ko+w31?TQX.o֗f9g$4^*TٗYnۑ`:tHoh r?0Mѽd4M-tf4٬iF`U]/Z+'?Tqu2H-VUOt?ނJz*WOtno5+ܝ?:pT$3ħJ!ڻv,iεӎPLIoj:J;c1YB> ة~&M5{USk'?fTS@2lzRC{AD0Ͻ0'O9WfC4) WMُ&Q/Z~Sc3WR#\5ϾcArYt]5MWM%WG+- &-Ukl/ZqTNm}Փ-޴6NBy[gt}Rw(\5n~Jog\!O{xcqNnpqf\#Q3pr-}K'њ;_ru̫ v<\-\/njy5fQtvVkS F&\5ӟT Wz#\A0=ʇU઩5DTθ޳K? 
F?g};H%9-\peg\=#\A\5OHjjăT:1q%;UL7\\5JLWMNF0xp<+:UIMtԪGWMWGv q7IV0LKL9YK¨) ҠΓ:PBέyR52+ws_{EMQ˷:yVMY BQV,U/@K|L G:*b"rq1>F^&n0NK*zTFGt&+:{Aɕ઩zj*팫cĕJ u&Wu3> ̓Tyq Փ\DKI Smڱ2$pHe/LOs' XR?s'>V0ӏ*ʩu7cI0픲d?cvMs+sZ;MN~iv@'_o2J55/D`5hÃp~\?(oQm}5E=\,kXs(]=KE'_"~/Dzum\­rc^gaWDIZppͻ=wS+k?}cgݍ?0uW|noq6n{j?\oXfo[j7o?';~ (۾y?PE y~@4aV>n>͟g?1o>!^@7]y?sb,syeZb 9HmN/A.vq*MYF2ѳQ"P2$ syʿHxCowjW5U?&|%\-.rV.#k$jjD؊V2[ G5R0e6rt2GKB I)oZjb2e%+لlRħ+aA2)g m.=l-2 Cl$%'90*DUޑ͘jm)XtEGVR1'4'BVR5S1xFiE:C 9]~!>[U딣TV-,.LZEDRX"D}`!{]h*!X3b̪5GDtL8|M1al wX4ٳJ!(f(6tʶ~BT&aBI1V4#S;$X\L ^Uȵ:[65(Ip6pN b wԺFaK:|Xd d %ɐ}GPǠ}2E_$xbmb1+ZJdE!RΉWe;Q䪧~j}_KnL&Ov7v8u .íK8=_4HM"SZjiS01jp)"j)ba!>q`"$MA@"^OdHJI9G uRTaK P>@ AFk`E„W@ fIPp!QC!#Um&)xM'i6A1Ų-ЄƮY"mgu(~s(SRElgTq8- eG5Hǒ5<ت6VdP*Re9ѷPޤOlۑҠE\ZFo(jٲsL)*.A0`bbpJڬ7̹( (Z&X@ri+w%#`KFT5Ȕ1VN\%, U XzMBP^p%S,`9GȠ8fW_*"Ҕ-EDF2Bc馳gW,)ҩq#zU|F$+M;L!r% 0. !{v@ CSB !,LD:ZW&SfHKUF1J !+ֶw?K91 ۔|spȤ`4;d ~_eA*< ɴdd7g:%\pn W,Qk=) oPSHHTzȮL+VJQQ.% PDSIلJFx0MVFXEp J1J a+miVUWuDM tjUmnk1.EUpN 1_+R Н ~Ns?a tmXsL-y nkmȲЗz?gv@1bdjkTDJ3SM"%R/,21lV}SU䄨G[-.d:l:HK/(yi{`Vio9@G:+*Z@Ln]Hh)WVSc2(3(v5n7V[/ի~a=(9@i@2QӲB Q`rQR0*k]y EduX:+*@DUĝfE <-`9%^V0No7Vx`VNF#}Z9TX]R ("N'PG]rP1@>rhlu ^HP}=`,Ck6yЎ2*vp d`: (N6Gmƪ ڜ׽bFP4F,%im< INGhαKXT5I8KNF|d)e$bzc"Õ, Ƃy+ s.pYTe(! cH>K4КmN?Hcy!gѝb,X#5P7o5 RY\ץ ^`V[ W[ 3$]$y4$UBe%Z`&n7b1wy^ڭ'0:MM,<['̹2*sWp śC|̨C G'[؊"PD W}nO.DN!Fu/V0 .TuBJ8RŢ4xT%#>^ATXd t.1#dІE8zV"fmH@jp^P/6'Ek&O D2)I1OH2˷u9(V8/?샩.5ʾ(ѫZi#(5rfF8ae%3f@hreu]u0"#,ySd9 LnC3FR [0?x @p4niӫφbUUtD|٥P#Hh%0"%. 踜&J#:KekT]];g7^u#q;܋'|+`7⎶J٦beP)z/.'?{-KE%jjkrМt,spíl%;,vTjrVj1 Sgyގ'0M/>K%kT]7휪;gü-jblZQD$ˍ49tU5u$%ד%:v‹1nFJ /p`+tv>-'U6OQyߡlܥ; =2'x6ei$7vC)r?KX&WlM)x%55~\1>ʗC\zĥ~.ȷNhq4_%HQC,Xde<{H޹ F'wyl<ucxa:<յKG\g;ϦHL.]t?lі.0|2:{_]`˛t pZt=U׬ۈ.Zp?.7Fa+3;=tz^~X8壮c,"/^;TteVY~h]JV`£.|;uH[r qً̚{ϬoY#4>䶩*6'%QiUP[QnG JF{V-!AZ+.ԣ!A9is~ۯUZ$vq*A6Q΄3[⧇㩇> Xxdi[@V~:6a.Gd +%ȔNj j[2dkߝi/iXr8m#ybs` 9trH+8]^?AMNxo0Q1.4Hkf; XY&:_=Ϲw`̪hcpI cZ+$BX 1TesJ3 ^wv)vcwi*`<_9(Łn%7k(vloH&߼5b|Cy?7ɝq 'qUÕ60>,p60uȒ,3E8]Qطy62^zQ Ļ.X=GxRO D{Ï}F{ďs;~i .Hp}xr:<_q]c|C>&͎YA_}NmVU\lJ@˾WT9s>tHQ:S]99kqsܛ9GFKq,ǧɬnk{_y NXY43n3.3Ηkm]\,Z+ KZ7CL7+L>͙dy=~2cn=Ёٵm-͸<ьcZG.7OfuM_iFc}Vis1&kZ'9ZԵKkS*u*oW6_}v֭pk]R‚B>Yo܎Rf?MyrfbͷW)x0i_N[rs$D|j#K.}*4e# )vh3)[jߣNlJ;[dv|ט8  ںJhu1I"C}JILE$ KhW_LLRWS,p!Xʢ㔌'/ܔ.뻢M6<3͜;N.˼ӕj05()T7MM7w볰 :Ʒ-[ed3-#LRX*+@$ nBpS5:H1e w%s(xκ+e]AZ`Fj&@aU*y,Q0DUv 7;-2__y.|`˪NE:<\OιO袲2Z̍R_X }^-ƉJӗ*Mw $cQZ9)Uů64(*JTM"֒M+/@%)|b6)&YtrNI;Qwm2Zo.K͓ȭdP_ڹBMLQyMI7iLp<`"mI#`+v,avډ $)6XG}*7 V_aC8WQi[Ȧ*U:y׉|P7OUjxzҵjw0x0}yFV4B1WAKO:".g93<_Q5dvKop8;+Q-.P Q6ֲˏv7?/?fX5P 4\beHtܰب\,v tZ?e!vBnZQjd2.͢շq߫ ff=u}~n .2i'* v1۬;Q\nۣ `f@z[mm ̅z5#:G=\le6Vv鱒_gD qvV fbYmm?w݅䍏c;z"jo:QJgXl8D;!O$Mj>DO(FAe*dCp,i1T>$9 qS%x p3j|;k\Q,E٣F6i}]+Rɽ/fyvaWA\4_[[)[B'ioIFwS)r2/E9&(c\D@D S* ',?N{bM@!`%&%tEwJTid,6XNV)g+EwBU   /Fm̒\&Uaok~>_yPf?墲봰8?#BǪ%۬љ~QE&0k@9Sy `x=#p(Ξd%'5<Twm@ JVyP(" h{PsN\zk'-yh6> c}~w]61@Um']t׏?|HgG B;k$QO2*W^PmQ+rI'LgREJA!"7G`8#T\K*A! 
C4P`"gAhɘa<1ڨ,)ŢZ EոT{G;[nk08GI>;,,vwso v`tŷZ-GtZ@9mL$ii ĥAjA,CsnHdW-w/|?'Zcjj H1E>wh,Xz~9WW:OQF.Fɳx:¥xoxOfo_aS q`#*.d\ >N?v Yɴ]ŏyʫ޹l:7x:ϙD6.w~2;Eh:{<HяG\hږl(͆nHlZu΅76YWYnnhqFnqx}t: 7nqA#druc˝O2W垎\J wM!H:s@n2WǍ~2\4nd:͕/FU7]e[_AY{y?rJpd@<iwak:nC{s7K#dą;pӍvLj:uqޱxmMW0 ?vvN5|!uZ(-ԵO{.:6pv=fdbM e<h]uVOxĵ.PJ$cҔR$F| ̅hP6$;$EyϫV/xz1g&h(7>'0*GT:F2Ei rA/S`Ho^UG)FO=KnYXÚ(J9zt1E3 F(܀}ݖh&l4?Sg2W_\-!<Ap@ dV=d dVru$2G5pfgxYNTVrGPJs)l$`%Z9íelrh:"$ D QN8e%#O!N .^}Cx+U7n!{ ᾐ˻2A?`9c(}ZnUi6+u-sb,I^KP5RNFeEXՄj*&B޺~h<#!O>mWpVpT'NH2@Km s@xJGL,m `B ڙrg ɂf^j˵$"i%hap(ӕvozGqY~@#|SEOxnOĸĺ%kdm*DJ:R&fbՒ*mNK q\p`Qh_nq-"ȑS)40˴PLFHX-xb&49$6nPW+5@MrR=|fp80sɲ.Iy+2PFygjg>yqed@HZ7:!( Q喐rhd3ܹQ.-Qt,HW/VBI[t4~*st}|݅_=r})wB{[3t잮]ZrJytemdR2!etw*6 :"zfZ%r zhfɎVwawtͣ} k-q_18Ta\aw.HsX^pF=ţck4)DWrQZt6R];NѵE)GjFFQuIu?X~KOib1\S `(#$du6; Lo>P"TӐ-rV69R7@c\n8jSւ"8Q O +'%gm%k=ruF0QjU 7ǖgpV0zys}Q:DґUmppv$j$wm elGKUIދ| 2e{V)QP$V"gOUWj㈰FbjЎՖ$GX]@ڽ`;{Awnۉ uNy Q qB(0VsLU T#UtFznl $I@e$gSG'LIExj5qGh9 Law AzJ;j==;>   Sj-@ L}>P%K KH!:L`s*B[ b2;gUYGpTT/gL?rݶx]^?i&y6JP4|HMd3B9x`Oa){hIj1\-"蛠,Vii^^Bu}%^\ ĆUUߍ/}x~nﺻ 0|ukhx.qG13cѾk~hm}*/3_})NTzU(WX ԣ鱍yx4v} l.Z2׍˲#5rI>]>?ӣIneO9ؕTKϒfkk]o#l9N2(%ͯ3@~ I)Y-!NL WDI-dZ}M$&I5$+k 縷@CB &FN@x-Y^PTR*1`b2 hׄ3;Ig EJ{)kZNkCLGʭQ'he^i ^xWD{C8I& D[aRD87ZBcHW}3)N&.A*>Қ8 d7'Gיch5p0^f-6s'h pœյ6׎0' :y8oy^as,"vUIgO#j}7XOovT8u9/[Lm xo qC ^Ƞ~|2qA[R±!Zh -(5}+0p ,6c"7_^bwy>=c\4V{oI|`/-~7G?=shRլW{D -4XB1n/>9KW #4e0,FDN*$9bzR?e.>4Q廃ۦjFvp߬)ZRT8r׷Lqلfu*Sr}Ӓ>7?侀,PDU( V*ZBPNZBP&KU( V*ZBPh BUEFz`ќ_ i)vZbi)vZq,NKR;-NKR;-N lb )N -NKR;-NKR9B= muꙏt,n3E*76/B oLf}g}}}Y40޿d+R* bT&,(ÁDVHu mm"(yYU? ]t֧٭~|\ӛ0wq09RA0,''2bFXp\ ouhC[=WנGWP֟eE,Ņْ bh'P`Ndqѳ%ډ,Nd)Ux0j흋˼j{fSN(j6*PE(a*\Gb9e"PZIZiv)g YZڥ;;^P3/.sGޅkr[b_Om;o u}<(cO1P1@CvL@3C 6͙P 82G9Km&;xhcX5i1F/>Ye&yf/Իph7K$cJxrV{LkcH{<e`lD sJme3a I?\sزڒCs|[-7~$gFf4=Dz?n  P h Ek"t.ux8✃;9;9;9gs5s#uF'Ⴃi#RKҌ1Cm7}sc,,Ng;-{3H\/O#!nw#Qj{uy6f>%軾O:FU^T`d4&j0@C^X&#nS*)Xj#z6%NLx-Nmy&3x!ڥyf1~w+|eSF{.)x[,6}rIQɶܢ(0A!T_0HU2iiͼ5k,wF9٦XnTr-Y- "<eD%\ # 8EJ:ƍܷ[^|4,M}{i7O^2RD7Q)Hé0RH† D4$ÅB1^6[`A>_9Bu`2 ûV괹鸣SE{#sYHW[%0eK! 
/RYZN.4\%K_e 2_?IAHC sM:J.ϹB<YE-Aଶx.D>sD.tkX]v5$_ҔwmG_ס_-vt1}ypɰ̟EgUV@s9!XrRaJlR g9K16>$6scoU$.G1"SJB0|,$m@x gD8" N(*i!{5Y'*s~BᡀD!DTEt&dbBˊoE_,MO42WPB9cR`.qQq $FP"($ Wޚe"=p"-#k E.HH$`\PB$ )1_0!1_< I=mH} VTuï+߹8/o}s7ߟ۟O e{|N 3ﺿD.}ϣMs 4-j:oǹU5mvyIjatdq ~_CgԗNjLxD&حBxɳ/vCƷة8 u=bT\,qwß_{6`CT6˛_U=::xq8QnB4B]W~>/s-qO|7s dGԾs_TZy?x3?&777W:8C"D*Dfj:*DW?oosnx /3|]daӋvu,l~&p2#4*AC5Y4iJBu]8?}A?F~#|0ݍizRCXc!ZBЛs)*wzc2.FlKȄwbTH9Hn1C$$R\;$Nh gGԡ1uyOM5w r\XĈ+F(ۈ.uZ D/ 4WTߥ<jsQ ZL#ϝ-Jʡ8Ú5G<<y `UiJN#0Jf f^]vo4aޕ6#ʧ=B|9v's&w?̦\8MdI%xSo$e9۴Mtm&htz;s5LiޣPwBf pW^+ e S᭲FmKPiQ\jaj\ǾRF{{󽂶ߊM9Ħl&f}ǧT23'NnŲ , `Vc`\AK.-be$XLjA݊}R_݆{b-nX2>nTPqr&geLJF^WEQs&JH +I0>6|1pbŀcDTvB#mѥEi^:s i^Ri)CMK2(fh0k5f,`\8FMFSRgjtڗ|C\qz8jo jU)Ŵ%x0 q 4 sCl(=Ccɭ aE],Pa O> ̂HB9[)r jgpggiŎfb 4di3K÷A9].Gφo{peI஍M6e;ZJάjY{CăST0-f`;#RFGPqLBJh+d>ZҪ#'8>Xz93@kd쌜؝6YWqS,c!NpEQzYҌ7 s<1y@_Bi&هɸZ~-XTɑ`o|^3ɸ6xv`(uktRE#nHƞQk % 6N0+D`!Ot`ZD"rRu؝lL̾vgqSԖP{`4KzN^Hi@Sn_kS_=6bFL7ʌ[#8!1 4Ô%8$` ȗ$(!9<#c}>%PAøwFn8=@#$yQ2cTtMio5v*z18=>ķwoC+e*>oi`i8o6m[SQ7!AND{0wXRDc̿{*b#`9GʭV"2$cj1IOYBoĥnOR?Ηm$)ONOO# &VhEX ˰`c)QFpF#8W!@N!r0 s@[3u%IN"XIS@`):I6e{68Ai }4}_&rk;fv(Ҳy%gq`e #Ѡ5 +@( zABU(3O73[,z6Bπ<_Dϑ3D4 =.cq*E iȅ<@Y[`I@ dzF7t֍If%XFvW $@1E=aՐۺ;_> }˪WBqH.umٍnbkّ'n*]m jޅWD 5D7FP\ׂpPYŸkTȂ]E#efA|JƖ37>k*,G/.W0.<qjU e'3,ؑR#ͧ7ůT.M34l#/kDn:C8=!.7Q}y1t:N1IK-h8%/6'> MɨldnÓ&W ?l(ߍߔ3d͕X!X|rFRԁ//Q :Sc&&eOK{7ۨt¶wq^#/(9xkƋxx<a1?EBF[;[r4M==MSyB+n:l|n[ |}qG9Ѝם:V9|4@v;9x؝4ťؐg}CrgMrr&d O>?qtNTԟ[}ST-!K,E0џͱZkMwu@$ܘ@䪇 /=tޙ~tynw/}Puĵ$vbfw8ϑz?c'l{^|QpJ@S1WzON7Sg~͓ {S|?j_R rP&;hR7]_jr\4j=n7@Wv6S٧e͋%a]$*?q7NGi>) XHJ[:,sceϵڻ[jjGݛ 0_7vwLRs= Z{UcP'M ԪMT}=-Nsn]7%hzբxS˕WݮQƬBCx-w+9 `"n0R*]i7 lV=NU_wjG.=yw BH4\ %GQeS %3paQIicƏxR*hgq޻ {U3C)a@Q9Zu#=ɸZ.Οq0>r`KxJ\2Oқas EfwC< ]@џ,d7bӡ.mI r.D SG%BRt@^0z[G2AiU"PƽuH ÌМ]늜}e\ N-O筪>1֋>y¼Sxm j}VZvKJ̠>ݕ<糤J@i5"{fd76eqtÓ9LJI]zu+pp:[ߊ`g ^&'ERc|m u;*ςl{ DsrWGfDJ1kfOF~i` .Yh e_Цeqsz]w[n~PftЙwn^|Z6 >pyV7{ խ, `z($Qb3#=I}w6M*;I~6L^qiER^c )SBBT \sl&r:iTfCQ{0rHRIp:TJmc}J!tE NLwl&wF.kovĮ)k3cdM= B78Żo~D67kiqaĹL2`II;bD-9RÍ"d(crvI QM)0jZFe)A$XSu3UdH vDəS{UZIHy2fP{D)=Uƈ4%H0[\L)[@`SAz$c[_5[6Ņqb+, { BwXg=z$?"U}QB!d2RG'JF;aliڻM֣M '8uJ H5VFόRBERr'?f?ӝMG eZ30{-#chnDt"9{jM9о~MߘL~<4f޿V^:N`>bnO֦e]s؛WNfu/*w[myv~Xf[kKwn~M~F|}뎆 ڱ;[ kV |xe߃ݻN#[ <gl ZkJȳ0x ?P< ݳ0QY(H\%:J*~(pLWJzp%FҿoP.mlLOFŷkd$70 G^=io}Z, 9\܃K LGD=GHBȋo J"`:{80 t!LkAi`E\СUUbhg(ij>8\]`45%A2 puM* H+W7z9c  \%r>J2wJT :c+(0V&4${(7BYT-0$9}~x>ճM <&*GH(dƨR@K;xv53r갶PW99g2}Xhfn.?mέkPTs,/ b0X)]o7Wr! 4mkѤ0ȒO7JC&YakwEΒ̐xTT-ONJOƊu)z T Pt5S*i 8mpiBCs!M|s)%*%t\#тW͉۶Z|45?l(է4C 6HI(J&4B-!9w ]N %р?)Ԇsz?|E4-H:"]Q&P'WRg!d@#D.G{A˸>0ɕ"j  @̃iCNˬ)ɢ3J}հ"U+(?Rwh3QBWɒ$ ]Pj8!)|\maJ$)xJ΃VM ~HKcH^)sPT/6O*ӭ+s;Է~{~qud>֫g~I=y=OQ4nOoǡlenvlwK3Oo6gGO/>zkOGӔz~>^W#(ljBGWItg֨ɗe*|Etn,/cxlڤM}F\6^p N SSwU k^ɞMp"yf+NNSۅkNNO.4/ڿ~`9N 0JyN J1|z^@ ~vdz|3nϿ_&aj qb&r^v4g8Vځ%ه-wVSU@Sm(SkP w׫뵪[I˚gp:qku4Xp)oPkx(6fG;9<94C|Kհ5UG&h^FR5qKՙ,^köy:p^g'''xߎj]ѕgN|ppƋ\ oN׾seպepnE`& bҬdӜMsYuy~%lQP7oI{-^a`f73 [v]pܒfӅ d:kE6Y*;ٶ3a\ 1k"tG| mw^;emv2%J]#J4|wYn>8ͥIĤ( F!  iķЁYvN[)3? \ %gَ!4:h0RK$B佥VHGv^!K;0BW `ۇF{ƅszLە}cʢJM ;UpUKRNqu?QqC wQqk.CkG@3ͧ{x;8dkȵ"/p8&{ lu}OQײQעsQײC$e' IZ*p"͞1L2E#8QR*.D 3DI]ZsyY;,vM{-PebGOu42!pq,<҂F0䩰[m71olӇ.5m*ݴЊlZ Uܬ^T`jI_i(^q$Pcאf:90$ J.2sC{SanWU;*Nl/JmMm0gv2sIB s"*W.(=[F- s.PmbF*sIQH#Fus.D&B沕BƉZf1rl| es+;m>9dJ~"G~V{npiooW.n(ot{LaZ.hOQ,*vIz ڹ5C0% ZrU£ %YPckk/ xd2 ɈL)G)*)MјJYZQ#":J.29zQF'+? *kq ?6ny48KQ<=BW'QD JNQ!J? QNe݋M0J>k#&H&pBj 1 (AR׼DTv9A O+"F3;`I.g!qP)+j&tsWEY+JuisP6$UqsÓ8__>=~2}g?)? 
qt'\X)Vw]s t-KuٷrkJ~_ef3K= >~:}t|vk:od2Ḓ8T"eY0/*jG .S1^/&6,ƩZox>ZS=[=6_<>zr)hZ_Q^rll>0>y4N\?`9 N俨RHEݲ}BJm> > z<9{r1p5f';p:kIOp'争fHB5LNr]&m%Boɵfu:sc{w7DąY&̀_ef( FzYcx1Jۣ >UҨM6Ljl1hF:`nt&g<,..B ~]^g{Z*qz2 I8_'|z:?Z\xnH^[KuaDX}m1uxjsСo~Y\MlXq/Ͳ?$,_omi5}8k\3iB~tc9i^(LٍawؽL/"E4 hЊQ!Y q[g"!BQ'wB+ ^8[dԃ{ͼpSω^8=#/QI#mD:j/?A*r)bkb; ; 9GzХAרUqDNǧ#Q[ / x 9Զ95R` )(J(q f]ܿbwBwv{JpM&KC%'eAyIiAqeKIѰ@MШ&rdsgJ@mrpk1YXu,FΖ&э!wwWz n c4cI 瀚A XT)pD/ʨ2 Q"i 24>"8dŸH7NѾ0#vForzNON;z]j|ޞݰxZӆ᚜m֬QuKuX]|\+[r2җH\G@SE ODaܸ)BjBH-,  ')BB 6X[J؄ Us;2U큕Zb,{OL7?͚:E'_󙟏fy``GhJ%PQ4dH^z(zr")ktr0dcFs \*/I9[%6'&D҅9#Ú.:vEmUUڽ{9^*N $γ\0MhClc.Rbt`%I˖IB*ц2jЊ<*//YdKTcf"օ$hTGema<,FxXm ]AbcWDԅQ#⍻Xsf+*>Bd$eK]J{ *RG jF hG$/eqy QFI![E3K{#vD<;8qq&E^gU#.RRM=.޸sșq,/|D+P84Qh@ MG RsVDkq>p3xXju슇,}nT/eV-: gm(jg-zǏ ~Rž6\/4$'6؃!tZQvuP1}Ҵb 7P\'MpC`ĔONȘH ^:.{L@'+7ټT+P(I!F R0|blq.kvv4%C"AHșhU8*aF *VX%#r7"G{+` sJw8zf5 h*3tDqD47$qNb"4Y\6ϋvz9P7HPy9"1E4㉋hU6M"PeʝOM\jqy$ZրH@ŧ{r,:cdyڗ[ake AvGd {f٬vj}Hw, @%>00M^ZG=QϤsQϤCQϤQ&Ľ&Rbd$qCS)ZHHhCل9}bt}g$n3e/-M/M;_y#i\QV'6+BWCg ]mo9+B>b>b w_ɑ8+Z%nIm?QSlXT,jutj̿fۻ9]tfG5i6RZQ~w=6y]%.=i]ٖVW9#{mg0فz P]jٛme3-p'Dj,kxd**{}/ Эl_`rk?x|{M޽zO/sGQִ ʃJy6\<yhQÎ캁zOE&hFfj.#-5cc8k'ƻm諷KPE:ϴ웵=mMvCK|˪L][ mwЮvz;y&xm{&&ruՋZ],C&7ןr}K7Z+4eYIeoBmﻸo ɷzWZ۫{a⫫=f]7[UXPA;v.YG]2T;tpH]CeGǽS1( u190|z>5q#1ymAnJމK2t)BCQ qar*ӧ629r=}ȓ텠ej6'.hr!O'Ԯ7ܤyL6M9=cW|7^Lg7ӎ: 6tJ`>T_m_ ÖUxIϐ-d9UkGVQʣ ý3&.LnJi9kޭu0!b,F.+ͻ\wť:m/<pjPM?n§2k%;HhbMհT},4ی~Z#s iWz=yLd6]*7YmN?>.mP\THT=Dm~,.ί_ٻ0iMBfo{%S $UYm8s'pkx Ii#Jy[eE"0 Q%=3KKeӈFi9)(PdS2gՆ(fAd-YMht96e&P)ͅ'$ *-4K_x6 9E%c&omӚ8 lK/ [M:2m|b &O}:O)79#0y6$)u IJ$)4JHBe?cn78{߯XK jmxIYɌk2ʣCfc쓰/DA 8YGB9Xy J$"+ܕ JA/MtNStFzG&n"GO w{Ξ)T#!^x<%j~n&;8pU?6 cbe;`BN Mmv&gr- 1(! \ ?N0zs!s^4;Z=D(%LöRh'*15+%sJm\_{5oc~S,遻q;#LRwd92vғXpɒ%pgrZCdD@Űx:* –[ =4l)T\A`dkqUx!%$}2&R\@v!zdޚ C#|Rc˫_"sY===xRZ[2D9 0N P2Ip6:Z$3}ҹ?#&Kwʥ9^5E _]=1Vd OY ,"Jh KYt5;]Va"x0~5~ 7MA\;iV:T'2S?#V:` AVhAY (Y ~BV4*}HC?krۺ™R*00Sa5ӗ%aˀ} W?p'(.v#Fr0v'Jz_&-S-^`23 v[+-> rCbWP,+TKPEq^0qp\aK)cX+LkCI$|ߤȋK*[clo.1 ^RΈ\Y'MFKIbU<~%n'[ǾV^.>/?~pgn'sok.*(E 'X^{m+DDDNg`r0AFM("ԣ`֕},ȒtvH7 {ZK'q*yA&T. 
u/X̽6Pښ4CN%}q2'_tw*аFhjN1Wu$(&^d2z\hXRԑԌvҎ,:6 Xijl]ynAh'ڪ-Πx"w *'͡cY#OIih^J+0(hq /xHyLN1j^#^W " œGddh2z h@cL͓ X{Y{X{"/ GvXlH$E]*Q7AoQx8Xs(z-ǪC#6'ӳ-*W$7|,'ݹ9щ#%ψz,<:\ υzO=GF깟zW4]i꛼5KmM-&gy8w 0LL-'/cہ 䀲 *IFO`3Z[vTUQ6qe*%p; lc~!h-@v{lMT >x3%>&R케zyu\x_!PA;v.LEmHe7Ɲ= 8AK _m%R!%U )R"%JZe4>< ;`8>fVU%0|Rb0Rp}-[JLĽ+P$eAAbHui#,+ n Q)Pw1e |FmSap$3Lfpc3q́>.TD\<`GȽw2>$='u:"K:YEPGɠE6F.|( 6m`xW\YK8Kgɉ^GFs[pgnKgPT̫j=Y&T4q9h(=|$f˞9'<]ڄ7s[k6HrOG} Cc37 ƣRr"upjɡ q_j&^޽{knvΨGùҰ,MurH5ڨEPE(Rkx%Bd* _5z95Ʃ7b[=H>&21G."$P*1 1YX;Bd =~朚9ɧÚWOSOSƯj=Oe7a_V s"zldJ.vsbhVEYi`̄NVg^٤log` ڻys٬.[wtj+"gi[zH#:= o,.RJT #T3,UdeRTնEn_Ǣ=4B2Wr[C5v)Ͻa$۵}㶺tsMŢO[ۅGkTu VB /dkOQ" .x9ZԨ[!kgv!uCSǡ0sdfF,Z5[NO*H1Ŋ0LT,X%%q'W.WTz,fW}6өv1-|VR=T6&eT &-ND *#eVcE]J\5}8ol@3IHix Nt<ޙ8krl%ܳ^(uJU1l*׋B"Y$J%>}g4R%sR&H.b Z+\ːJ:Vܠ "bs4SsNf5WiTmr htum8:Ƣ h5YDrdI,Qݾk" 햱c:%&)KpbI9LQIpc!(%rm,9Qr1 PC:A[QAѠIXJIl-pkqސhH ԇ2UY (319kx/AIA3H(ÅBeI r2uP:}?e<86cpa޾OA$%!=s3As囹i`pUWM~oJk|~w]r3g֞aLiϾo'D\\Wu{tX/ŁABbxG,J\*Ѥ )−hſF.RCwtX_MWػwz|7"ZhK$˰UB&Y/(?ڪ, 'o&5c4~&giWWԑfLi^ϖ-6K4㨔1eڮ* >Z|Ƭ͗ocIguv QkHaL_^eOGщw]:rMCL7ߕwT$B/_ez9 ., '8uqGŠs^\72>]H쨚̈́CjS==xvGmzV5+g~$g8.IaikA/VO͕o+_~l'KsGGz j}|_&8Iyׇ6o/3^i/KX)K+Enr\?eXo;^;N飞 XG޵"R6#VodcXVCodF .H PccW4Y졧6>uU[ϟ8Ae{ Ec`28hWVj*D{c$y/_AdiNN+cs:љN,j4i\ #jg&:n}\^&tKz !L>&Uax֏TqrHC*GPSbeJqԜqy%;P;r ˆ,Cy/C,DD!f1E שHĂ\4%0xmJ#d#Vgr.{gYBu:v&ΚFޥ;-PO'>/!dƠ^ b*H70tRuD&%҇7:5ԄL[A" 6$ п%/ " = f&tls8R5&gOl\*\ LO6}W{$/Woe1wӺݵ[TfBGjHqpS)*)<cL N~LPƸ"dJ`DQXiet1{όt4 L0RLf]>:E8%L 52v&؝XW)PXU,ITx+24'4|M.&7Thpr[H$="g %J%T7}Q.ȸ$t sQ0^( J^a’혱'6U9̂6#vg܎~4ssWP3uǨ{=ȉ{(i#/5׽/h96gSIxaJ@#1d K iD&(rSfb,cav]L&ӌpw;ӏMtGoŒh+9>RB vgL >̮iH2I\C.n=Dkx EI"pr}0TR\C2!riAuDKgQc5JK6E:Eb7j|r{P)k'R 2$bc)@~#Q'TIQ"6 z\. v1[$q&t\kee\27Ap}cE?lzKc4P%R |?xS q,=΋ /?_h2K^uwg>Eٮ}:/'[K8K\~~6'42b7]Ԓ3neS vęŃ:~_6Y}@qC&C\ tn.?<,Lsй1Ɔ.kCsU g~vg =g}j`l/we2ހl6,6.Fg㙨'9xк6zEcTi) D|BLelʗrc҂sׯ8J'}I}c҅.*[\i7D0FE(xT> [8T_,M_JTy"SS8:7>Ny_J<=jg2;I&;lzAW l6 j>0̤34bioSv,KS`.'+)@"a)xcg%ܐ'GL \AVGC}eY%XHD#z*ܤGE?/9{dxDGpR̃FG`v ٽȐjm9gI譾3߽#d.+lv] ǡ8C u'kfwo.,Bv/uŀۋ?NTQN9]^"K`"30:-1EqN,cFd-S&B$-_;d ~oΧ?G[sF *ɬTm26IM2O+ոٌ)CR^YEP1DƇntd֯?|<;ߖ.nmI?-^~z^k{˰^e|}]ONxwݛҴ@kd?W˳͹uZcrN}^].Vp~y=|>G 3'7+bR/~{ѡ/Wи?%g#HTl:SJvn׾r:|,Kp`>\s/o^BZ=]k];/R_vqZҍVY6+lW7} oMooY0Pi[ƀ^*Jp֩/8^eՋtyMp= ư2,gmq ]*̾UpYmWqwUm/xYk1wɷuɸ|F  UcaZ  Y>wx b2|={Ǜ45ֿraaېm6a: j6cޜIۗvpzk pp-#&*7P~=R|Uk̚& -),!ۋ77۠ʛ;W<$"AS|l6j b9}Nq6KdGI1OM 8ϗO'a3^8>Av{ty>ox%6bfc i3 v;CkluwcJngMuɻ#^1fq> vMB<ĭt+[A#r6} ](=ydRIcF׍W\mYk(D(k± ު-3,}~!Hb  W#,x׈ڃUIW*ΔeAWNB7`^swX6/2O=X۠ w%#>(7'"NRSz>_L_\~M[y!4f̕G UwLb i#ͺ6Ϻͺ6d:g!Vm%CUsij%\G2J|*/<&ȢIF&WîKk_K_rW{Ӏl/*9T0^ VWF6Cz3M/Ny> yc å຀?u ;韋7t1wZ;;0mZ'&WO_K' lJ2+!TV6CZc_ (ڜV6lV6V=1Y 1ĺiuՋsw]&.]IWV=Wik q+:v]nt5] -+O]y(]䚊Vu'] PWRk( 5KEWH|J`pRwu_Ɠv:)k=[iU힞a V#8Pi{a=tZ> t\vF^({7 UW(شe.RS y-_sk=Λ'"V6{~?|0帠x7UcfhR(e7sDr6dǗ6r Ri^+1'?t\kRTj148`:T<=NK&W%y 70D)J% [{ղ" wɹd_YˢZE$roKDе*{(J;- Ȍh1*#,[H)l pD v VuxIWѕ1ZYIHW5'+ĵ(vq$] GW8ͮ:֝x|B͋LMSӗg;m.\3-Ϙ쟿Y,Up4'P9C%ی3ϴIiOߣxMH iLqfig%^it!.(5"A0R*t5@]ykc? q t} (=OGW#mJW:`[tՏ֟h/JdCW"تNXqNFW{'h]WHiEu%Q{KFW+}u+4*j) ]>v]!el󓓮DWе؝Ǚpt~1&p^rW5Y_ho@4^NQ#r*+۰+5 R], ؝|zI?\Kf*z&b~"H!^?7Tws#TtLEt3u2{2GՈ+5]!ֱ )K3[+%<,⌻3eG2bˍ/@Nq5H+B_@b[5.iI4홶; q+6v]!M|t%z9b~~Ep-?ѪQ2IWV=KMHWs+U>v]!M*tNrf i]WHuu%w^{ q :IWԕ⮻*N "?30.ɃЫuL>׋N:z#Oԫff)f!&!90hW02#?BJF4Chagtr2B\ h]2/R_ ;2\! 
iMJ%!+WyZĮ+m~rxe8˘:cFRvov isqMk)=Aj+e%A0;# q-A0@Oltz5b2>~OD{:vEQȖW=tz*J`n]!cTtuB&] PWB .!]!edtP fbRrt5@]IYK̐:T?ZicR!ꪝKТuҖuԦY'1&h U<l8'Z2w'։دH݉!^?IgkWW$]!TtŮ+>jntX)OFWktR9'm |'7zsc:;Nwq->:)};1'g(2B\:BZ݉6Pz3D]YaYJj2w'ֹu!r]9g\)?r9?r;-i6t Z*FZc4P:i/OJB` q+\Į+mr=tez;bR9fN~ԏz6\?ZsU?JYD+tuls$ٙNWl]!5Ttu9thb(Bazw,Ǯ㹕ĦtFb븠2CSMGҍ!Ĵޖ +r@;Ւ+'XOW'+9S&v]!IW5WtN&ZNEW@kXB4L]Yx8|ƜI?s{'*^n'+2ܧ]^FZ}RZ4=@MYJX<V?\# i]^@)Y>rV1$ӔU/`~pU'ʃՏDЕK:VZqBB`o p 3TtŮ+,jzQN!/C+m6ϲK &ZD6mê|0Qogm>FA7 [V'X{o+|ó[uo8ڤ0ʰ Ť*D9}s9n__Aa/W4݆Ho«nhMz#0xu9_ >tŷU̍SzN<mi+8S;qgd 4mM_ͻoqUuN{vo,#:F eyg!jGsCy^)-b|:r _}]E/ ,뼪|=+5b9` |6gPqeRh8Y+QpSx5/эq-,~o9rA>}D_pq-UX!|"U-IQ1'B&4AO/Ƨ\|8o iVE8USI{k%xN4dhc9\2'%d ~u}d SavnE);IM -ڒBGZGWH }fN^Dڇ6G;%Xr!xQ+QЗֶڔ^*QT,tRO.E#OAQ ׄO#{XMI1!Ku/Xg]!(dGhOM6deGގ`f#_f| E Uk\4`)[ݫ N6h giy9xC9+`BI @dr*j]v' E˳ǚژ[]Jq).tlhhXWoT\s ^-FU5E+GR6fa=gAFm̡  "ٻVLU:I=RqA1)vx X-V-tC*D4`Ԭ4 3*0 eid, |Der HM$P W2T j,K8JH&{J V+dWVb@,AnTzCZq2Fͷϡ0S @ mn;clQȌ<B*nYy$^Vt0wD `L `iƗ)B`@YLu>J@ZAAN:2]:Xi65lD]`#)#ͤ=kyGD)J6NB+mebEK]VkD5=() ElDFLcrtVDlZ"5ȲW"ZJ( ert  ^ B9TE5_Ƞ aP.n3RTY7ICQŘ@QE!i$% !pB6}fC< mg⍡\\_0O">}xyS ު]-^ƺ L}6=Dfxs:p~>m:@GV*&{H\!iZTUFB1L: !'`Ge&:#.3(Z R|$ LjyU,C Y0~cP&>yt_V 5ȤuՅ:@vdm޺,:?jPJ4%wV4n.w`E^QH"NdviP'7 B`~?E bX0j`R ,*1"@jt01RuJ` t@ QYj0Q3j hNmS; fnfRHkҬUg(jfҼLFALPha3BBvZuZ"O{ *U.&ZxZ*;k*Y[Tڠ@~I'X©rltEiaӨ/zBLօĸ~)AH 8QNZ{*(=uGm(6tqHU׈4JC` Ic6ՔWn˥!DL0r( ;fAjnZ%T$~BN Zf !K yۨO]ghuE#BPޙ#B3 ՠn_.n/n^֛0jGC44C Abw~Bq[ "=@n(>QM)?{rX|K{qŗˍl[J9[l7itqA;ŦI7wqym16zanqjuƑ^ׯΞ=ʟ gh[zWt޴m$?Zvnж-xjśtq6m]_II=DAAo?*k;''^8i8(d'1:ٽq; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@G ȄjNN Zpqqs Ah~1:$sb'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N vH)Ӌ8h;b'Q:PT{N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b':,jY9(Ņ886bXJvӞFv@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; zHZwKjJ׷ ~vf0 1'㒇s1.Q1.;|Pƥc0.5YĪѕu;9"Z΅NW;#+#w[^_иk1*-k^8)%/Q#E/VK*]Q!F,tqZoZћo %]HwVhTw~v+MJkgSZ %_D>J{fDW|\={Z%c+\D . @]~OwZ-)  ?i3._(cټE۬VdnF-6wWǞ=dEexqߞܷf3?yr :]:{ǡyAU|]EzVnFtE| ps+B銌̈]\;ma:2B;#fDWngCWWŹPn[CW2Z~gsͲ~h7+uM fcz16/ɇ4}bZe+ \iӭͰ|}?];]8~QG4wyau뗴zФ)"8tE!2*>`=GyQ~<rʒ3FhQJ*P4ҳw_ҭ>}bZ膘j1vעqbuC+ll*\*B¡W4@i+'bs9" (LWGHWcgDW[:]\BWVbBiY]%]y?gƝ`V~NB4ϲ5iPZޅp4e.̈XJ\>-~aCoũPR?=]=OiphyzJx8]mt>C/V̈8ΈoI$2%+ЕRwǤtEc ]+BԂJ~EQ_j$\-BWC+B$ҕQFp_ޟЏzL@j < Z'\? _J#c̟IsajD[D>|4탼y]BN%yشvtu]›汻ǨNDߞϧ'?=>n>׎R A+rsm6/^qՏG;ҿ h0r}vԛ/-~c|ڜ|?Z>;ý_V/nl6Whz//.> LBaJݳo.g=?%_ @|xo}#auYKMf%[v\ę19ٻ6$ewq%yw/v8AK )dgʤHS#r('gUuWW!e|v..?'㥻)^K4MRՋQU$勿NΜTB0F1e *sapTIg q 8]LO\"e\]K %-{ե{n21(.;DHTT bc>td=5N+_<̔9r˯[r?Яe^_w$f:'FQYj1'֑`1h0˘=M̞Z@JL 6 dEt,X">T5IBI'HG<מKL1 c%:ەx岩je.”D&^lV-+FëM]L md&#w}"*5roMG0LUm:SUupTU'-zwݟ.5iԻE2޶gI~K{C/QfםڳhWlL| _;684<^w&K:W.~n,6>nYlrP9N%&0Z!f+V:>ET68䙋KÓ]HڲWٚ}< QƶztEa惲_LN*7~I CyZ: VU1;OoOsG{1kh>>{ёGwtw˺UBA;i3#ɫH[Z.e(Nց.dz,.u),Y[Y! X[cL{M 4tgَRn[eqclx5!ɼ fX]yRMϞ߲ m='牠)G q н.}/^kd) ŨO>t!)~EV΁1vI߂6*f"Ā!\$7Wi궧y急.}&]lQKѷGdz:-N ݧ`T ~]aoJʜxM?s)zoS'v޽Oi蛱ia8J_nٗM׿65zr}!Pf4tTzTOdhR^w/;⨯G[Vu߮#:(n;3]iICZ+bD ]LAAJq&4<'{O<V!b)Eѣ JKBZ"+`lzuo}t~Qzmpͺ+}\\`-'‡oBGT =e\W˟%zpЖZJ<^DOQ;)Z#/Ɩ4[Z|hQJE'*0yUYJJxOB`Q$p(h¨8Q⛖eRx':9C/Lĸ,T7d1R]J\5u76 Q g<.&!&h'DNtI;ΎB k-(5E4{fvE}\=o˺^aXV]/iYd%uRw&H\*iVz!! tCADޢs4S-Y/oMs޹|U# %@Bh :ka6eFsI##EH djATKAyHRVxjeE/hex8ҟcR$>$FTD_"W$ȕr(O"a8NM55HJ9ʝFsFP4xD:DC&] :0}Tni*B!!#g x79()蓉 y1Ey.`\_R&ş*ݵ$$F6{qƽ?7~xW ?_ݛo_QI!B9?N:r@{M͋*.ERr f.ʭM?,tLs޿.?ƿ' իe`8|Ph|,'0b}4RIV#ʺ7H7;R??֗ofj{z}UՄF8rkI)ՇtR`Q毫?ߔx ʃ/R6oO_x-u2_~VjOW˦jjq^^3J7F?~|]P1'P_;aI2_5Mg1e.Z;BF?7eɵ=aL^u?M`4.|A[N!i)xnon| J2k:Q֛5ib_2Ra!Gӷ\+7kN )_Lvmoȣ^힜]Nz8 _үWE|]-3N/ NwKRXZP -ФNrz6uH-N3G]cQRtϭGFaRR{eR? 
ih?̇n, bRK-]x"7b9|L\kڴp)BE#>Xa4N6yeK'llk͌L<Ggd2DGJei}B0L%^ hdAz{dd1U=bށ{5Z1A0V/ &SL f:zefKDy7FzNvʫ3x^{#={[^x6ªO4*(m\lqܠ ~8QM8R]DtaNULME+egBfA?sGOw攟!5DasNu 7$ڃĚa,*XDd%G( М#V+FF}ځVv_x`oO<:x:(zeJ<,ꍵE`''F+Nx4"a4BceXjb[K2O3Yƹ *<a4aCO .{B@s@x(qLrB"\ j)}F=ɍR~$V 0 \[Pj4Kݼzd,舸Q֦tgFZAk21X/VF$8)o `zixFg)ؘ'zw!BQ\thsp]_q:ya$wYS (0t#vY jG<t{i ¨1E=ag}c}4zYX2gW?0t{ EMkv{7:  4B׫d侄#qe rU)KAa43;R:FE+cd Z#a{?cX?]2M+QxY_o^\ F ߏGӑn}x~~~ͫ98Sﳀj~4>|eP1a{6'G9٣Ks.Fߒ OjnO~G;i9G'ӋN0'}bϘzz?ep~|'~.Z?Ύ4YĘN~V%F}B1Oπ.)^Ǐy W_9-(ʾOX |t,|/EY;q-84U\KI@ʂcߟEHj?OVkpnNJ& ! br28e,".5Nwf CV/ @|2C_6]b|UD&?-*X=>ί,I\T]+$gôV$NCfIB^ϒ\8;ɞ>+cz(Q4gl;*BSDaU}GptN?Q7Iٹ@.gu "$diau8NOS"Bv^W@F4-rL< F\& {Kmë WW"S(ƌdRDŽmZuЅQuOXDhjl&GoG_1h+M9YNQieoD˸a& 7mB.. tۅwņ;\f{_Zk7+U-Cj% T{xc1 &oUcZ1&0Q#cՏ-/U{4os@4UoT\rHA qwQD "Qb1<$HˑByb\PZ "mՇ~KIL3Aj['@<-𣏕QAhIk#lVs.,6: SFh8%4]<[ʥʅ`_Q/z7SU4z}e1*|zJ=Ԁ36{ ).sÃ-K arg쭹*LŮQmrUϲ~ne )XD (Z!!*% #L0&F5*:19$$V8"{IMLc1>"' &xUB*gyB# =lL8IMKH9^Zr, ɥEPƼ1;41Ѭ̹DgJ{f vDəS}Q 0HH`sC k%FxI6|z5Grʌlq)%Vv$qQ25 D4V8?$ʵU~r1tcO=7Bq#JB@0FPh)Ab  V.o0@XX81Vqϵ&t= ,͛Av)%n95' lG HsQ*A}{u QNv$)3\XTcne(%d\$ +({3'n:t|h[ ?m/L'2{G*zEOZ魚& BwN%8]L26ڵo*4^yik^39my ͹zW,95RO $ ? P8VE7_v7ȋwp73,BS ҿxߍ`Rya3=`j3mm~3ja?)L{B&d-0̥4 &Ƌ:wP*{*.BRu~ {6M3*zY[^^s*y,jG[7sx {wMpVV~Py__PwT}7+3ӳpR {MV g9@{!\`4fE|< <*~3"cY8vϗhȢD"(ѹE#rS8RJ8w$ـ[ ˓( QωRF"ıC\JMw:ƪx; k$VRYIش?LnFLAER6qR$Ғ&7 pE4 )ultpvZ+z3ࣇUEH"ƑrW^)ǝ67s [9sd[5(Fv"ۊ@FZT*Z"Sg1'6&K͕ho3g C 2&'(Ѫ`J$:ad7#b9yqp:,iЍ u8@/?[P8޲=ӟ#ʮߘ;6~~BΕ %Jr9* %(sm}T+&cF\nm68ea1<%3gه$ô, $ݹ8ю#BSґNch\s)yB:gʒt:r=g8ۋ@[ - - `k3!G2ܧZ5HEP: dKEQ0XJqTQCjErbrrK$'Mk !0t~xĿ[$I,Z_x0ŷo.G/bHT 8XkCV_奼EǸq؀aR<`s@<Ncʠz)t\ay5YLVdQv6,j8dOOF*k-YV&[ e$}dZ̧.O=j(yAr j.h028=@4/>|< Q9gd"SFTk7$]XtYWh*,N %%=M?A֌եHJp ]% 1W CWxŮIBwOWL1].ٱ|?lGKRv]ឮz4Ctkص!Jhn;]%tp,%]`EQg*e+tn@dsJ({Iq!UEW*%JzztŰf tp\۳z;6qf lJ+Gfy6&-w79&ӁU18n]|a7|Wik1  QW.̹RčPY628=*zbTQ4~sIx7^dݾQ/ 73ͬ;kWE>ג)E 07dNr'_Y&K#)6gF\s6;˜`.';  %]I(q5FUƃMUwZℒҞ ] AȃlJW 3UBKZ/]%"]4d)tv1e?J()ҕL:AQ g cRտ[ u`%434X!$6ݜ%CSttX*3t( VmˌtOWȊ]OD^M؃^h满l۞]z]+MUtW*tPʞ"]Jǣ]MNx%,縷G/p1^,=9;.bP?|PBʉ!>g2\G'rF;a)z9ڇ6. +il޿}ϋFu/}/[HbOpXTo3s XY`OL0Ų0Cti!I/|~[BB2:hD)7ay噧=ogZum5-I˳3ޘZ `LA:9QHD@8vK)5Bಔ:h5+~i;O.7x>Lu2x4xXӤBJ]_DJ<"%8xow;dƮ8v+űo9>YWn.$Ndwàೋ!&G$$*;dp@nUA73  l0S̑V}G˘Rd3Hgb4ј >ղ֝*H=U)1)Mk1H $'UcHcb!+#NY,ͩVxJJraOK)`AN9jT3]d)T]k::Q.}>Q Xg]GLl]Ia׭BK e6=? 5=l,s3=3]Mz=1P,9`.8~rgs$+댩'*PI]IƇd\rJ|Yկ#ƀp4u`. Dogx]/OEĽs@П>qGQ1! k2i;w&+^\~f։Z"q?L*Zd@8^}lU bTְyqсБڴɚ-@fQaVh(IG%:|I6 /Sw_z sK/*u_Q.~1bo]ά~0i4?$n7_tKէ60A(D0B*X" ӊa=B YѮ*:^ _yp(Yq!QfLukP̅U¡H@&2q.3{oܔٱÂf9}e5qk l[8Z=r\({r^`J5Cp٧(gbrޒr  olid6S0ĵv@Ftn޲Et'{I CyZ: VU1;O//߾ԁ1qMcc㣥# z˞rQGWՑkN[Z.e(Nց.dKZj%i:%-gqH^MH2/66O}=9 ?g,`}}0A? mSC#N?4>O7:@qo v1r ""W/-+NKiR^OhE瘖=FxzE,W謪ZcQ*qUzI-CY6A:u3 `0~bkyցhGOfS\OeSVq8MqݦpSbV֐Vƿ[)KaTTgBOq Þǭ&z@ȨXJQ{*s(ҒR'\ԚQYH;#ZM83#YQ-5 Bj teF\s)!w턃ֺswG.Tec2[J748OϿ}Nv7*/SBH C)ʚRR! C9ǤtD6uۊ39h6:d/rfi!bhltd=5N+/e:n̹@~/ jX~J~?KuNDLՎcN #`1h0˘=-̞Z@`l%@]I>Cu@zS2q9W͠㑪W\grixK]]Q_긼 cV'&nb]*t/~wZpz7LW0tZ#4HMVfi|_K0{M>cfLw:Θ߽T$pͣoר?8F5$&Ee zO"U$[)p(̆2{2\qPzGeص/.zvN jmk벗E73Aߟ6TMM>]=5cԛ<1HY0lI;C8;lE'$q۸Xٮm_K'4qޑ߰~v٘~E?VWS uw^yX:.͸זXIU Ra,bȠv, ,DBDzt-|VR=TeT &-ND *:o\G|YK "5c03zVZ mh֝-Yš>;~uRz'0ur`S:ro7ZfPTGלnaZV_Ңx6`K2}! -@,{J/2$Გ#c7hw-e{"o3Ih-!DoptEќk҈HY4@;6|>E> w? 
%:S(2=/8NY6q9R-%ñ.r QK=7ހ c%LTXCHD7_q=[a.]^|[T7Uތ= YhmiYd)A[p%KB  $ dt* ]ZՏ=EKţ+)Nq@vC(sgE\1D$ ˫(˄"&u['hXE]Kf #'kO߅ކ\k9;㻷 q.VM֎UzP8^?~SJPEEn}Y6[~R略D6PLvw sb'(clJ@sG ZYi!jZ Dqa ;O#2Y+W] LJ$X[IQUa`ЖQJkJՁB sUc˧I:M-k ZZAg\[R[0՚;.ϭpF S><:'+ T1"KDjXA\ HY5B'V. BkRՖx^Yk[=1 g.jE!Nû4(ZIpR[oYڌhN$e5U(OL Q,+)C|i'gӛbi2dqp2Ok=g^R.V0.i 0(`ayzrifO ,i'Foyg"r^bwu{܈G*RMj^ȐءŝSjM}l%Lxskj%(*:q?OQwNei7ގ]ԵGBur!BfQw}Iw؛$-c,U!q]f:]W}e_R"LkNOQ~^N|H9@G$+\)U9NÂ͖9xdנ $Aw>HX6Pf[&zx_lȑʛSĕB%kod¸%&VpntjfVHLykFHK2v7uz-x]ZE}W4IėF% :+ݡ-aI:~8:ɝ:ts]RT|ޅ!JQwl)-i"$=pxa30}Lo \_E瓿<4# v^x-9ifIB1j: ?y4hјƼxBusq5q+-uf5BJi懃*|\et͉(ڟ}ۓy܃EoO1xzѯ6~p/۾SvW'L%} 欭u@h4ZXg0+NB ,Wkw_2NQ .ױ kJ(I^`U)C%k2{Sh]MHl|S3Mjm0p/I 7jEO7ȅ{NՌ -jQtb=.WЩ .WI5U9w+p1 U) R&߭S.o\Tq)T,N[U Tc,YyA !J[cLR{/*UPACpVJ jA@l%xX1'bckT2\XUj)wx -<$f)_OSt1eS> EWY\NQ# Ēd* %Th7=0J&2Hq๰N٤+yCu=;#vq-W޶I`B(!4:F"eS@->9Y?CY2(uNSQۛt1RiX=q(*$N9[Q[>[ ^DDy*{v$~#Wõ̳'οhMT;-F(õi'{p-n֜yE7VV]~op- 0dTyAIgӠ^AP=>OXDdb,fe6f<xkc95 @5Wp&-YИGɇku ֕,)u: [Tlw^wg_H]MS/5b$o p|jЩb;7ur}Y6̭~o[ud!DKlJKh^nU5~NDszM듫'((~|w拹ysya,^&($ɿY{eXT dJ0qmw/nAM.ju)aV޺Q]ߝ9"%(ǭ"~TWXu淭*?y|pb?~>3]|9qmH)\!lP_%U%* *rEqV pFD;WgǓ ONߊ@ 5e5Ѡ[%'^[5X`B 6nU-{01凪NJRіwuqԵwQNV?gX2MqFFxy+C13D@3&#+5!,ZW$a;<V}r@VHqE^,%=K4ϣiGQThA{bjꩫQZpi Z RqUY<D5U: _H _bl @\r!鬜XeP+YS 'rCEːD 3E%S#*FG5AS˾}B/'aͯEJ,1\',k0B"0E$ˀQSDk Y0:,RVRKy@ W?9yK(ˁlFu?2&sCr!SPG, 0Mݍ?(,Jpm:8</Έ.Gj Hl<t s{ɂM|N< 0`5/[J *86"qEB *jpMPY A',"w3^F-XΈ˺3ff"HufƹJyz0I4aUUEcG`5+X߈;Cnld /,=HQ10.ع/at6y%.LyLG`ta68x0֑$=1glJb( G8rH;N?}(\pߓ΃VQd !B->) "Ղ6-˃QhKB6N;pY( 34J)H@:4C F:xx }w-Miw-욦db eYuRV> @ S7ԑ^h1J'*RN^+qEJWAc71Rj(#ZNU]Kc!oPi˽rDZR"%8&T A}!?]Yo#G+փ%;<Ìg0؝a/ye벎v{ ,bjUHFCxdE~WFFt`+˖4k׻G' 01\kA > yun.I%*Y䃉 #4llZZF 0 DS`up!3 i,l:f2ULƖX hQþ'hiFnKR!he9* ac؜^1tënnxŭi;^Arؼy:;^U_IەOL$LӎWhʼn,AM&M*>r(Q,zVeWS٘baLB&WHhX9ێ7<.<dzB82^Ϊa9 h5,X?ގ6Q ^pʸ>#R܁`Rtz(W꘍"眣2 OͥTHocOsN{U[#n OK6OK} +ԟ:]bcB>_NdK'X;|F[KD>?i@A]257c"4Kڣqb jk{Uzw7\l\Del'mgK9 !Nۏ"b !5!\XĦV:Δ 9^M5ӄE^ϿE{.rl [\emrJx#U$Y=)܂2j&ߨqUdq2ٓ,m@y1ZITAG[K$ KSz+Hh[2vydvoӬ`t, _G?j>W3 4.lzޯ^z fZ34h%nAv#iC|·O)O d2/;mDo p8}7fЊ'e^j |Đ8h9wr! з> M+Ͻ89 ,e^ѮUJܫ, \{F)2MB\Рv1ƘDZ\i3'>ڤ`#L3ծgv| ֻ>,o6txN:Xۢ"{,j>J$Md^JG++AXKe`kEc☕֬&DIned6&13Z2o>mƂgޣѨvҜ&?BJph#MEQYYɬW` 'wǜNkLpuo96 Hh,c g2HSpkr>]F/DZIdC!'EXZM".AqMX\niFpA IxOk<(BFFzYN )dO0tU`']F6 eDɻҮLK߈տw6w^UxBuKM$YvoJƄr ϗ"˸5܏X!3)^se=_%jIMA)ߖ*Uyh-i OK|Zg#1Z å.[˴Љ~!TBz$L>',DbȘ|xz꺗1KS-^tHِ/@}iRW9j\2@8v26h nv JɕY"ѱ9W'+& ca5g&%I+naLlCѺ-Ǘ1>(BkD+VC7:#Je_$Z1cFcsK?YP(zmn/CR 7~7iJehhebg+ Op1)Z"R7yRL'M5/WOȚlF{)886w ⻹ -f"H?ݘHeiyyeQoy<dZzǿu=YHyנF`a0d5~z6 VڧK #wt %w׫ក#$~Og~w¬niYQ3K5Ij/&_]>蛋[f|ߣ;߻dnfhnrnwd|Qe@V,V^~n\\%`+RrOv%Ym`XzЛ'fx3ɢT& vZK{U).\Sr+/l7J Sիf1*}gGNJMӗ'7aZT<6EݒG{P!Nr§_cg ~cq u<^b(/?XΨ틧kvEv|44M)T.қB0WԖ܍vp}<Tzca9`t' ELoIִݺbDt>vۣw@,En'T!!Ȕl(SrR%u/F} גBf6hQpQ.kk\pF 0H69$4{ 3ـ#@Z2J, `Xt/LҺ}T% N M4i3%뇸*fn,#ʜĐUŴjgS͆YjM, _'i@%l)^dHp\N|V492p0aH53jgRaiLC6|U9rzؚBc2AsUyC^pp?bA`̡YЭ߆}O5e-˽cIA,GЇs0Ά%WM GfaC!h@L+@CЭL^+&Z"j~=ֲ#lF*fu"m.OYM0R"fI$L6,PrI%䎓Or`FMxP9'3ދZys=D$ y(A@Jit)K#fxkw Is/"g18<Ed!)<2YXh Cnqv9>@G@(.4ψ%VW)f.GZ&-5x2(rXLY{*lw1d8%BD3Ɠ9S(y) L0ivo. WX )IiY [(rCD"b --lҒ `+-l8W3-Vr. 
l7I|UZG3Z&K~~MyA"ϴ4*KW_x#?h/AF?3\+4 wʪ)ZppZy=]趼t,dz*]Xdrt9]qk(_ߥ;?Ni ,q͒őNL)ϙjŖRzwȔ ,;}[+aWL&fa.X3e_Cnɫ{|6~뻰Jv$h'`kxg%X'n}=Ͽ}ZymZexw iv2Z@k B^ly`z4du' Ւg3qz:䑱kXgwd.RϬzd2s)jwy'Q;(>|T;f Q(M9[j&p 0bW2kej>S42E3p&_/>cɹB;6Zpm=p&ReKn'ևq0eʃ5t:hZM>Q2%OAY?AO@B,mtR 5轙+}vs ~|OP]Hi=LnvW(u˨we>ͅA\7j$ְGt )˯Q` FY;IҝoerITԫ;q1\N7~ZŒ) !w7f5[ȼ[Gxku0{ְNSⷣ":rп7ܢrwޤ_%ausFKܥT;Y3j8:]87WWۙP ?'˹΁hmpc_K m4^y5|rgk矮˽L,dV9*W3V^0U5ofLVdV(Jw_Jsqd7H}G<"4c8paߑ_jḄZi}YQ ͡{!r}\ދY,gYڢ.$9|aL\cP֑L9ƄU\+.P˾,r]VK}zD[p% B20e4|dv<8kKړVDU`Q3cĀ]!3(ޚ6X<|Ktbjsl.dԭt\$Ӝ(e\+b"fĹ1hXzDTE2 C=, :,ew,UfU@0hȩnWeϧKD&UM Fǿ(Ƥ"w Nx^Etq~ђ pg#KGiO/Qho<̗}5|b܊JgY-klt`_!%b>~Y'W_}n(Ϥ(jyd ]u`ovMq Cc/qjphYaČ$n lr%Vcy=N?R{RҲh(f oGWS~t!\C)"8 Ī:*+cY4{ۇBbE^sԔv_nfۖ5A)Cs0[9p"ҿcy?L!#;}ӇEZ"oTn&Z%уl YސFCȉR+ḧ)HۍLz(tDBVyQB$uFn PVҙ]w4gkO?Ȍ6B Qf&p TfSo*WSq}`BkJ) '?q}=|M[%y$ԁ}['1#Lsw'X(<6mq_ta\bMu4cjv/D6c2lhPWc%W Tun%|LG'9}[!X]1c #1n䚿x Z8J!Q$NGAK.TNs*}ꚜq2pT*RVP $M{J͑ YN}͡7a oywl=;eA^L(F$)d)~M| /fSSS)} nO~V_xWۻ?_ƻia}H¼԰Qchۻ鄌RȨcP'VC=~5>@=nj2C Ԧ5}s1v~Y 8,sN}7na?BoNjy&֧!JKg\7 ~~z1ۅRȎjA5g^症x|ΓHf JUEjTG\WqhIW4鱐!>[\$!߹;7c&[Z@XoO+爡iWx0&Kf85~Eԓ2Rq[[[Vn=qEtH;CWvͫ_Ź;].}P9]"b5+?fI=z^w`ZϗmhHT 2$@a=#AHLTBR 9[>iLW:WGޒJֺ~eU}wqle=gO pOG2Or~@(D\7p) ҇ ЄU}wR<%r2kbުz42Yj5f'%z[L䬒5VITX[rNUW3$e dgr:r %tfK]c{,c_}E޲x]x=cc;n n_6nAqȑ|& >t#& >;xJN?3Ǟw<]TKs,E))O. dmZ>$!mo s3,9 {h1:"Us;;֬ͮ$0JǑl]M+s)|)},b<C"^hc0MQ$_Uq4QTuj:9+#5g3n xW; 2xf]b):WM+ DtN։wu@_5^Hg>yD-@~kQifcA(Ε01)j[M "jH7E0環C kmvV'nitEGCPҭwXNN8D'uރ$O9]뼓n-n^~f.;Eof^D+_$!8]~}zʁf,5}(U^?{WVy` /deit/].7[,ےTP+D\4N9%?~xNvE7SR2ˀEQ] Y<$_i:_6 P85wG* A3la1R9;w2ٿl~9+A.Z^JBp5YMRI%>*{6IY<x8c=!ļ/~xJW֟SC8 b[~gk^Dx"`oZ\]H3!sK ia/ߕMX谐HKQ!^q4 E%|@L{clc.+6b2/.2g`Y ̟B-atUlmdEF Q 41ΉX9HNT vX*:EVOeH=nڷ>Z2u~ŬW|Yh(3XEtƊ( 5JDՅQI &u+1 jt} w{/w065~I4ش-{=e2pcR{ST1N?aa<}M%Z޺>3v}kGyJo 1wK8הT&ZRF2GCja̼IX."R3;q1-8#+8j ˚Tsmq2k爫 scVn.@MJ9s>0ᕊ[gЃ}Cx1t\olϲZn<`8dqtl,~t.> y88_am2IfQg{hh (Sb""1hF*n]D!ƝDz-f)j_$59ܵ'XH*UQDSFUJU b*# 1^0|j Be@ǷT.n71hSsZ)ߔ3TzNc3_-m z}ɇ2Rf)TVcRTjrt 0Eu2Fih!^$c77W/E ?(qCT"C49|È:[g[H|z㹌A+X$R%oS;Y'fBETVN)Tjhfp>]P& `p_!HDr(` =̠mfpXK ^~[*`eL"MEF=ؚj/clTQf &8}W|G Q CQW N?`׋in1~ӭ[.,UЕ;F 2G McMr=ɹAWw!V`b?x EX^ìixp9DK>@(Aԋ+١J`]!nUK>r{9"Z̡~4 Yv3=TUGA'< r[@FbCU2ƫ4lGn3&i\0UcrGA'@ Q'}.:cq%2gt1=Bh6F#<%]=ǍO{sSL EwnfX6u-ChksN 2і e1Gan+SX%Me8:@K8@5rlQ4*ϖFDބqtftJ=~p@]i\۫a"SrYwhINx۟edWՠ t|%f }0H963h8{ESJrQQ`TaS1544cֱV9o+~E?% #T@M5hptD41dwŽRg|>T#3LMO.uΞ1aClDI+Idߓ{9$;j1"Gˍ[U6[3dBApI,T溁mmwX@Qїח$sBF70%r~qk\ZEQ% hȈ"jqeA1Ӗ +j?2kۧv̠tIM Xltv'> .&iAyOQŘ69mHq2ҹy8t;c"WxQ[vSWdi!yCpA8mzY~L2Vk+KkLq3RZ\RuYG!3[-uXUÀ -26Ա@Bf.V[ߪAg'wiJo.f\fqBYad_nYQ2?࣋Y倥g}@ 4diFYV||xIt|Uc8'EOj=t= &ye&HSU;4,ymƒ\ΆњSuBr5QkRJ:Taↂr/cln1ոn3ZlMV,'Y*,r59v N!+NϑlO\+k27>ob7"\=SȦfqXhLTN1T_L'ٻ_~_~L0sV\ V.XL m9.\ԨsT3 3yy+uz+8Z]!}"` @Rt~9ITY zKQ.LP&*T@lFʨmtDty1*(B`bj3emO ,OLn3Ɔr9[Jnlnm9ίģh򘦪θU!Xj j{GF-oa9+2 [ԝ:scGĩ>e^̊ {q! ` %VY1hGR7h>#Rʡ%ouN?BIVw"ZwͿ8FI.JzBQBqd.:=lhrWncGuaW%SrOuOu,e)oFF$ѥ"cXwX L0nNCx)=bgw?RNؤ8E)NDZ^Z{ł!*/$$nyO {a=4"2akŲ)%fLQ-ʨILuncsTJCKf MQ!S0SX:좲vCnv\՝11^nT em4AI 9}_HsY ,JHѽG:K]G72Xq!8CtEmh:ee%]ZQ C;x OaA ě 48<z{Zϟs'\0Wiׄj *^:'Y`Ay?=!`ꄽV.:T5 XIDb%AIY+<'G?_>ib- Ipׁb ϒpgqHtBEDj&! `;SRJP¨?VMW~/_k+D'|EdI"M``sM5_O4h:v Y.T߈SA ߂6p R^`q=R^(< $ 낥PVIQ'4[ˌ_ImUЁ>O5'n? 
`ˀ>Ɯ.Bmb2)S8E: 㘩O7*P}-h!` Td͇qjB[羕J_SͼŭcǶd";͛795qg _QIɝ1^T+,f=rYQv2N.~T(SĨ!P\~FH)~'z{/&Qz`0,܀L ׊488KY RppVBߦM諩xg鋍CRY⁉1gH@ {W\c!Z;-U#`:ZJ ؈ ֳg `V >6Df^ cIh'5FaXQZ!O9ב{@a-F) r6=Q ,BuJϒ8"FL,3\JyU%Re!(n&hWFi6Zو`( .;ABYBBYBBYBBY U!nSЈD޷ͿqYjbw1V%cd*`!`T#Uk2:aXͭp YaZȠTj)p ȫ@,@)z= By00xʕ|Y1v:UۘqSRKrKDK=(*-0 o,+( $cu-/ku D"`ۀ_zz$㷹~ ڸi"kT>ۜIrC̩aֲGBTd%E}jg o1*1p nP#us9шUpeJUVX<3aA2LD[ceν)Cy3Ly 1.J HjM Oqr4wYu 5[gjx}6gjESR ĕ eEt#anH$ԠJ2zIhO%E3[{iVaHT|0&`W֞YJk!s FsзH1P$IB"kFUٴFڳi CR- $z%asc͙5 5eTTj:d";f HHLJGZ;6Ԉ;6^_ jJpͤw:FLܷr]4w|N`̙\ۚ&@T 4O@T 4/W}ˆzBL Eφ`p8mBP=tiCgs"$[ZHr6W׆ͧT1OT1/b,9Tt aR ke2 $f! _bT\6Ԉ͛6^;sA5s g4eN!r-7S^nuU*'l¿ScO{zs>E 2eIzBj?p # 2P2xö^ NaIvG߳L F}!!0:~"𲣐pԅ=#PcNWw[ ^ ;[4G8 _Aq7hd֯:0vzutw;{o{99w~=;؝xC?|<'(|ޫUw}: c9L?.{{{+w~ٸJ3yYa\>oSq%MQcn.wfҕ- {³!B1y7fW*Q`.(^_ݪH}0s1 [C经fHdh#ZxTcwM0W1b_j1\<}8yovĪwU|Tha|pBB{:991\t^LոcQ(OEg`#P&W(1|oY~<)^@:{X>Bݽ"Sd h;w97ׇzQ>0^p0}cI.:8/ҋSGg;_@;d6<>fpv4?tx"N?șXvY?L7o]vs.&9?+u|;37''|9,106n IO(Uj /.B3Ш |zҝ늎L&OV{gŪu%.K]l}Skloƙ 67s͗7Pl?1@?΀(pE2Iw~G ň\fvsB.8.@lOn&OeofجonqT a-UnO^r#8ʃXQ38QȌN{s .Re G1i㭑L%qesiC̭*pl+=Jŵ"k^k~a]I.Q H YBv*֥ a>ҖZ"Cޚ-.վ{I[6mlٴJ8y/=IW ŢvFH  3!p2)GB%{Ć)#J@(S 8f[ǂgZj* mF`++!pԝR>,R>謜 ufPnD]0Yud%4r*g)r*gU4q#*V¼᭲gJކYQWk]\iPriȆ3 Q);02NFͨeV 茉pTbeH`m2ʔ2 q)*i\&rX4,m` 6K` 6+炭BۦSmw FNaHbLXxPRSt۷o7,Aj쏻%)D}e3R)kY0v#=VA:nVUA i"fzޥYS=:WUyYdt\_59 +Qslfj祩HʪUVyLjs NU⧪J}1MD3VUYػ[2#V 73ɐH)yR;4C3 rFXpG bR @aƗ>G9]AWӵFuYNKfDlJfDlV.[M6̔H\x ^ .x*7@ZZ\NY>s~Vؽ5 A!S3ڑ޼3١k*.##Q b<6`tbh-Ġ"mTu?WB,⹹I[ӓ?-2iԆFshB9nSy=ETẗmBC(uDE+uEy!U`#yÑmf G!["CCN.%ow/I?bb}y^,hwUJ<}+XM] \E#ڵ3XZSLכ OL[N r&vRUGX)#4Jb*Z`,,!zIIL^u%~/).뤟 uNrFf[5fsrs8ri]Oi`-ٞ3_#O4i1}z)\07ve @xڔ"RV2"s6u`yKU޵q뿲y} F 'з6H_N~r)KNE-eTuʈPd¹̐TK`fgH?|s{yQ_0 w}ww^".BdևT_hipgeSj!mqynjk3M3R8m|@ t}6Q"Ѳs5|2(:m7DӂIQ  @x!gSdE܄F:*ZH)DS4k#fgeq_M>G,R!>,KN!.׹J%U)Pa"a(_HD:3^3D7"aޔCVpP 6F"a)UNҐ7S &X[ͽW:{eD=*'ZF Rlg%)ʞ!;R_O7kN*<[#-A>$s> rEX5̑ \ B P *^#?04\(吇w6Z|K _o>L"2K_|DA, C3]S:;٦}|73T $jf֜jχΈyb}vx:%Fm4ʟ.Owk XjT%S%5pxо{̍v[#!tI/le09m/q*Iʑa o8>*KE^B%!^A+ Q;Mv *aӅk!!RhX>h9Aj@}* *'BXg{:(bT°3(i+-SDG mf%iQ&,Ix%~K',]C[T9K' z}9poa8ЩOJ16T`3:Rr :,CdC{6%'wn8G6 8'ލN npBB+,Nx0 ݄pnB2k“4, o6lV`48,AFLJ%ˁa>|PnsHk9T $1鈆fLkӣR) AJ O<\pFRa, s݂cym4*7m A5#]jK}E˶F0Ǽ/lIne x!uУ3Mc` 8k! PҌ76܄`f$R=wKn7* э%֮6q=Bm)yf@cBۖ*T!˵bwÒ&,qD*¥Xx4qI3.2٦QI{RHbFrpspiYnր/j%-ajUpC,6 Z N1Sx}1&!Wjd#wWnk #NS&|&P=Æi2VPi%52C2-O^(g4_M+&1F>m8Kԋ)$0۫.Z[~j' Dċ.brXza~}?a٭"<ɏˋaO9é0Ȩ?sڸGM739Z\<8];qW'N]x|ug'j=qsGq.y ^Yf&ll$#fE1&' 'WO1Wחo=+n_`eOo׹>~٥kotr~_}5Oz}4HA}P?,Қsu>׷oW2'KlqA'P7>3U>FÚd8'vu@{vڇP(T\k_sA&l S-#щ+3Y8䞍$7UZO1gM_BS환HZ\kPa#eE4Lf6ʃ1r~r긥>:Oc'8%+῕L#?Q#O2lTY.ǐjI=k)?`Ԫ w-~3'b!r+F-$!dz!s8G%#>F.o.& #D XXH1ymvc'L n)26p\k>QHXesGHQ$idX[*jrr7ՠM1$%(ǪzHLw GA)aZ ?έyn{>RgzJ] wIh1% 2丁P$y30P%8 d U7nd aOt57?ۏ%}MO?szTBLEiCHEdXkڋ\R2A\{Bk\T#z8\AcҺ;ufv !U(m5cJwwxLALwu6T^%x 8XuHu'UYBv/+;fcl^F#WeQUI'"j>rP XEfbsA"<ܱE-bzaFDOP?#Þj[w@ivPq_1u}1Kta.aɊR\/F*bJ ^^HFSM"3 /m?ӫGf|6&0XlNA*?~rFJ %9$*M+IŬ=Ϻkpd$eN5 $3V9գmERd>*&D!;hϰ&%F%mX!('\Њni @Њ1L#JEƣrYĘQC QOɦvJ+!ىJ$0- 0d.FXJe`-e<1=Xۥ _I46 .tݗ)e dxn\B2ǥV{K-!g-=ڦp&PcpI|rddB0%Kj,=/i;`x#^%$y45ͥ%Dڏ4Ir|\ZiX"lӉ;'57ˍz[z!-[a8q fD)NBH飒׫ S-"p`y\i8=\. N(-kL5ڡ?j5(ω%TxcC\V{^mQ}rzxaq Ype$x/Rs7]B+>Z]D *z/[s#?T޽:7*dꋻ뺺=;}]ի/@s*[.xu~o'VeWj;^_|M-'tY~>4}~T\ =vxdxyIe/M0_I*=d:fLѮKj'*Nד3Pg'͑O303'KL͌PBN:F<bLdۇl=/(g-ׯo]}I,C4?nX.ltCX/uwy-sɞL4$3ƺXTt- Ik8 kqd?{Sj[loV?Ϲr 9vsS73~FBL %.x>!)ͳyFȥSnr\uQfOx[3'w6u5gu0ERA +d?o{YꙘRf"!ynsS%5*4RBfi=l,YxlÓ+grqgpϦF|1-nRPP߫ NiwH>?>z@nS+RɆ:-˕@Zl:]x SWe7Ph!%U4MqZD%9oPFz5_YDszQZos>xG}FCr/%yAcYOWF! 
%jleFѧ"nF9N9^',x]f:ԉrl;p(ND1B!dclڗ>LuO&( bkVnmmF+Vsi ](i/+6)VteOP5@Wájƃu&)5 p2kz uCy"9f؆Gz = 9;H.uۏ !0]\t+0i vU,23eDheD 2853 X1rcR)g6̡R ۠qTL qmBB%c֢Bԩ}_M& CX jYlik"9J81O>= 8+F(-CdsL]gM2W9p;L&g mЖϞPm]wOVBkhL ~05e8禭@5_e*N>Ԡbxt@0NlvkGܺF1qL-EZe'J0də}Fdgen![ċS@X0 & .q8S2GŤV bIZ5-{$زCS'r-J:1.QqSzZV?խ44\š!|`W!LBVCL A0~`X=Z9 M 0҆l6&wQVR-Δݖqx"u0z2" zn.: Tv-@$cj_cqsޞN i 1e..F@mm 1,ű/z`eʍk`m$yQ2GX"js,UN8>~Z<ljaYE9C H֢pb]bmҼ M9ɷhZf.aRhGxE{t)Aݩ`g{H;#w~-Oh. :ϱajw lśXSzCYˡ6{-Y,"lA>Zq6nBM%gP#DL_j~m#)ʙ4My dhwSeܣOkX@@R/֞9qtn17Ȑ5[hlľ#\M=>Gw4Tep8s+}EB$(`K~O/?e`h@ @tPO۠;q@hA8"O}#?XIp磒맂Roq%HW4(d8x3G@@~|p {w0f{dHG5 17Tk°*l?_Hh*-Dc]lw*S {]7jPB~= ӆ(. ڭB-T5[K\BpXob2ʼnn}h9H"H# 9be&DZ9ϻNAΜdrXЄω7%stAEC`N rvvA 'LWdn+м܎ś(*/LeN@YGR*'L1Ү^&: VHunRa`_sܶ@Y 4*fYD-\[:ۣ(\GJz*d0Gv(MRۿRݞڞY^uP~8%oWZᢃ$+e(DocW1JH>YފՊZc -\$B%oiʿ}"- -׊T㺥K-Ӓ`/FJA9P"$zd,U8FLS') 0qcpJx93 ..#=&70x.7zZ : Qq-5ηDW}p?:|˾m#8vU|c" LWm, !J-JfhCͳ\hю7w *,eZ_ӏ+}L?i=<*tq[b-cr H+=N-9e>.2#Q2?]B"w(5Hi0|0|4٬pas݂BJ+A؅˄:[Qx[mKw >`%y0|0^3FYA ͞&紇D:*CXgXE鮉[ qjȞ8R'k5-)?*S,rCdJ%CCkPqp[}bN -0եi0W.„0@T!{?4 ҃Kpi)pEX3}i{lpC ɾ.T X} r DQQhW|xw)V_d MYi-Y4ggC$>R ]8 Go r%3 #zF{"u֔Hd4_<^U\iT2kKESĖTDZcM>Յz>Sy)ZBN5KP.ce HH (K3eҨ48 X4 Am%j{uToۊ2: &YH!Mydem{&W6,<{s?YlO<=y0'KVqQ&sޜkLQ{k.mYǣ#ZSw(iH&Jӻ%Px{8VbG;u QwWl!W4ͮ>\}&y*;>MVd}_X,n5wOяx|dDld~86nb:[dOt}4ߠG3}v{M jL>GH~!MF[F栘]Ǘܚ_ƿ~4v!?ntGn(J'hx0#~1YXLJlܬuolFe|5`lnfOfӝlHGƣ"جOWS9l=yd4-~|@?&S2k:7H,2qk:~Na6z^bdl_9?{ƍl r7eYjc$u&dBQHYqT1CQ×8p(n9`@hw9x4Mtn\=8|ߟOo|懓M2t1vȺ<]λRvw %U:xB}+KU{BtE ʟ zSt(54Π?ŝHX9"чܮ\b绷h~!N/޽9\ |wfXClP׃R̒4o/7 ҍovn~^.)Tq7}v : 7Ӥ˛TNїjJP.+bAKgtq93͠H}⧃6O/ͧ/@(u};} Xe=N}6P,M7dޘ\Cޯps6wS^ Wio_H&އFR-,dޝ{KM D Ð> IeїtqҜLwo;7w`5S|qop$ͰOc!$8w<?;ut59Vh/IQGC a? En?1=ϓ.WP~ ӳaY&hC9 ?~9E*Ds'旬K6_8=aB=ǿv&7|]B q/Jn[gq} vy^]79,w~n&rv7NȘpܖ3 #FWd8,C>SHQ3;NQ;TNk5 +odNCkf8ijLn h'$k7ET'  @SIdz rOdz%k9`ω''+ "݇k4-r~߲kc JuC;ۮe즜ҍgOQaU(o&n^*ɔ_7-QjL7^x\>]qR"JG4 faYc84JACy їohWQ辙FK΃Jrʕ*%}or0-;/r*i&,4^CL ܏6THn lZOSZR]lWC؀e VЀrh+i=駖 q9\Ⲻ!ؖjƒHF3 -(:LT=LTG|{=Ð/ >3cw\ ZeXtp*%qFҎp,)ZqcHv{..EyNqn5x2&/Y!EHN#a/-]Cd^z8mR@3Sl€ߋ'e?uvqZ2N؂!yt~> +.l\vSr~gR,3Ld4(Q2K̘ \־/0%[_%%*8wTRNacՔ&ZI7TmڱASvl1&wrfrhGʇZg[}]IKKx jEP * m\e 19\lh;Y6zE:"O%#/͌~kNnl[>zBrgpm` Vaݎ/ٹˑ)ōޠ7dD*uAzHp0L?؂xi "jU$PO5bOۭWȯ<Ʀ_ Cy7Yj9D)tR[55ֹ~F ZFGDl6EXXfxFٜ~ny5+ύopY4B[ݞ2nxmedDeNXdS*E#VJǠ N{cYLTHx Z X %'x.P+H"*`QQ4q(_0뉩tC''R$MC@KC|DIlPQe4Pb)Mnέ䊡و+P`0La y*jR* KR`=c 9ehN%i/32o)D|pNr}'tвWGJ= s+)9#hǐSP`\,682d06׃`a\rѼ65*8B=)< 0l )y܂EA@c; PE0 $ng]]oGW9XU"FO[2C&)" CI!ջCT|xw[$-^*ߌ@@ y6o>GP- jVnof?;X9V'~QfgqɢfBvf2tI.2&7Χi@͟F˫Yc0%pVK6̢9 99ow)d)2CQՏl%v;p+7gvX㸘c);~qH)+0LʱxЙ;XtH*iKѸzY0aGs"DXΙ,gUO.\d3CiJf.D~>9ξ~rK"3yҰȂ+t1@@ϦJ)/P=gM@={#X$|GTzP]x _gM%2gVm 4cڷ0TEDfv++X.ʰu4ن$-s.LRJJ]~,$8mgZc|{J :ppm4 Qb[3AޛhYh_`O[>Űx;bqYR:ZG 'aG]M  WeVg.Q  {~o*iİI^䘕|۸^Q4pXȱ`N6ɒKha+ URt'aG.'~U0VNٳYV81P011޸,!'qeUIo1ՂMhaӚrDʤRZn΄Rt=2lvTZKbGD butIuPOKvm';v99q.mY[qt =MUب {S 6ۜ֍Jo3&bA73&`s \Ytt:e5)(u̠"u8QX,uw6\RU=-z*7i!]?ĈdEJCIM:uoAp`uZlŤ((ʇ,p6]}P-8\Ȯj\v]łJs8wx38[Q|p猔E(Tg.`˦E| &eto%7sϿ'Wkìo9é霙ngYa\Ka|!kC?+nm;niqe-~vƎS/޾alH;`V 9Y-e]vKO Hw1l.l4<{vl*>@۵UN/ҫT"N#}}iקۭ92ŗ'%Oά% oK&O'Q|zYQ|pۀ64c6tVj4&;t#`|-[XP!2]0go^;&vkcow#dujŒ(V *kg-Mɣ_3|*_lŚxٌDVK`d]d?eoFϋs>8J睳:7ookսx@B=!$_|0?4#/Ah˫Q'K^>hL}EYN?G_?z@%ҩ7u=Bb9y <~5l -kh]]Qr] ђN1i|]G&ʪ{IXu[~wnF*8fU2? 
F u[T'1~=υ%j1})|juMY]!/2)㔗G=7s$!kRRE8YJsF_V!?|X`fÿaG>,n$Arf1%+?2Qb;QlQ6M1}_6[@Mʜvk`:k {m~-)A?:-X̑\4dL,$&hZu2b{+%:VeY=*0e9mO#Dڝ`RrʟkvW>t"U>d C#iC{B7V>ZGbBи-]X>߳ ~g;CAArAikaX!C>6rSAs݃ɲ`C :3_gd'{kՑ7̀^[@ĩJ @@W;x] GX{5sCGؕ_w2Q2ӻd!PFSm}HO+JO_H0sxQ6of`ޙ<R1"%K8!cT(Z)ڐwJ^h(tuRD6;A4^֞t̓-&#'#qj~r& &Z$E;]*Qm}]# =}^Na[Oz)9%lzÖI=iߧ_nSl1N\c-kiqZ@g0Ȼxs9~a0P;oAF]{ge?i Ϩp3ϳ?\dQ Z!24^LFq]dLF/n>O?Ӏʛ?Wy_'{pSzCs0EOε?Ƨg=* hiiujVoK>ȢJy\R^Ǟe_NGSfՅ}9~lPY %jv̂EWFv9s1!w[$r.DPJ5Y7lLv$΁sjm`\L]%ݦ:I86'kF^G 0g!IO$Pj )S"ǩxi&T Hb9XmPP [b/g]Pj7b,\$%:l zF6`vi܁f @j 8F*$}"ㅍ9%=rm–5yQh9 2[L6`(QX%%-D|J$#f-rl}W|]kIUMLsܵY%ްljwZO8; mDM2[cE%Vc9C/H(}B17 \) ,=FgPPZ4]^( "{tsw2^7ɃUl]"(d6^Cx l7̶2[5ڞەU<6v fJ c7>"ٓE"۲B)غ4%"&B˿7 w NT\}yE"+EVgQFz.b~"; ::;CqD:GUγPA(A2|w~S0$&t!Ӏgښܶ_QK6c/S嗵S[gs슓ݗ@+iI˦ u.@)PDF TUH#sfF 8<:Mjֱˉ}Ԥf5N!F_;OQa?̥I lP0`H`ۛܰq 6V\#s83YQ!p0Ua#C݈OŋڱeS{ 1DJ AZA-Yh5VY29^[H{ `бbmz owJUUΔ%Bu۪"򚧚^]j$4O.ːmꂨ,׳n˵m<]RoO/ ދ^T| G9'Ƌ:VD=:dzE0I2hǘJXL<̓1o|f<9?4Lr\=>HzMD hh~mS =Ռo=(8Ŋ}x9xPn"eKT,޾z>@y+M37E j(憐f+:O]y.;o_e}&yvSњƛ79Sg7J\*"Y^cқY;c:"PGx!#8uɕ"H҇ѤaemS o3̓\W)$4MOUP1.E@{- N$7Q,G%Fy ȿ_B1Y =^2ƅCB֪v }3̿Dz8C?奀pIf'vPsAD`_2ঀTG"GW}H#>CདྷZo J AR~[ooc/9i)hzw?G u۷[Rwn1)tYT*VS)Kf8:'>jt|( 9dnWB@YWT2WLp{z~o3$mRwyk{kR'Jk&j.]KK!neZ6c3Zje%Y1mr.?H#Q&a G:e7TYL\`˲OVLGùCe?>vct ,&a$_env/ hmIڈaYMtk++u6bTkEb<˃.ysgբ˛$qpn ;@s;ڇq^, x+>+[<Jk[y]9ZvB.= U 킢fT r!SZRtunϨb#:uǨhXҚv=Q_BB\DO)")A}Lul]LA|HƗr{[mZdk [| \@Zr{ ۓ4,qn>v %m+L^H@)k') B8 M$U4@4ĆKA9vfǧY =AQSM˫YZ]Rpa;#uQ'_S%{q"oD\O: ؇3^TQθX zz,W?3u4SVV!Bx=g2Fٟwٟwl;O ~x}&t1XYk[hu'5O>~ }/eΙ-w7Sx(lgџŞ6P1*gV",bVVDY4[Ev `fQ['iX$$ e] Ŝ6z\I.A|xTDqJ sB%.1D ,w.I!%f8}b@y ]QZF$+tՂlp+>jQ)#H65撇pBq׷#7<1\$R(Gp#AT85PhAUT2Rߏa`dZz,s(k^Ni4K|_`(b"%Sk<)8 ~%t[G>Qs{ec浘22g|J샿ZL%cW.Ĵֽ !FY-+YݵRl6˫޼> @fν+`g 'B: $aMs(CRB.̟* w؟*(`&l qO>'I+dT* 8њXA KSJ#1@$$Ҕ0LT,AT7Lc- (t_߭b&u*1Y7#w`),Vv8L(vAyo"Ra;ySExa"bHul%Xc5 %g<*i7" .0(6 >czpu,V [I N-}0!C]{-Mo! @>z` 0ƅ lFrB*MPrM2 *wc/eַbF&6a s!fN.ɔU(P{)?p F猆: .) V!& KR m//jMW'WOT*dcU#W+Q $أz> (챰FBo98ɒK/!^#)^)㒙1)mQ^}CX;î˟{1tih3(Rg=3л/'(w~1Ah[@@Cwgw_m) @st,#ſw< 0lSБ/G[$=qgޢ^T4{oʔMޟ &% Ud4/ fO8,d6s&BUŗFj:T:YVܩBYiP:\C.00L $ :_0ؘh)Y"!I5 E4Ҍ 0QKyL g,v?_@9/^a^6$ M}xB W,eR(ckAQ45HH.$+*6& O:KѦxuZE&ą[fN= 坫irZe-7FuB:SW :u M{B)+ iSL*Hv7m4D[mOPP&O#U{ ebo᫱Q 'qt5Cyr)s& e+d"oDΟϿt;s;}Z7QLe ıJퟔ$JT`c2 #j 5VC!1bu_wإєoŻO;߁2yg(7||$Ifv\ZTz?mPp;x0.w5XҀYBqӭ3s?8B%(G1V]B/7 ~Oӱ/uGS 3JXZirAĎ: 盟L?Ymp0<[:/fCУkn9ul8~ֽ+8^,ýuXQa~dFya~햯ew?7֔͆㑵;;!BKc=[3,seQ*t,Iy﯁M@ q q^8M5ReDb)"hINԨ˰+># /9-/^=<$w0/ޔq s#:))zDu:\tQ4 !RG'RcbR`B)q i"QI$QbvVp3aӜxQH*m°%E@ \]q$JX$1vYtl+ڶYse`%N|![io<` {b5 [ܞ+`\?~&R˜`E7ۿYڵH.wn %eQZl25F[/]YP q"h=+צ-5>JgJv&G73D8:8|-K.Ξz3HbvVC|:tWf:vYv,V. 
<b7` #,(F|7v;}{nl?XrkZtYmenK2i#vӓIB&IZXnJ;^VYQ@6-xlRJXB~ׄAB\;ݤi{N¯re-Z +kUb<',ȹ`]f!x9pNOd*V8z5~2tZ*NN˚4U]^Z=6 gϨA"me^ | }^MF~jD5/{WGJA/YlJc=@ynذO[]*dz:KLff b&_#d&C nyϘUV16T[D&8yd"V҇DhpǴiP ")W_uلD?(=쾧}hjW2"^ڋZ{QTQ0 %B 0TY2p- N-)hYm sEvyҎKv^5R ab!l@DJ'gd8ijo''IKlLLf5ӷ o&f+-?WS(,7?N^j* VW UHcj=NўxZgF )rC$\;/|NUTi]1}e6:"B@^ns]{J}uvfR&Bx8LjwEDb#>2Oc{7kZ#tmUz]Un;OiZs oWK)Bj*n9<9؄_n`Ƅ k8\ @9-ˆD+Ue#^* 2ghJ}"W wm%73RuȣOq _H$\3u%aUuU(sn0ʁF9Le73Z +jAؽ A3PxJ)ʜ ^%%TKh=.4b `% CIJ8ҳv@qJjifEqw.q.kE3yk'aaicz m ?]>H Z?wG'{}Jc 8Ptw3 |Bf\ +Pon}"p m3B@\j!TUN̷fm/V׷II5͹u5Nk#Q4Q`$ opcf*>p"\(!jgTA@I*}}sm͑ȁkZ `IMm-!J_N&Š@V b,B*]|X9XF.JI#N c Yb)ص|_ b(UAiƤA9& l(8 P5v?n})EsM }\UGH ӑ]6~Y!(e8t%m/W$Ƹz'7( YmBQ( CmbM#/5^8BDϒ~5ئT={%FJ#X18efO^7yG 9Wv,˗3ݾN\1/?^M&ٴ)˔eJFNܔ9d4(Zj츣&|XcӼN6->K~$QGƤU;tL[x xʝ4#e4@v7'8g:n3͆A2 !Ы̍ ZB96jy*r"4&hV50/~ǜxk PQ>AG)6v3.挙]яE &]q7} [4cymwfW3>gϒu;٢ ;|ޡSyC9!\Lu)z"po ̈́$T+Rfk<:Oj]K?ѵS\3:@o=#\S7#>rQ?9}Jd@ƃ#Mp+ 8k=83ʱG b>s7^00 K68LT&fFSYu71"ibY%g`5>rI1ze*6/[z0>@#\Ezu>@B;=:Gpj⚯sN#^  ighXcЬ@YNcf!GA[uًycL ndstzp 9H]<$H8ܳt+8Ro;A} Uۯ7xWɲLB)& 3tǜMF IUS7‘"~ KJ׏6R"ܫhpPPM~)͝h%x'qSOl6$+YG?y/&H3z Xm_j2>LU27wRثú\%8ZkJuj$A~zR0\~I>[X!eEVq\en$(cK-B%[3-,K5̠^V\ƨuma>rqv9Z!ݛs̺?/gIR;耶VZS;Fh([XT4D9䀶NgTkzQ[Ψc([K-\zdlGkdMv4 Y*amӫxTGr[j;.KHZ;BQ7 )vcuGdFExb t6="S)}ŧ:ѭM#'g' :GPDԪG*ަD.~C\@„m5dߵόoY-/㷴?G~' Ɍag;g *\h 諕.շAU/Ytz8 K{?7P7npgz[vŐڂcTLUVgmyCe!/0 3Ĭ'_lr%/8  \C^_ﶧxz[C_L:ŮL3їZ1SL)rgCojOy GGyakg^g{L[}F31]m>\]ked"pf ~_JE$iI.꒶x_VqɢJ2I$0cGXj&SOq#43_^Y r<*,gpz?NʆYo ?Lg}B?/FBr (&2 rD7$5^ lB ~`Q$1{=4[ҡtvڧ25xrZW)bܣ'.ڇaͤդ7 Dd?pwejGôfJў9^&Q[hNzz1#m_fJ"%y-2Crz2qxcZ}ӌ͙-) VpӊC%d&+(,hȟhW Y iny`AT!D0=QDd|/V= ,W ||Brn s6FV$6 GhX@)EER%spne&vI+5 brtm/c yMK*`FuioBGs48Ɨ7qfnuX,;b!(LD'$=>%Y0UL sِJX(UK/oΰ e> ՜7nu^yRK6_aKAyn)ЭK:'_':e^'_KѺHuD"E]VN AƒqF 8M)N Jh 9|\ŽN ;_)k!>JU+YkB6 k>`Zbc;3ͪi4O?^FD_}|5at).FUI^)*}?$"5LZ}傎ar5}}6O_O>߿}?F$*n$ \\9~HJ-> .54 Y(x7[/"B~ A(9:'{y[w&K뫮vW[/g9Kmjs5hB[όֱY? zH;+@z&hIt pG˰Kpm|]׹ ^Om÷c4sZgf\ezXBUT_}J Yp!ﱈcxk#cFkZf4.`wxVMX(bñb͖ &-x7I8Wkɽs맸0ne^b_4nzpJ5X>N>5|9)ݲM 9DlfgOy'FDPqd>Gtks;5cR;bxF=t=ԈS,O[;B,Rr8XiX&eG "+~X+t_wc>O FK̠[rSW,?`8BBi1$4AgO R^2"rmqZdx ! (!\2D. 7GGe9GW48,+I[,B SR D 6-uHF`eԖ3f !y~_3j} n%vGQ 8:%cJFL>)r.J Eᣲ:L@qD64B3UN:앧:zd%6ze(BB "x zd70fA݆80 i]DRksV1CW:(SaFbz q;i KVXŀ &9! 锔 95ٝDtgo^\$d{)~h"a2K`ai12ɊR*0)Y+@.l 7w\J$}Hqxr\[P6;a\G 8D&}"SܺȐL)~$ޜ|#MH tB翼퐤ÛD^YdHU0)#nt;49<wr 'g_Ҟ->K89[ SB!bl'I{{r]io+$A2t_Շ .vo}Jڕe[e<9 g<^`e穮JAȊ΀gwH;?q~RLB臘ZϰozQvae^="$x' (sU p8RHGl׆qwqU8 {ehx j^]$))4W4%@𫤔}Ȥ4h EN5e H:Baq$VrΘ.V)V* א`0h %ȉYjN0y=["\Ll9wr򓛁5O§y~Z~%1j"Txx {zsU_Ww^aeAX:1+? Ct?};1To޽+Eǐb=}FꜵΨM?t=: ߕ;}Yӷ'rC47g1?}k_9WZӪЧm4z"A4h¡gMc"q* 芦1R[uD8CP}T nkKx׽q9RC:*.YrĨ}",:!\S[xiڢBy3dIx}C_g^p|otd'wI%Mt58ݺ=v,,G?.(3~3uT·u8Ps"{wA NwjÆ9%xXaN''JIUR@X]qvۙiWw_a.׮y5Bh G)xRĆKxxzVJw봰6gJib4>XD5ρ#;Zn_إFÛ=ZcC/7E?h]ψ,C᪀V_m<'Y_o1d\Q zJy?pє _.'z+r[9<>7BQ/ܒ@dDžn1`a/˅غ~b_>ԝSSF 7΃ԘHul5 فSnDӆK7_^鴲oDN5=\˦Y ;˿/N.OUkϳ. ^JXƍZݓX-OBINAL[;نS0,'үZX4I}˯ V᪢AZѮ֙Ӛko})qp1=GNiȊPPrd|͜njrIgx xC\Sbhɫs`HTF֛e/-]dwE6~eW? r0Z %YGD$mřFW|ۆZ՟ox >׏E^UȫyU񢼪XKHAނJ3`pP{)L֏ ҏl8Wt5s$l WVqzjj:L}x ae:UiM Ģ HШ%hq$1R.I^M@%6:6(&PL2rri Fq͵ckXGoN'c NH:JZVxK9]MԄl0hhj8ke%P!MΆ$fJ {lbg2^+]S]8)Jd5Pt̉iG7R멍19S8!ؘOS媛|. Tr읱:@IP$)|Z xn͸STHvrUHZgP\ ;`mf*ʸGr m8pnF ^o&#tUwՙ/J2 l%k=\Uy:}V)ObU|Wg׬CpA!g 3lA8?wjOpɵ8<`C~|kY8-j$T{7ˆ7vkOo%ek;{kd8q$z BѤuQ-5)J @t}D?'|k@w3Lte)}!h]CS>cJǓGFw齻Q^Q>&8F5kmЀ5Gu`+${}~ao\Ns\ί{4O*AP|8z21~8o4vxu k޵Q 8x;ק XRڳ`"6&FT^540_ n08+LjkЌX" y+ 6+\,2Y̧. 
LSnyo1) Ld{!LҤ"HLdk :9 4I-'g-V!P<@i5iOf`ӳɇF ҭ{צs0$Z: oSDVe.G/ǟYEMYX+^ðl]f&|ӇWg=Y<8 ץ IR˳͵{hE~ŢǩnRCi7h&Ա\hP!r$:0<*K=Ш;0!N b"u™AskQ&^'lLj>m@PY \r\l <9GF)8s‚Jh #֣m#)$ce&&.S`)}^[p]>jƚCF||.+WF82kI\|z"KU6lDzNWnʱ%|tG^H9X\ovڗ*ݍΎ(=u,pxRU|\^&]ooxQQ77Z]Q[IJ)Dꀈ"5oMig7nCJ+Tp/q$ ;OFü9 %Bh?TϢ>uRH*2v߱FIO+n]VI\!"8~+,Ӣ@ `}p Aeӗ/Owo®tNkS-CͤOj&ρAY{1'o=}2QKO -<.}- q*ismc& M]V%jB)եB^بhC+c*tu%:w=}Xdwi%@e Ƨm@q[nkޭl6N1~ԠB݅ :,9+Ponhzss TKw_m T~nV v9=6 3߆dd  wsw#QFPe!I1"F!zkU4ѵFJmNoX}g/m91ڥߖjsg} P6V'bJTcJ WTDZ0ZM$).`kaO"̄\Z6lx&K)0]oh fkCfwxW'^a99«{wsj0 ld@2١އn \aV*3d.&NUm^2hbagL"ĩ%}Uz^N3kΚ|ZTӣaᠥ30T2#k2#Brڷ9vͣ N*tnV&K(]9ڑPz%{}Md DG:}?U-FM*wZR*ThB3;Jwut!3pq)`G'*:Z8H1IBJZ*ԷJֲҹAR8D`3W4, 4X̫ʱXfK4irϯ}Nח!<ࡈ2pmL?>=L]EQJgܨ)yoL1Em3"9ΫB=JY 7缿>]KVB?^Ez*UVGpY·0]'GgŮG{ eU~sn ĸbgL `J=ڏ:>B;,amH5X/!&'\శ Az;3,$" ~׏jRtxG +YhaZHh,m,UٍBsu&P^N+3pJpK?+]e昭S`@5dYH a}ϡr D3*J-Ne]\x;M9̀}͊3\kd5ߗO.r򓋜"'?('?Y'pMM<jqjNSCNio@CK Q* _s|)?=j,]H^*_Ykti:ks4F?_OFסu㻏PEՕk9 }".%%5=đŎf{s RʵeI)j͸ɔ d/%gU 3e#g^0JRKmF H@)`az&jNN(~B?%i$U?*a./fEH}߳>;βKOZBa|u=ke{; TH%Dsl QBZO9 8DIxG'%=N?{WƑ /YPuCL8vf-'uH5HK1}РD5/h-,tg}WUe 7qK)G[n hr?> $L PD4DI$|FkM$c#isN!+'Pw "!tXBܢJM+r0ŊYUu"h( -u*% s +# ŎkV>;y({%`pkj] +4#BP2 mBDHtAզrP&/M$S¼$2Y0XZg%xSz\P*7`yHeSGa#(,V. $/9QM k <*Hp[$GXUqLA13hk.4Y$/ X '\pѓN XorXTic'Q$/<!$U :$ 8l=cd7Dknl c 9Ex& -;3K“ס|6Y ;v֜&ap7wb4s 8RjEiU&( U~2^_.s P,V*G=qfNH\a||^ewߟU,Vx ev큄뻋3"2%/o@}5W_]n>^ot5A+XqvR?s -r+{ yP=(չcC1Hj,2'ZI3ڛS1Dy+$IQP(x$&0n(%#$Gv7eh !R_( P"%^8Q@D`F9SR\+5e \)w)G!l2Ũ2qC ZCۡrcQP& 9Y^N2n^-(!3Z31QgR39Lh3G2?uDWT ݵS@vظeGbu|`XvXP?ϻ-WsXfyn~0sp,@p3s5 fvu?bqWgp+Lgw[K.e/c,|cj,\5ܤ'o+R`F7J6L d?m3J%ruA*GUrBP'{~noF?h#QJ/MvD)v훒&0›X+%I{Hio Ng }s'ivSpUvZݛtutw1U_zՀxA HגWj1׵4?5͔3WS>^`6IB΀޾sp"1C.lgvKҼ L qқlmgr^Bl |7󷷿_nWd~^;oK^,5ΉGoJw>f '?n_&4s{BJu,<ʢ6faIB%u1X7ˊv[ۛ =n,H4Ed@HvB!1z[BoÙۻGxnE%a7F]d{dQ0;UL85F Ήh^Ǯ[?< xknE(@|L-Y,R‹pr&'+kƐl@@èvPͻGeA.ior7A8GSQ-4<rtg@" Hi;CQc7unj5塈cL4^:6oC݉8!%MG(ČJQ5:OϱuIy3ÍcQ1 NY4x,T&2A%LK48u$]&D15G3̬;+xvr"E^D>}dSד1ByvTg2GѽdQ⍡1Y^^ fгɠRziibd@ɸ22B9D0iٌ.gYg9@Kx֔ =x<ӆB&F)b{7I_P9 CC upz=r%83N1"Lv{PѺfukSpkA) jnFx1d:?4xĻG7faZOQP+&ѽ3R\EY͜5l=sǂ/{MMuFDp(г::)$-;3c}&;4mvmboq¸ձKٳ˙$N{7k:mmBg5]5(wvc߆(VZ t!ٳ˙0QOcUqx5cpWya=[۠UZ$)8^jB8ư'6CbT9+OZF`\PO@g[<+cM G1 w>תg e= *7Ҭ(xbyY g|G@:) e+9IYB"wLo^ Ze#}V!(P-Ċ`&`T 뺸0V5]8D1$y@`Q{p5Ø[^pШ`YjHrnWSxռ&gdo @8)ƇG_}<}۷ĘAӇۡUbػksc>p4?*o_l86lT㵙 hZ^ =k]LFz b1xs H&.NQJֽgvRa~=.v㸰`4S&*j(2\s)v X]G5Ț탬) 3&qh;NQȨHYXNPSB}É3I)0KEwS|!Śy3(q(̋LJ6+FGU-+VezGG7b=+u:^\#9R=N_ u+i݇<SCABV>/?OxAɖI0EZE7_S!/) mFY"'9I2'^cQaצO2f:bȎK -, Ta#$p Y 7 ? 
?+ŲsetKwrFQ.b)ޭ8֗MW|M(}<^1$,Gj`]LZV$ 8Wj\flܬ#2 xbhD;I_~0}ݰ,e3`Z(c:!8Zq<2j/9we 9F`6a, x> Y902>K$0#ք4MX.Є 1Bi3Y-5K,InUzH'w G ;Ov DVn̮; 2Oº*;@תev-y|\n6[&Avj.ʂS{̄p6dX# X @MA R (X~͛v3Ogլx~uR>;z1fD`)G2NYz%>a hmLzDz0dnjBlIij75 4m{5Mhtu]*/nw= HzG[<]}Mz~с$v/NJ}ĹJ˹nּHT`y(ڜpm(;2^%_{*tuCHw|ymU*n[r_c|r?mF %P7fZ]v!}v-8~7 =/ r<ɷS#qiH\)%mNIh{ѝAB;#khx%(-ѸXR8@\iO/[arre+8ZNú];坡 DDbz#8B"APp;-,Fmq@,Lbi)F#;FaDmϞ;,[>[+,h[Wb /.ΝJ<#څų2dBen>_vA%^Ǒ nÆ \O?~\8j!l58c(u8^kA6WZ:\An ~>?{F^ogm` ɲe$Iߗ%rNH*4q䪣#ύw_&v./UҳVrpgt9&(eI1|]we>Wfy-AHAS}7r:!'tdS2waYs"ɶB]$H̗=ξCS7?m.P\#2[Y3|MJ?߿]-2n'_F#4^ebo B|*cno#yK/ĊCԘ m.#4 tF(7AuYNUR+4lnʿV%Q̀c:a8 Q'lkلaA 4o5\4w]%f^ )[2W5 *( *^aYjd܃giy|Zqo~zv󾪾D@S޸,(G, <$r l/fUY2!/XdBw1"RW/Z2+єkkx!9;!]tub0BuڔF ϶Ƕ 6=k'xѩ?Dǚwvds8FnDS jl<826V'#9e@v֝h>X$cȶ2AǞx~ZNhtIO:R(MKty8I q gqYӣJH)x|oC<Lms#9+\bQ U⬇./uwN׿3Tg0^OםuQ9Dȷ.js4־W@BZB(.GqTgYJF*1-U2YCf0dSvi{ZKq 0sF$mrAfhjT6l{'E]b𯼚N#gM 'KJhAh3 xl3T՛ G( wNКwO+]J$IlUmЕ]np]MSxps3vۿv8-ͤ2=Z+r .9AHl;^XLjwd$5~G%Ϭ6T-Mo>q:.= сfuU5nZh>"#hY68bMC"=Az?ECB k|"] +Vyn$uBIHvC "8Is8~5TpB TI(J)stuV<ۧ('+580wpsBT:#nH>Eב*i)&$]"uf+KH| nc"b۔9W*s=FnjI'%neF i$x9rB;sR`qS% |@ΣqXPwMc#R6d=jwcx#"oE s殶jJUJo`A6.x"䋖KtڈwyHrd+CqTjK o Kx9jSc0sDhj 6;UPƮ!ⲛ&*v%Ce@CUܹb٘JSzq:Jbuj7Z" }O&r5;'XU.>i{v5V4ViZp&pWv):z~ӹqG;Q+=a1>[ 1]A(K)@ghSQzD EPDS/殼`-∽hD©q ʸ$4aWNflAekBnB^O<9R e*N!Fqm+!;b^nhn`ʨEBRY'Rs݁{v/RQUpB2+sKIrTg| ?`qI:+!Ү} 9"7uTH+v{L0f%y;ډGUˤcosL$WxBހ''L (艕 APBl9%8;u'&5yv/WUQOHLe>闟{^)#^GN[2"E]bkbB>`+ a"myάђ)1s;WpnBlArerz[κj?hx=z䅾~+ܻFogٯga| t}gv^ĿN 8j4`'-[6'Ze 9UՙURbsэ'Ϲ0C^RaFyV^;`Sg+U@)ƣ)iVgkȔօl^26 xJ~҄J=l>rDztŇ%C %5@ 㨩SĂNʖ-LN4_ Ȗϱh@3kE(m<ry4)l?8mnvȚwtCN^N?l5h>Ifks[JcT,N)Bk{`[zK\24-{R+OHT1BSm.$zv]xɚve Kv,UҘ&F*AI.fyq mqEv`ekaGڶ'Ώr4&E֙3Iً J. 2zk.v 7ai~܎w'.#ێ]a#w헾?}?YQIdE7%bAʅÅo"o@dt82qGRQxø~|-}}pƞ{ X5$Р h6o2Xőx"('!$G<aȧ ?y&*[4=Mo5r7 $:6,} 7_&[;"IZ(28ߟY_$B$ѹI.)Mj_[co)+WNIsk??)ބKKߍn w?ޗ?z?k{n657v>B/{c HS"n̺B?܌3[}3KhEiIH)_7spYYȀݣܔ{2,2D1Z.r:+!Gv%Q8.J/nӣQjP"2 ˣS8 p K|7^ş2X+oڈ]@`G)mVGiF< "lRȯH&x `v[RP#vP SHhнȽY4/1`ofMANVC>\$i̦ 0d!; fR\^fIggFѳ/Jc:!rsz\P41L1ޯBR!$ʫL(ϼc3-40GNAnʒDƏ'vm6|* ,< PBh$ b6j3Cܽ8hZ ?䤈ma~$@Ak:PnW1t\.A\7h3[E$90euw,?Pİ|u RLIe3Ի쒣]2/$r jfsVPJȑZITprDK/E4+ 6(c\)( }Q`ݗᅣ)>OKIj˕Fe喱JZBmb>&T"!\ Cz.zBM:j2Q>ݕ WQtJn6ZJ4:dD(!S5ڨZE ]F.s ëBZ7U8zIk{!vӯ2ػ^x;zD_gCH쾙 QrԶ'&W!-:6R\O&ͫ<&/1#N$@  y+R cxFQ@);~c<6eCwg|03`و*R*v{AI3jNDm\:Ձ $=uTއ]dsg]ao>HQ\Q_M֫ȗ&dISo](p4>6.5üR5򇪱8(A"8z(yee9q~YN\N\D1Z! EQ乥Z"z((RJYb(備 NDZ=Q|e,82^7a~&F1ZoOk9V7N҇?~8;?@qK3i@jidtsg? _U7e3%\X.0Vh0y! P"o?1: H'˱d+WS8I3$sl4/`9;iS5B`TxihboAZDG8$mGDmk,Iޫ9>z [ufY,5+feԬ,59R` RP7-5%cr. Bw=nh}`>̛Li'Ңh֩oղ'M{(y卲ۍc[t<$!yx"i K"'"W60kHdWrQ0`hQ Ö KaW6,z:BPm)8xtXP!R f! z38><}4{-h4}:H iV0I׳mgP*٭78&Hϯߑqg`Ko:^\@3%֭ ͳ Px/7z7X+dXP`%KϜø.*aG̥%Lדxeʴ%7mbХj~`bS760iBģ:_4"`υG8 ie)KMиAo]7D"ǽL:qOFx11)<-`\P޴@ l7

ZD2tP߹+r;w%_;g52ݻThEvP~oױSC{<*R粞;lg67g.*D0r [z̠]DJjYWG +c qB|~DCˬv1íh}p@GSgjjw/Zp3^B}k 7;z>_Y _C?%l=S-7R {˱I_Ye+}W;3ʐvFӧ9]oً4k2i^n{@QcLJ}{ʕ=L' q.ňs45` ɋȋL^|:j[Em/!ti|. HPiixwhӔpԼUk))]jUXJUM{8ĢOpY. 뾜O9Uor(I\" y4X+Hxkç0me||rė怆D`N(Q)J# a,|`Ganm.S}KX) W\ :ɡeV#$ϯUv7M&Q^%Zj\c]3MYUOJc,!z|2Sq,*p/XE9{9Az!5dfy ]5g3~N (mrzlݎמm޶yB{WbfۭmW^IkS/ @̥X:Y(FCAHD ky +h"i|Mr}P“`zqLzPXRQQN(nRqIc94'GE|r#GAÕC8ஒ` mT=%ևz,7 ۋVXP80Fx\ro5bL:!lo"D* G2%G k?5@k\()W`Q˶CRU<@!HQ쇂rC]7L5#!Sg7tO\W"pQ>MrǫF|-T3W4 z5ά#+46 7|cnuu]H2 Ee>Ux.ݎZ}|5#O+xf%Lc㊊< i+0"PAT6(1(ht&k xwC]Q̈}hQ pDpQK!"DZ=ͮR3]ZcMbfوpfM*٢)=mܢ)Dmof 2rUU[gE㈡ƮUBav޲2 [h[[·;EYʰN4hAz4XD &d &"B+ k* L$㖍HgJyZKm6 A@Ʉ[VGe+$$L)8D[ʏUICAP^o1拁@OkkkUR ;NN%vJi6.{ڲ a;GHR#xJp'_Bb܂i&V͝"vWbÖflJ>T0Ϫunj@0C>?۟cz;rA~GafD&EU7G3`רz!㻎ڼVV~HkNusAf)|) JŨ(hVFXSn$1[q/FXZ.b[Sr#S덳4{q y gr3iBsIti˼iSr[H[)zVdhBּ|F@¥B!ZԊ {V8-B^F̺_(H\yFWjG")2'ms~i$t\Zlh#7UO #dV`2NOk[Wfv94)`@&q"C-IP|(V I|Jo`o*tbN=`vz񟽡¬:G{kՅuADBtڗ,xyO^+"݊|,cC&h$A/K5EcQoe@f m*|腽xnPq߃ڬܫL /VbYgrG1Bfrѧ,75,I}dʘݥ+tNNI]q-!6$*}gQ޷>As &5{WK+ zglvg!E{>fk%dZe̮-/嵫LA >]+!\ez!S~萿QNh\/edWa>Db+d4k}~WAnsj `ڧ!> $UBM6qQyxP8kA&Zcc1B0iJk0+j6t Ҩ9-Y[{r PT{n'Yj;o R|;Êņ#<~=Xeǿ"%5~6b.i]-+kl"5qEm?V$Pwd5pv<[w/yY |6)z096*8Ϊ&{ARe:J$dŹ|>Ny†I2剻8nrՖ\JCOˬtI7L,{cnw&f_`F7o> FF lݴiP 4f=6))w>⯉)A' B.`bݞSxHtOF#(n,Cg|0mzT`jzgpJf#%y2 וU{A?ڠ$۞v_iEZ Fh w o~] ~ꎾL$ŋۇ߿y`WoҏC w3r=x0w_p_N֐2b*8mɢŘ~b^϶y̘=sZ.Lu4|]+a>뇹_ DJM#r #_2O1 Fu mBg*fAUH2@e+TYO#EBΜES\]лl ]ؤ6|@fۘ dBؿJ*Oz JNJ^;`?&gRMod/]^j2^GGgEaa:'|]u6DIMbsWf}ήJrxӫ!D]0nMg/]k=\ނ[ƵaB)#lɧLfXk魟f):Y^T}2TFQ.t'rrN.umr &9(ueBN]Ae U@Ӵz!\Ce2:kMʿ̢ց{4y*J8Qҝ[j>T[5ZZ Yz:J@\Rj1 ͡XuiEVsN $>L;qSJЧ[~ȅhCΆwڹ3T!U{&Zѳ SBwVkY=m&"'&E؂~8O9@M!'ہC^+My`?`⒳]@\\wAZ) b {"6M{3%`)m9_ w%RD1 Yٗ5%׻cbAs|e_ImqerKZBa)&~R]L(D̥B, $ ?|5<\RgNT ‘Mb0j`1x[={E{1#aq8~H[|M 3\c Ȕ#R,rC癉US I^8z ɔGs_+p~t77Jޒ1eރ*O\_<[mzeK:Pt.s1.-jխdTs,-V?D~hacG0y(*<Ǔa-"h)d 5`F3#sq φlaBi#MH,Sqm0=K_ͬY {@,tADws*ea?{Wqd _Zy0a-yjZk FMv5f`` M7Y/㽸WڂK|č5p-[v`Z"(b0vrsyj;Q_~-z@ HӇ E#Îw#=*bi^"ezU[<*jïm S{֕. se=*sgz|4si\^1f^3gzDSQclܨ&qfzsP@ c)/ 픔G/zxDjyD@cHolءGPJȺBHù{!F2uVR.4zp/n7+!& (mucGO?y}+=\хGRRӁ[DʥڌuNxHKNĭ[_V \h!wֳwnCů9iUN!+Y ,st!k&itתC tLKCބ5y9w&x?}25Xae_.2Ufʌ^WuFz%.XBoV)jЈ/u{A c$z}!EZoۗ(KNF8YcZ)n}㛧t!ouwoH."w|zE^b?yT\uſ[s>oo1x@㋭!* ,Ӽ *2 NDT Z)xOێQ Q@8b…@?h⁉%*JoEs' B ݡ*Z[+YDu:l(3D ×z.΀DCp\j>a;(:$.p瓢&m%W8}X,X y*hS=NQ&/ZYx6Tr`|<Mj 귒|銶^upth3E pLbJ0L1E $x*l%٦1Nj;ୱ{~Db>k( v k*VVVqrchZF%1As j`H88=K8݁wX@rngeW \3 6iFWķ5֍.~=߭?}#Uk4䮎 DZYgu?o.jf&ڗ.X"ŝ*HV@rz(ӓqU>ss}_~.9g1 &L3%5o3->D9ZUɟ0A|K) oOnRM.Ph(0u1_Fe<ZyK&q\br\mGvQ 4%ytuyOn+chph QbT0+a`J 0<J OXwGS0ÙַShh022ŜD~d0Ab1,J`íJ̮Xz.޽:,C#CjF?NIͼ$(L ԒGM2 A g2CT p[@~q4p16z>N%MpflcGb/w~U' )Y+of1bx- ):¸b#xTožW4ԫxGr~ȗ]Mtz]8aGp֥#HU6Dm$98q(>hh e@jZ?}?#ʏodz;5@GH>1sa|i"wbLD8k6fs0;vٞl3]?$'2/~ym>araܾdZ3WxY:/BN}67B@Ǖ/e_|I T9Ҁa($oJVVY$h ʗE}Sz :5lo+-vl$z]%AS+ *fY@XLJNgr(]4Ք fEاw{R0l8_4?^ST|8dXb"xԐCYOwN `sWzk}JS2}N6%NCzF"lѲ:UMzKVw ɞP?g"o( sQ8 ̫h:3~фfsqWsqo>'fAi sVT{ɽV8V>â*s޴&*~ۻ(r/(&Xi91~ pIv 9.: RwPs7(Ms7c CRzX;/Pl"m_(..@e5)9-cNDoGeBtz)HXJKH g$;qZ.) 
:QZ"EmD\e1P87A6iMk=]ç MRr=jwBțw޵iؖ7åw)>=ԀRf80q)aueNŕ>3v-BKڛ$zĕnnSLXZQ@{%weKNj(#jF;50m<3/^t-ŌJ.Ug^ѕ:s* b(U:!ڿ9/.Cq0ΐ+Ӈ{2FQY$BL4asA^d uu8No먔pT(*(}FYD;T:"Tb^l,4 jSye}p2~ _]q.gʄ:9]ڵ@Eu@48d% %ˆ,E 1.1GIF.hLDJEFXǀr뒖) Nb<:DG(NQ2!{jwRyV$q*$#C3P62=KZnF׶໙Aɉͽ]7AuIOroY83JpC c!SR*$H[f][fP6aP#aoh6f)aP!NF=5ʷpbRQ$6㸭PAwa 6%툮NQ+Dgh%*ׇ_7x!9`/@ͤgkmՖU%ޑk8|&q@0un{.nт㱰L?_oWd f*bbbeRY-Pj;O!Yp*yY='߀t9'_L[U .[' G8T%ͧ}$Lt?5Px6|fSOG;YpNR2#:J1y 5 ǫuE(&Tvc"}sZp5yuƷuq@ ;)P֧X(\46i3cUN~W}ja"(QV?4Cs5;]bBJțh$UVyE5bܥOvoG6BP1@80(Wj6}s<"[9yw'@ؐkS 4!)*6>9C( .a6_޹ J8فI*8ZbS:Ak t'2?a/` i+Z1H:/r?0-EϴleBohֽ5v/8jY`68A{ᗌb}N ~gNwdr< ] (Sc]KsQl s>{jyAtD ErApԉHtJo1RAEy˭J`H9Sb&BoP 4OIsKf*y21nIroϓd1}?e^|m@ȧW;4RT~z F ĦʉS*磣A[ٸ\+ɘoۋlGgLfaB%\R*frdv}ûr&j LA8vl3Z~⚕Ͽ%t1Վw,ߒ2jsq㛤 z&n"iSVIQb 7lạV6FL%!vaݽ2FUT$})=B@o0ɕ<)Z)]LƓbUr ^p^~nFWM<ʉq7_w:p.[zҚ"x֗@̱\J,9Ӟ_މd%8"ιͤ7GyDa(Z{hGn/c[SZsh]$FnnF0 si4{15=ۢM4C & ȜG4G/}Zc4JruUSCN' 1kb'1:NHCofD uolpR49AO9 >83$G2" 7JC$KY"FrYb+rͷ yO4S38xps œKs&4M 1ơO}A&KYD ,V^fD^ŊZ׮M(1m6Ԛs0*ΚV^$U6)bAy3 R9&KctT݄YաثGG^v"u ּ.spmWBq5 pK"8 g "8qv"6SgPAY?]%%m 2(l2(2+lKuy('n&"k8n+][Pk{h=m -7nI+F^I6 O}=byمARTG23߯nR7ҹg0H&WbYiTtM:*pp}i`*(pI<-  EZoݘgJ$l֠QdP.}"Xh =/I?-cWIJ Fx&j;3+\qvCi<״8mѯd;*B ̺lMitE(A&c>8kAO=:e 8{. P MxN`hsx1< 9l\}y;MHCd t2C U{UV2UND8ѵ;re٪h^n/"*'du_Oa<\rHPU JIL!4:M+]8t,= ۀ9$8Q'Z͠'I+)f)NlǷ_c$1_숦IJ`6P͠*+I#6y槗Ct.|^h=8M"F6Fk!m!ٚН 3w:G%h=v`)F)ӓj7]?ۺ{gJSM%阚@wh |*%&.)[]5ݾfMv" <&utgW%2Ɣ!Lx;&\C(1D^؛".$WOcqBx턱ՔcZ[5kMkj֚pk-iIsB)ifZ?3ꃏ-{vOKv!&Ȯ!BV5?=ا5*jT(2ZY#iC7 -JMK[{{ t#* )O&}; Iw0rm^C^Cί dEOq(A9}6L^tQ1֥sPGV# U$A!:b {<!BbGr`2hRbhE*Qyۈl '8yצj.ѽUoqD\Z2qT*A \L ^b0R mՒa'%%|RkFJbR yG@mؐ+戻SĆ@ϖS ` %)d.j"=6bsmEN%F\vsY7iYQmnIFFT^׿y~+S?k@l܇d<]deZ9\1%#N[UaWґB\XxJ^)"\WiE~S"aƛ Ԋ$M$O'3컮;G͌z}Ǥ u`ӝD4TQNevvJ63"I3dr<w/1"ε>1p$ iG(\wcD5p:9PD5H:F'{k>#U\ć`/LE}NCp0 ߉5}&/ɻ=F7R> KNdH(]%"z"4l7M2*a`>(ԃ` _&a[Qq=~ ;r>eS~[QTv(ad_I.%6iR嵍 -(,Qhv Z7\nf*fƴfGKBi_yů#uu^ Zjϼ\zJlEB(j *%1[™9i{ԏ[Q,,{KrԮG5_/e[w_B]}~%|-<|o:i~ޢY.z*|I>^u&N<U8מӧ OvF(ӗc/ʅxO|bb f'ZcR]UO}@F?ޗl_~^!+ÈӇWư}yGʊPpBB\sQSkv5y%OMӇf>-]D]L+u[!F$R$H Kb-yQJִ%-!NV Ԭ\._^R48)s&sC.r ,.E@(‚V`Y%%IꁎQ(g{"ks}z!gq{Gl߀$btnBka١jJԟJ/eCT:ROՠQJKPTrŸԟj^j 7Y@4=DgaHIm9OPѤc +RM$fjW@2NJd7j)1!@m7k8 U gcK8DG\+lbbaȺGءoeP ["%nk-ԥ Kpށ/}wN i<r moۋ/l8vGI;Ba AK[0VZWzOj"XՕ%u_UEDH2ċFr.ҝbNsto>QLDZ*C02<+ KV#PRZ(s*C\i#Sw2e sR\KVu tP ,}?S 34 jj}Rх_+֘YfrK06Owep!:Oa@Ib b/1:K%ƖE6q%F\#G pj RU1ʂ:γr`'"[5l4oGO/WCu:%PK$= Knz5gPl1.0^UW ]z-$&eU3s{dP;]2xc0G3q1"UP.}ͲOHt*0:Hap5)d;tQ 8I<[^m-Ě'z3Px!]rMНYJNiSwz(nݡ /UJ^l_.=ȌQ@~ A&_+%l)Z<Td`G22/}[ " ⾥_vOeBA+>v/O Z '=Eyw(H[N֒~Ttqw?t` )i3iUc+mk(O+Z#VO^~tu|=BNx=MOǓL2CSd) i&ߖH-B3RQcp0:@bY?OfN109w:PT fX֣[bK)d-IsZv[n~V?}ϰbBp)b^~O7F6GX=xQ.}YW˓fjG"`8 RTY>׻ʩ8FϣNlr"1Drs:iI*޴v7fRKaQVޖYxݜ7–/xS\|%<5+zTČ%͕l|>3u+cϘ"s{$0@/3 C[7$K&8Ap_]^kXIcCԴ}!jm[;?]pg$Yxк ٕ4W:Lz A}|cp-r=ѕ,u5jF,>~!Y/Ob8Fy#BV*{xt/ ph-OsSX %cjRl7CS:lal*.RP: Njb&B#Ӛ}I} }&R >QϹ:uОQJINr(dF=; __:)OV88 ;yLV:sr8bc4"k}:Qޔ %#o*)a*0 P9+y关CU  ֗Tr% dNb' Y9D{]5B4*%c6aA@@1Ba.QQS*R.cW  ^rq̈́*ԳiDh)z!) я5L  8PMY}?+Jx^M_$_{GExVa6ߋ[lAYSG˨ '*+Hdh(k`b)dK9\JɌI`~1dL 8_E@ӆ#H${WƑ {s'-׾C`i2ǙյLeHJ3$%5ŭ[-]}U9u7UHcs0bN%7/ i悸'կ#XBy>߈L_>x:3d͊|i4NJW hk+yK7Kx?ϐçttus=ߥmRT)6xY6*ruEY9:ǽ^,^ _\È8;>L vusgxye֭zsyHU ^BCW9E0-co>R*h|MawM89|q~4kЀ y7çQ.TKSj, %vss\sOeF)V☪'qzV3^ͤP!OrHO4AeyrwBdm"GkV8)^9eB:I<9-%ZH[Ի5b~ݧm&O " I$΋y c` A{)B%삪N51HM-Ρw/!77m5ۦwzW;C0-MۋIw2J|] yBMXPAga(@jb O^o8 {#ra LJiOnsަ%fArK죟}ܰ؃ 9aT 6jTԩ#~Qn+7BSf$1>f'1Q1aZGǚׯ[q}28hQ\{2"2q. 
սyM4͊)fU oGwwp'wEP`?Lł,_׃j`=뷃t};e WV+x(Vuֳ ЂBptsWuj)0ɂM*՟6Ç?;Zہus0'{IPۖ-'maٶȂ>l U{WIKq&tklhRBݧl#݀'ia+ ъ$}>jTĒ F+?=[:fwD(m&)Rm `،JjX7v/8zWDd'{0\o%]3g*7Eo[mC[||ƱBz' ) SkOG# fZ.DgrZ7ݟGmH1kl?MJ\s-#j+Q*Y֕eUOAPieǺ gUw30[ƿ\))B 4Ç#_ ]_Ae[PY+ s;#F dHP"cIq (K=F3 f4L,hkOX(1DIO MGAo}Hqo+ebL*S>#30Y1u*Fǂ@+b,Zk)!Hctp@kfRGi L FXÅ).) rq$W`A:,x T Z,BuQ$^f{Ӽ=H7WY;|q\\?Ƒw~H&ȷl[~U*UiP *Eew*-DJ@őPhK/WfAia`ŧ*AHR-X;1)1}DPT- <R2Dq[rNc*7*J6DbXvbcKDhP</] ҁ.őDwZYD2F8MsMB=!PmB,B43m3^TG䈉u%#&F+-D q(նw <&mCDI='m5_5Z 3gF\[AdŽKEőBlMS +Op # sQJ=KRyU^F\q 'tjq>ӫޟdžR=:E+PzClƊdNPm `+f6[P Q oTA҆ˊVPM=`UuJgl9q1 x S*ք敞e`sښ$,Wzb6ƤH>+5S=bWudk6 D8 y۶Ƞ-OxF]#(M!`Uu7/7Ӫ<~]dh t\;%qs6=la|5Y|᷃m_,C\ g[tm4nQԒ?8 )˔" n؟= [ /Mu>T`5,1N:kO+JDEFs0no7nIpF0{}QkϖMpdphT) d6 [1ry#V`6mQ #g1Ax==xpcE [4.0)2WVk &\uQ;bfu.6'o`>d6O?&|9I?sKV]:K K!n2B36T9S#CH h.¬SS%=\9W>E\L&.QntsUAogfv?0`"bs?P?3>/#X(#WdJuhwªN0YPaaRwͶ] UDף2s\fN rwE`:8l`[4gKSGRqgehIP9uѢJ?xK*hZT~`6aXcI%> ->.bk]Š(SS( Ҭ|0g`)D+D4GgK0f Ub1h!R PJ;p ӎ/KW @НVUߚyAX`Sa M-p А,5Гu==km5g @i\@GJnR-AR^ =-&f8n8LLO KlN8w8QfG2Wwe2!Fj8e R"0=&ayP`2*E2KArA]lA,JR)FJ3FѰ)QTy⃂-9 BMXR(R(XFP+.`"=~9 . jr6~k xy/" p?xwd>* O_χ [a}4䙹k$seO?LgK">܏ǣwC'?_36ߗ&;K~n[6mXo "ϐ20.N~nHraL-J{^ymbǷWwIO`]^2 e46"=8&~Ws= 7o3y[8OojZj^$$RJlJ7k#vr|Zo 17.ng42W,K .VZѻ [OGcrkg}p+Q7NwxѪ"J7Wu}4 <`TPpKt[u6JVO!mos|ڥ3a MSHV]>{J.9io-()W7k*WA ՊQS+^yiZѪK=Y.ԆBػ|-d:(& B:p),8c[%w>JJL  ZxD4)Zh/525Rjkn*'aRFFYzI,ID1 (W.4Xz8HE7xS87)).B\j7DҔ.J#Q p%HTs*ce`)Bf+Hɧ"cUS`6Rܝ`b J cf8K42G r&V2ʥN-"!QL5R%fCӂbU R۬S>dFY Tf̤vKL9R mb}S > A|Bb*\ٻF#vP[]^@rw1 JXo7_5E#> 9ެDT=(}JʕL;y.I4ِB>G#yy?.PHI^XX Ы朡fzE9EfuVT4 $\boA(u2,eʆh3Jy̙m47]"XF=il(e<> Z%*ޟ0(ԆI;:@mThS!ӆ|FdbSt$k!4МDX=fQIjt1>"K.0 F"vs346RQD X %IoW=Zu~ٚ,.ꎻ" ; M?,Ӥ@. _.+-g~]3M)ܠֳ|nXo[.KM2D u&<*"ǿ(*L>ޔcCkUyQ/vQ8z5MZzB'Ȑwi.`B1ˁ8%U]Fԕ`c |_Blu8b=a11'wwd݃|Rv NmڨV>>BkzmV6 ֪yP0 Ss0Q5tYe9N8]VӋUfI\&#)L Yga. &j+] (),GVrz[J}@cg3F* ɀYyZ,M^̕[٭5̠ͤ:ƧV- JSU" $E;Bɗ t1N&.vhNB]ݤg؛ؔ?]ͲC}w!1Wo"dhkXRZO6zΏ\ph.%0lk9X+/ ݞFiLuoCv73bK{F:#TWs vN*IWh%#j?!CVs9S+mÜtwݢ[)4k@ki w j -"dbF(6 I'5LRu|ѹ,p\] a5Ş ՚b^:=H,Qs^]O?K[loVV Ɍ|_nBq~P'//tYU˕Kz_dm{ >6-]gMZ{gjtn< Ukw0679#R % (9\ _?Q 2gBx=1 }_k6ٽq=P.L8򲡏ex&Q;DŽRi晲X;OݞyTHrOPȑcGϕH.XXБ)+h7Y @Š}F * &[ݺ#V2eVZmqt?̫iuoк7cjF)Q௯B[Ltas- S625)Z/)4 M ~q8-9v7(=+q)ߕACwjPI%:U暳)R[Y!{]{{l$m@I%HaǓlՀ6ɶ &6ؼ* U -*!\pr0O΢7\ad:a0F>i\15 ԡ+RB'4c`FvM3>x!&ake!)!u;<C2?O6ç xl)uXˡrGh'輁€7 Ewx؁R .mX<8dE -h-88c"v(;{.Bݢo1mVyGǫjd7}XYp k/L~gF `|^Ov)<8h~5Vj?>g2:\rg"-m &tI d(Iu,qH3Jb#M-{+haMMQ}:戶;Nk:lFi=FGE>YejĬ9g9)^Jx"KRC6T`Dnӿf^ ⬝mFeFK5G/ .28DF.R\ci3iJf %Ej@ ZRj&^F~NgzWЇ/%;6|P&ٗCrG1i3~Zgd-s~*x\;BW7{>Hƒ㪓;F~2KˏW.lq2chGrz=Z/[-2Udm3*P(,os.9ͰeD{ UaN:uK~-3H=#{"FYۧegX^dd$ WI36 %ƨ a̞txP;%g1}(N 8eV113"$=YΔUVYنL~:;V_k,TX+$aSD {[S-E{/$UH캂nSʩa(p1ͥ?Աj:Geצst/aotL=aʏҤhŤu98*'?4JÙ 8Tps Ꙓ)}Ip嵌㊄ZcܽrC䄔9w'- x?7;*y?yOoBu8&WXW uN|sO 9L4NmbNQR1mpDѓ\u] }Q6a8H1.zHuxT,KV<c(sJ_y[{t4y^a~4+#pΤXr?|qTFek(bo+_xB- (`BF_R-[9)8IU!Yqo9IԮ7.4f(rxjȹpsl׍Fw.(ʩn5D;'O@ &=bz]&b6i||0ӛ*W۝?*qAUk:=b!m<;Jr\vatWVLxkVďDu %eY4  / 6,#nekpXQuK%craT)D&_7&ݚ@T~|\Y>Sosgor50(#:y3z|?A$=Bm׈b*QerY<V4@?Ȍ5f TB 6j݉<%2" oQ9/7qH| yup#ۑwȻv]ȲXҥ;R*!ZIR*c< cQ˵4q>?gߢ`G1[liPt,7oSso |qt1ֲx\)IMzљo&]w3qq{y5&8M&}9 79~ G3 ӛ7qا+C%V2( %EN ؇ΝNTvjOGPRm`RRV%l>"qE HeydknURe8S&iN^0ԞI09B +(JiADj* \*pSGa#36'UB8d3 3%0ֈX1i05#RѶ NT|1ySڬ~U"AH Qcc$GSLđ e=XHuDrzi{@v,KY1")PpN$*0X~j$Yԁ]~ ]\/?EbqIJ[/~T-yW*>}.rrkfdL@ҖsE^ї[D /ݼNͪSw(vX,:jT ΨAO4q,<ά2ޜ(mH0ItY`ngilXx򲹠Tוc&9EsW.ĒTqhY^ gؑwy,`aTơy c*%.7\_Q974s\s,s[8ֲп'rDUš(!jBz*Us"x[^}xsV+t"#W+9Sy5 bTZ2MQ'h|ר MzqJ Hɘw\%V"'}UEU$\]c 6l%oeޞ O>S;]cɛK?o/ Օ_rZ,H6-}jHe]{! 
1C36t- Z?e87|ʐ|'SE,2|FUKIG=*bOq.v82d%f|jb?u4*8uTH]s Nvi C0{Nw :4>1o7 \ʾ=M;gf!E>p+:Wʿv3o9'rSv!5owo\O#X"Xs5<<|Yz G ]1O6k5iE{ $ Q,X f5N Nr#DSqɸs:jV@SyiuQJEVu $-5# jcimOI^;Z ;nVd Q\(qH/-HD ă lD9b8TJ DI!5ZB)=y~ !vby@C OTKHAa"&BD)7`<Pj, k1)To8Q)$ |HCžMDxȨH^\NMk[bv.K1l~ r5*NJeCET뭥;yd6jA- ӊxv=]HBΔE rNKjJbC!z:.;aGTu0?'3꒯%mygds393ϡɝ .᪫t-F klKZYd26Zk TH>XQ~oٱdfdƫq5W7;һLdzڸ}ߊ\Z` Ļ"DP G^QGٻ\.DMB BڕP'٧flYϋ`&+j=/.2oW1^%c\S>˒8.?б+c'Qky{6WMGhԍ f/n)R7\2y * @-EW_lL//1ַ_Aml5AL\ry_#"7x,0F\܇b{F0AXS;==8R|SE<\/4`:$*5LtNQ AQn Xn683}._Fнy .y&>Er0qãG3 ӛ7QF*=AY}UgJ7=j0MJe' \-!VLJpJ-s"P'r)] #i B%` ,LB ovQ!&ׄ!r[c8r[mF;&UT$ X%E R%& RҴi0ь%0IS EWj#לc%@LK8Tj.$G/3&ƪU尔h T4s-w)0G;.Ko 7O +?L>VLXM~1&5EH<p8q1}\]}Z77kǷJjushq-lf,4l4k%G{L wu@l}kc o~CM68zEkFI*e80G`F$JwQyIO~8N,9skNf+35tY6ERrCcъk ") K 2H(22 s8^nDZѻaϭJi=>a{쥃h\T`44m5,X,эGۏg!Ҟ۠V[0O (dQ\ $שܤ`2$Wd󻣨`}p@(֤1{AKjʩi5Vc]Lp pc0A3PPkkTXf%V6䰴2N* Ԗ8pjf5 'h塨 _'C'Ms7q>!6Y`n} 㻭5Ȓx=ZBmX}~'k)VjvLlw S)"BD[\JNo8{?~%C K`Le)Gba-2^t^HJ{;]S*VF#Vfz/. X_CJ<8tա\!G^PVǣO $Ev}=@2$m2m87;fhdgYXwy\텕h&c8sɌ*v_sƭ#eM"85I\v2vJ m~Q%+ҷ_NVQcS}{NPG ._#uϸҘxwfZi6 O,V>z{j~)nWJm6 ~ao~|[ C롷y\<*fåC t\?6^P5/ļ: 7ɨ>ɿ!y?g{7?l dĹ"wq  $7&YH^8EE4`ppS请ò-]a`2$vlFˬtV{ɓrymVp ׍ FͰfQvګ&F hLk~.:zMӢk,V^3?BP0u^v$dkuk~ѹI 3s8Cf /C(!6"ri3®//BSBć{Io- tD ԡȦi1moӴj.۸7]VB B"D#}Ŕue_?I!9 9A͒W>q-:CTB G`\~Yy .s9># CӆɯOd9ʼ%ϑ㞻G{gZiB)a>Y]< 1FJ˜w};@C[Wd^.U=thNtm󝢠h례;"Sze';O S -Bd!(5y,$̚Kb(%9Fltv<$%mo$1>W]x9;f2HDa6ajJjk2J-ZՃ{B8Ij6[^?&fN4+ލaCHf-Pgd Z֏ Ĕ1<02 pĩ+\ݔDsCKߗEK^ŭ?3\g&[ jY!|&)&6t-:GvBEPUsw-қFJ6V+_kkJF)yM!xh9.s/.f]t*U(>*j m1'~8s*E!EFƵFLj4ǧz4h8?f.^OVrڌMZ ͝{}_Ey_t}ҿ J`P(|SBk1$x-5S0V at _RPr@nQjlkm֯{Huܸ,}Ƿ&~1Ib%;&~&zF*A@4ٖqF&t|>?MA(6,8#\x{Bd;XwC3t݈(Yb; Y=6>#9qϞ! >?4 qD2fwG@ 80FAl1q]p` lPZ?Ɠsm'9hHkff*yz=Q͚5CDu:6yvE:bاǤSpTַKLհ:6Lx$5RL?A;cx(BICѩ$B|典ՅFȬ`fH**\6= lE9206\ Ü[D@KÛ #/@seDJ& F(͜{n4բ*0S5Lje",)&Ɣ9]@ i4=5B(u ~7zasDk+J _$=^J *B"QKCo)'%~6zw-P +BHo], U0T2RBQ'/, 08eIzԑ퐓jRէ 7m@pNNBe$-є+4 C4DŽ} ȖgƷez Zd#2ͥjLYoJΕ𨹨tB0eh ^^jk pN`K5p*a:wS `$PizV)nB{f-?vOy&[5٧K.~ZNTySW{Q՟w[ AV7oo~d:<)woz:3y?B[)EףwW\*jmSK *KW&k`G6OΟLZv[]Y~ˣ5diFS!ck%P H@߄xAwd<9!.6w) 'mR?6%ELq!zX\d AaDQHؔ^RcujW3 zJH&+;=+iPsaȆoaT'g5IȘ^h]!kY RNé }2j%MO0GLVNNnyV 4ntMϒ@5Zxf7%Or,Wt~޲;7@E'Z!u4,'#ה-Eno=- |?v:Oz ==|ѓ7M5`E|S}4~ھx^b*uB \2P1 ##A m5_d7h3<+R$8eŜ2K\z@~ΐ㈁t?6D ˼(H &e0~ٗ؀< E{NZ'OQ-գHmR6SrPjRB!\wofᅼ)NbOd$,BI!$gu 8H!X"K_mYnsYPk%0d‹ [vpȵ6eY[Ys̴~-yAAWg?:]AE~ٜ#:0+UFҬ輸$PvpaIcHE, hhSD5o-wqC9pײ=>-_trcVHHOY^I7W)vR{ݗlpȒ"QTHJJ\8r tA?BD>RS} (Úi;jymvg!g͌Do+s[_!о{nSCOj M؝g/ polr{._+5 "@e՚6n:k9$4r~YƔM梌SZd>ZbG|Y'teԲ,B7dq+J S{%agGS}/%W4NRȁ$NomD[_YYuJ{;b՚6i+iqzPiC5:O!rh-ȹf)];Z { KօSwOmcQi5GkvuϷ8N$|] TWU נ{\_7ԅ@5QFxMtRgD 7VAzHʢ1Вl"r<2Dͅ}-!JaZ#4dN5iRu%fHP *I|0&`imI]+0E,1&wGJVњ6z)1-<;,* ͖ P9hYQsUQHÔ\W}F' F9]O 1 ~֑E 59V"-|LSq2@$iϏ*lcr[[ My$LMOsfJ b$^%wXB%$DtKuΠ\HQF# BER%z$j5ŔHoNJNۃEAY{v'T$AkHj^B }x{}ǹ}{T0IJڝP q|(PU:]t\%*+QqRj5m3Iupҽ9h^^8SR,Yc-L7~woơ5Gl?{BfKx?#])"z\~,ă0 ?M>Sa*hTZp4́dBc 6D[Z^nNl8E2k"O =JX>?a2Ş -a"YCN <Be9h gRmooĔD2U`!CeG^7\I)$boʒ2Zs 2'lEt,RO[%+A_\8ArIV75IT Q7M>op1'e;fh9`o *3]zta/o~9-7]^Ƙ!JЌo2TN^u,x5JKKRmYjڏ?GeBյB^f˭w쳟|ulN pLyso.33& < >,{g7h'g qmNӓmˑskkzl%Zn~?8M7 ?FT 1Рfh]K\B=c1?%i-x[T:e1nL>xpp``q0uθ^KߊG'pE0Ը(u=3; bZM tί008lrvgE O\Lߥ~m9zl?M]lcy_ֹVڻn~^֘ }'ڋA{` ec^Q5 ;=`Antj"oMձ[#q7<6<,x<{-ܨ$#^hJPQD: 6~u\ؤ 6M4#.x􎷙LI:1GZIm&~yDQf|z} vX`]eKUbG&JK\Ib '$ t2ȠM`TB+-rT %:RT{OGb I!:ޝJET@Uϱ3zf>2lA)ɹbB%VYpNT&٠kl/ {ͩYO/s32f]ٴ^ˏ鉿o Lzho$UM_MLWW|Wf'i85.3@EmUAg%jX~?^]\ۓ/F(i"Rɩ~P_a>oBQx49_`YzW N5i}昖䓙w?tʗj$ 9'42iiJd23OJ HHf0Zg\ڤXH{2,ikv˴%^٬Z&d# :;^-qٟ ٟwV )0T)Zn.뿉W.ίO|35՝?N&߿4-mc<<\> O'N, ۞l}3Unb&[fz}]3"獅.Od͌ޝOl# =YG3OVsV$c*tK([[ JN6HݙiSnZ*ݺ!O|Svtk[ӳE%5 fY?n=:^Ok ɖR+_?{wduxg,#@(r"JYN 25Q3o1 0YG;"ƴ呂>`^ 
i4$q?ɲݡ~san>-Hu!|;oMهp6EAꒈŤ2֊ Z7Jb?$}xU'Iuў d0VDb;\m{/k?\0+b6ޱ(U"ӡD׌٥֊c 'N{NqoZ"$Hq"GdȫT[cʮVjcV:IE.N%L/O8XgH,mxVi׽S댙zsnS&Y΄U%ZX2\fj<]R u/zx2[=f{P};j<5mX1BHg! {1E!.’[ѽEad8pvGye˧KہR )͠7LQP{pqӃghKjD r")q'9J1N æPN> N=UU!)6ǖp!+řJ^ *`!j@2>Z4ͬ$Yifgc __MNy*aiM_U>_ I3C8bj]#}&B scXƊc}D!F/r8&?.Y! CiFkW̉nzQ=_Yw:ӽ^nsYquFz:0ymbc+hb"׃ފ R^V)u)7KV51u5En*`5/tL[VwW46UӍm?8m*YBVp.z"pސ ɡJ0:a͵ѹ7}mLPfY]Φ&G 8ub,W7azJFZ탱]{c;4(1 e׈4cɼWih.9T98[wfOyob7,\6j %BxMv5z2FHdv~H_Ҡϳ䉹#ybt[!-;fxeLR˴Cs7NOIH^ZIB&ԼѢߟ_6 ԬZd+!&Sѐ (}8Fñ(FADl7y\` CT:ED'Fow>rZ +%^ 5R'!z1yJ,ΝmdQ#5Z.U,L߅ N|Ʒ$#Dc<XpHQ CāHm]Шcj*:eNhFv]hPѶr yזmaw}_ϲ 0֟"P R-Y UT6`"(`I:S(9dcOp+meêfS4+4WVAzܠ1]2 ~?]jM_UQAѳj(An}뾈3CR5G-qEDl:g#.4ڃE48z'1Ri8PORnk#}ni"5>iѦ\ c*WӠ䃄 67S:TaڈԶt0QDN>gF]#i2zlqOvKb4a}OoH%נvM 6Gt6XY01Jok0SR5㠪5U=13T'k)s6b;yzf'Hz;8E5Np,0CA!S$$uc;v?_݋<# oWG/ >`+DWXC[ADiq|<=ͽ7s@ͫ=Ĺf1M(r*rM4W!5`E J6ov +0É!*mY"N jN3*#Tra`Ce0o VP専*T0#P=7륒0Ofq3^EF ב]h 溫 fۨ6`jj]y2v7OUrS㟪zS v ^1@4%V1wem$zRK? l'`ٞn D)S$6ɦXl*CΩYH<]_ߜÿj ?.ԟ6g??sطs&0HcE鳛7D 8C(NW%od jF$1rC3KIRK^ /e-xL)-/qlzR xIڙy?Q9>-__T<)gL{k-;HTM$(UeɬXK"VR!ы Gqv/@GO2{=pzᝬĔM)Nw4!lb&+ F:  % jOXN6-K`tiybΥpy[|=OtpAj[JZfG``FJ-I˨YR֘k)󿷢R4b,; ;ZSDKpA2IKM82*D= [ӆ4TchˬdMmYd؇'n/{OlHdzl2SqyQ/$0t 6y - hRG;Ljӹԗ*ov<~-Nck1P~j.zvQe9Yo1w99 sտ hA ꐅoLrwmSxw뼹eC, | ,է nEsf. _]e>ߞB .>\8y wzџgGs F%+uhұylK}FW2HSs/KGq ɻh٬eiê/ô*Zw0Ht*X-^˼ nF< )02ѥ^hͦH9W>OֆxAgk*ފ%Pq*iH8bOX Ze5Tސ0@lSӬe%Bhr j┊x/8SHz8v8k .:/)WulR|_ C>++|g=}Xl6OZo'8x:לix(@<9&Kd q`PfX.cL~:3@mY/= ȏY34HXIGtQbvA `u&McK:[;p̰yw8bz/< x=Rܓ*(5g =fӂK.Ƌt1щ9՞oŌ,d?oO&Q鮇 *s99KD"-GĨtOLWȍx$G[0# Ox$$Z0YMB$⩵'aV' 4 sǺщ?Ə~f uɜyt! n {a8B>p:HXIAAxucRN1 gx[O)"8ojJi;qb4Ht'LR2&ѯ%6kj &g".-I]r c#E+dtbRWMh`X'Z 3/[1J:,&9vJ*h'*s~o_$4AJU+-V;LsJuiﴟo`bF•)l4֤ ;70=; KZ.8i,TS N8U$ ,x" 5zƥV8cN%O64-,4 Ppm V o#pe@Zˌ0_Oc+'*@%nkuP*zT^^.umQfn2N0`0XČzOT/$10'CW!0ʦ[QW *Fv/Nh1_6 ~U,{ {p.sػxSIΓ$UdS3ELOW.bn{EwYi+g* b:+v>e}#}}q8=|lqVwy⎵# =ߘ2פdi>7;eh+p{3#R^LgdzV }LZB{/#FG*-Jѩ<*-ZPT ܪ/d9eK!-9>v02(RbҵLn.F,g+T㟺 .1kB{*տNDSR{e{^_|B141>D^zb cV^C{#PD0#7 \Nؠ)wƅ0i優}ڂOo!Dɏ9r"2a|a'$&æ0Zk8IIJ"jppM9`Տ6 e ÚDI:i}*/ft𰘹ݤR~nIePyÌﱒ饽`RU9X ٣bܘ?OQKL؜OѦVulLw9 vݧ37Wm{0?`"/o6Z-1?5֖V1lեl}x,9/󼮥 7IkKfyly2G_JjHԲ `ofq'L0+"T-ՍPG]PjwX@{ذ\` %z Ą[2:DQH*dGHKi }(F1HDKEߑ@ZwceT"Ԯ`Q#u~)jUs="!x%qr^㑫V] ?vɥjvtEQ(BHŔ!,u"ȮPok֯VqR嬁Nk2-W?UZXڦ0#Sq1=T \xUS9ĸMWIu)6-cʎ,{rc1ul:Ͱ V#h#Ttq:;*?ަĤ{ݧkT1w˔9)+ZS D,ϲDW0Sf 90gW0#ސz-:J8wjΠcǰ/)rG~X%\-#Œ@dEtDCwZkI=Y?\Ws3o~i1Koj3Yš>}"]k.B ѡ,. N\}GeBɢڶ]dͬ4#cԎu,uiO$PyVm5LtN_E5h(cb`7{۸ =~1`frݓAfy/m4A~[%&EAlɢꯪr]V%__p( /m˒nVzn(3o>Օ&E*իxի2r/Wl旅u!1%B;ނc'$}6i^@RL=㝎07gpXG25F1*i#O_4+>ⷙRG.tޭ¢$NH .P駩fD.w-Veqa2$qB5Sc~?t:\TyfI-Vx4. :;uJ66Yzrt(NϹ,ɟFthS;fC7O| ±޼G8*x15h6[9/lj}PLEN_EyPVֳl~*_q.lކPS2գoFW^Bg{ ^qPk'O1Yu_w[[02~o3 95ȗNPHi^߱ψR&'nDA/2=//)nBluY}e=GkŇ^Vq;`6F~A.n]9eiqJSM {Ubҙ#_ ¹K¹ %ww{o vyt9Mޭ0nͭ! \˶:vNl+_Ƥ+5${LcK,p\zc> p}dK[2>Kxg٣;,\DADA=t#*H̍Ma"!$MsF,sV`JQ6 ΒL';%ПwѻtR }t}(I>O-^r!&] m/qGY:Z>޿:Xם`z 6CY"??g0y@,%{)ĔxqVw-L5/tcJk_$IĕI=Ėa&Q)=D+ێ] - |#"?lDć  ^= 2zZ"3xGQ\]"T %jMB;8RUD1,s#7Cjn&\JCCubPl+y+A Z\]e3=alOkyvNP/L-y߀ ~1"d:+flBT ߹:Ú>;|WL$HjDd6vuq~\߀Iݻav@i@aۻߊc ov @-[,}୺)OTjKё6ܭ==R,{J/ŘaE k"{00*WU'J6=FRW #.Ъ᭼)HQlOpMٳEA%N-k Hj|PCl(mCW6 Y lۡ.1nzm/Iy\yk0<.W#8ƗU"r [|1T ]Ta7_ 91TNiF*t %^g~7Mr#qG)>R# &TԪ^E^bbaI>>㌵W] 3*T=|*T/nwW6%N_J.O׹oukvxhZ=-ZL&[]SSuqG)ה[n~pk#|Ez~Y QХ}$a! 
B':=(欫 M޺6;r r.d)ٌL^9mtRJSn{Wv=HZ_R}Vt:j}>\neŏ!$F}ABJBzPڨڌDPJ5t{5߂ zld)dLt*.Rv\ifL~'Ur5 YlM]&޴1Bv\O>6vN>G&7nf:L3–,M8\`,d1΁s;!?{c`D$;~vye ?>cIНy>1Olޛ/ h+hC0A~KYf|1JFn RzMAfrFHB^zSXnڱ+Ϩ|1#:hηY̩@;_SW.^2Y7N/uDM%,OLgn!$䕋L1z>ݤ}ng4n; q0󲥘(Ϥ>(ggۋ99_ gw xяTwo6 vptǀpV ;{@d嬂 L(ۇQ8;ޣ8Lf,wzz&#w Xgpݣa6$:j eu {2°phe>Jl7*&[S~r3xbPV[fOR[=5~npJ Rӹ[s?@̞MunWijGXS˫BHޢAk^eK'4yEP7^]Ov/uLN)_Sn[9R^v/xf!|*?M'j3E/h-Wyl[f*g#s~'䳇E@ɋE@{|nr`*f5Z^66;-jDV̇ѴWaIի1smu;b d}!zxk'n*~8 fJ졑@D}T}Ren*Xfd﬌6$6yMזՅXMCS2գomMƝg>x1]E !]t 9 K&U?o<'0xg٬;{32yf67'1_ONC;v: 鴺;gz8_!c$&@OƱyمзHW[=KS3C% ]!z@)7QZɽpш |9Z" D:8W] d:W M':/ eiU~7/hǡ 3Ŗ8'r|/N175ӿg rkwRJGN~kmЊTh;QS2dzJPh%ӆ,hQZ;hyh5hA&Z0 3ViAMNKႣ^e cīDxJI"Qf,<@Kd%F\RAX0D;A afTq& $V2(8"Z8arʣm¹QC Iq@KAxEࢌ$0DT!JEѢRuSkIh0ZG&nd:މ/Uh]Ӝrj⨮5ow]}Ś$'݊+ou&"UMN~1gzSN =dWV^-"Zt9-NC%u˱n 5PAUBYp9?q-Ipo\sX?=7$f++78#-AXWeE"Di5KQ(8!,+ I`/^MM%1˾z|{4K9Ťl88ƓԈchC0!.sL}3XŖ8EՒ@g7ك»lPJA!av^g F%z Vi̱Rm< ϳ-rqGЪB^_//C-Ppd1 2DI4ghl(_ 0ZzEGrdwηjy4 d8L/U db>C.Py )SWC!F'4A|h^]9Y~qETܠk8gZg:K3~12,ұ͇% Fy۞ 8~?%]ss%o?9n'^Ǥً!έ=PQ*.՟9[xh`0{ZaaIEJGHF7;νgw.'I%g8ݫe[{b"HcL:}{Z]Exctޗ|_{ǫ2vAN~c[AܔY*D%w*5LE*]`\D%V\EC DKR4Pg\2ө˲ݵ4Oҗޭšjv ^Nڱ~en_C [Ti)>3ꌁ6o2M4_Cum{j@\gy<тbǻ՝yկpCq^9m;Uc%Mo;Z˥  E, QXey!BvUK ¯u\@kLf 9(h#*"?̊99J)Toz=+&ֱ)$dVbGMg'5KsZ?Bݧ*G-Z(dvàlč<'$/=&eϑ#SYu>QSc8A)i(J#V Q ptRC Sh*nePtJ N|`\wyO$c%hcLq1+nRUsͧVP!nߗޱDIY^{W{ nOiQ 9~YZ_;y5tQ.>RFQE@jf|pPM`(zAlXbtdFԒWbti "vwq3#?j+"C! h»[mX؊`8 o~܆XMh7~V, /Õũ$R ڬ'X18RC^Ss,0@Y?1@XlBd1)7q`}+.->of. YC"\ݶOGә5X" laCʛ[ Bmނ/(ٷgker4kJ(3*!{`_1F՚ڙ}\̴eJFyɜ3\S`l~* H*dIb< A,/YJƛ)pPEs'4ׁaE%15_e1e&DJT\¥_a^ҘP~< `m"^ۤ>~j6VuXXJ zߠ\C,'xˁ̥=I>n+hٟrC}gK]ۂZ%tOJ?LtNy47ˮۊ.j'jǓ <;[p`ٺIl Fȶa,O lC@Tl/V~|IWe`Ph) C. N#ZD_㞤Wܮ w{ic+&HreK]\NEA [D0:BsmKnWsc]ƉWsbuj;vPV=]>㞩{V7vt/?l</>_UUҳ! `S3_gAD 6H尨۶ē^zp 0ExfW3h0f.ƆWVJKn4m3֢;WO0+8!Sٿ25)%Z9BǮT{9cn<0f9Fj{y0@/OCUP@r)F$ }ig7u°ZZt_>Cz1 i%1-8 酝_J(0WW7Q xץqkio@fTw,l=9`H#w<twO|GAW&Tq\pЗ˴$ w# Qs_]rJu$p4 aL5t؂t&sϝ/0㪘M>R( RDM eF"0Y,~+l+cRMǪOĶЊ9Qt6Gkh>5+mZf Cu9.)!-e $s&}]0?xfh̀L!8#ml' 8NGcYG1KȃP;>@]'A> @i wK\{nhhi"nE2-"~6zc6ܐ h'!v|x>JSʧP(kYTv&5U-+8FGFjvx7x B;痾5W;5-45D.k_PNB&Q0"kدӧ/[rܕJ%FbK:`rUƄgfYL&GY8% O mY:K_g^5eAK©@,%2Hd JS¤%T:02%] d Wd9 (ՠD'wf:HmpX͆qw՜돱6돃qhCH QS_T %$PВ0-X-<F&P.r_6O~ymڀJs:bb%%3z=Q4\ȴrAVExB92_x 5 AMD*UF`8 VH-#Q3QFUa$vUtm:B(aIǛYAӃp3Kl+tǻ_q*kcsjW>{AWVܽ4|]!ooϑ!~Jg@ISι^4wrs97 >\F2D*FCw\~O9IgAoaZ:T* B ՝"?,(2#)4NbU.hRֈD ),PzBUzQ`tc_xTRQDT²4C04OCD^4BhWtA,|%aw|CMFڂICt)bc49MJ̠Y8#%2hT%UT6Cz.]|1TD|)Zi3;6*؂fáuik׬p SQ+ 9=6S3FQ* 3ej DH%IfJ"Щ$&dWTQRax2“<'hUYqqP98$q`uW}^-U>PYkNLԨҏBqX'bt#0^ZY:;t]xSOD7%K!#cbikmHE9X} 0Af'Xd3d_20Hvӣ@|$y`&eHnRTA2Ů.2V$1IIl)O3#L宿v8[|p1)^2oG0DGo;}ף釽kl1_b9w^S3+{~Q8Ow  o,8oȡ (2F+RdИè$ mDzlDN(a  B̈́%V s*aqF\< jiU{X޺acmc06 'eR[aR{ qfNlbI0=Ji,7i+ DDc[UacȀxIa$4I2E@>F MB`E4|v,۝kN@u .^P(*Dc {5^| Сn8֝3]f-9gjy Ud顮l0ݺFt1Jv9C9̔F9RVJ gk0uabU+AC tAb{X'{Bܜ;<@Gdah_lx3cVlWvQf%g ¡D8znx75Lo_#<=14u#d< *q9 S*[B#κ1t\nig4,n.03YXGLc5d0,w~a>} _&}>GbrqL: &/2m*]ܺߔecF߿x\Nfp?Lj$*x#y: ¶7@|Y g)=UC*!L*I^^+,.Zfv9bL]% WۭuC3mZ[K0D_NᶨD1/SY̕@xQ|Zj{R.nm&Hcz(3d#z8Nۖ-zo7ha?8CPŃR](~=?Lm\P. ݴI"\9}pz0ء=١FD V3AYW'E=']IixK /Xo2FDF'kj$(kjrSJx_]]>?I؟۟2UrB1S,k!rnLrAqbEDXEB3äՆQs+A9U#Ar.=p~jw0/>?;I\?8ru;ZB<$ϯn$$Y}+f=ɓ+F.NZ"11&S)k5.6 !ĘܨDf"Yi4EJ2a]µM( DKmRbdLʔj+ ֤i`J^8B?9Ңn+! (b|+JݢbڗB!k Ij޺1/ jxY0%Wzv'ŕVXl#N߹7$_T^__C/̦NRPh E1ұ KDr4<9y,u~^oUi?w#kjˠ2qN`Qewrwl^f hj8膝k-vSJ=DQ\*P*~]Y>߬?( [8^NO=CDZk[އ}wÅ _fɃ[ +)Gr wuDȺnþJK2"+mQ T*hd{g1[C#~Jsf[q&5" }RzH2k+XH5bbIUdZ*m(4zM^vHہ# [{cVz'iO9wÄuCLsXo r9D(ϛ+c"#U@.Ě>Y?\,Wq&$;pI*^*{ ~"-)-߿Jy0Ơ!7Sq!R$ c Y(;^:hPPv >Д $> ׃AbZ?֥r[r#G$/Ydt7o4Tv |5^7;. 
TGŰsC˧ׁu~)o:}CK2br(C5GBzؗ%(n\ۘ@-¨-8%tHJߺ %oB|l{wYGs`L#Isl=nLH?cA#i{㟁y]mo"'޴^jK8muS.ZBA s[[}WϾg{ph G.МN N|>]b\MrDE{ o.F2; pydlpdE`C 9_ Bλ P]4 .C.Гd >RDZqyp藃;z=w,F}鸙bTr`ӵwOt '` ![})6 4{oc(^)KTBX%`'ꂡ C)cݡ@{QdGWLUdt%${xOf]Y'a> p_OvJFV[;n~["w?X8'/=g!T9!p eHx{8 ,bů]팊BH Cz$RYX!A\a䄙0R\d6YҀm%5˨ sS*Qb1eX!53/VE9ז]Z%%xjbO=V\:MC7s_4҇7\mkP% {Q{%ìC4(~"3rѓle.apbƎ^9_P\;l85m>^k78.>c{ʽ{\ &2T{4E,Jka) P9tW:&8蹔RҽeʲL$ E4D8+NZCϧ6[K?{f]yx&O`_(Λ>W7c5<7@#Ri|F7qF>io9x9|"$S#O#NjUw] ӝkpTkL0=Zj1.=:F \cUz bH7.A2ujf:@aB11hs5jo-<SŐo\DdJ35e*坽^=a%pOטjD}0e~IՉb>|*֢Z^$"c3>N3MrPI}-zMϫf]?.VE[/4q%rbFJi. xACĄmPBxmd}^w-~Fκq:If7S\V/c] ݺ [wt[@Z=' ˒ą[3Te\#NE(ϤHHM"O3PRJD2wmz$ydX` I>"}-٦[A<WUE.q='5G诣z]V?҉&5Z{T[s/6wœ,X#uYAL1v#+N`ޝ{\^_Jx$6nY,Ύ2ׇs{0.0gӂcXNjϫ-sF(_XSD`OB2by@l.w5DZiP8Œ cJAO4 +&yذq ؽTǦA?l)6@Qh痠o|^'Xݒ80}̂M3x&xSYYI v|< ҸzEW?x$ka#DA'^+"G\4C|\o=  [C7|^NobMDJuעQﻓ`?IOYmdnN+Mi$GT]ݞ&F' s ZZ)U__, h(Dž%1Ɵ~_ᑛۉ_Lŗ8$Ĥ<̫چudSN-]jZm>Ӟ$)b~]{! ]_^MA9k3e%Ry%],*3% O~~meq!N092b$n˫*|A*eggT0ˏ0)-["9+Q~^,<r{x{s}hW:Px\`4I?_UR!g?q)n,Jp:FT7YocJ o(3Z5C.* #E"ftLkX );FE⍝JaSyd!q@h݉zj^kJ5=y )w78ZR$(%?tiolG2^UhEyp]5V .Du01o^Z #յRxŻgŕ]훳&sX#0ڒBK(C؆6eYq{Nlw猑#$2zl| ~JeikkĶ>G& >UZJ]˧W% W% $ckM@%vT6;Ԗ-!R;i{~qd/uv1C(0yA=^hb |0T䠣VmNu1/?w2͙_!I|9FE$.eGm#@4]]2?Σ21֣r*+IUpgsaϴ BN:m`V zEx .nE\BN^F"c:h K1)F]֢^_D5Ȭޅr]$ָò~zwBH.!KuJNе5ݖ#I.,dYdEYVI~*uiphwk}u&i!DOWjCTbU>13%ŰaVfLa#o/i kt,1 [x1 !l7b@`&[?f-JxS6s"ǖkA{6ג=`UB"z8}SkSLT{@H$d1Yt7`,*,xMjFbn畿ufb'3.43px"u3F3e5rV{FUp3J!y> P󠄟#u4ySG҉&-}GοNg~~'Yl"iL^DWaVFܿ)ӨORgx9xocԐ+9Ln8#d b˜K$m&{6fv6^3]ebH$"7p+)CZc*E x␮eQ"z9f!@VuINNm-f)߯Qh 3&&{f@S-;{$.3{ y"Z$Sނnj7cnNm")eij.$䕋hLi!fՆ{6Ԇ5;hW+Rp+&t^g+gE/o4/ J)*;·.&HYЭ)zNlT9-CW۸p <$ HΌ(w%H[D$bK)|57Դ!vw۞ކN&TI/ Hr3V95SDHEeDf6g,4riorNҳ~\6Jc)T?7yYocL`'qܛxz{5:״zpcZ!W=\jhvԇ/)xyZ|,^KlMٲ<]gx]5 a6 ߠd4]%?ɎMrD1c N/܄"o7"MLU-[C?ȣ|?.뿛NӘf:96S/LL?&xϤۋJ'sNEZ"l ,(F!ܠJRQHW<[%/ mU 4Xjb%JA+B !Zh8R.@YBc͂ c!WT)L%!xB 1Aґ p&b8f7(itl.kbUC+6VOO f`ݷiMб>&C?4a/WSX!2KKoM'a~}X|~x YgQ'ιY*>^\ڢr×'_ǔR =]7&N`lTR@Ԧ; L4M]>QZiBRց8[Ikq"xXIIZT" j ^ysMU?r S89a2`NKrfd`Q|7peXm1JJS YoQt\p0=B]IPh=eX3p+K :4  %SetU)-6&xWCP((^mVaXk2K$cJ4|Ј .R*aEL!qK{0h4cnoYBĝF2q<3 <>I;0הl6hKۑ-ȔVW1^~[ ?ܬDMDNKZ ǂήfnq~; ɫۋ3og_۞jgjE&^DMC}Jki4Jӓ;Z!>|\T vx*-E /+B~7er/&7=[,n1: X#aL:D [Pd;/{FYދ 0O3bws^ `xױ=Lp$-%EE0G:dWd,~uhJB@DˉA7Y=څE0QSn!`4qיm|*V>BIN+A0;_Y ʘzi͒6%^GI6dvLF;^?^Zw.&6gkiZbM嘱^mDDGNb{}6n&P[V(a/k:5+W!?BU>ޚ~͒] ̐ LGRkZq3H3k81?}@-e(vHKGDqCU?n!&BNC!e$Lºzz͓7Oé$+1 h,{"Yjk|Йtm踴UOuBFGS( @+< S^3A8&{8)BcCV{J9 t4xțIxܓ,ęA:sNg2Ssc@9BF?(j%a xMUB@HiUƁ^R3D?c\jHKA gnmhiH׆DZ#ь10Rñyfl#l alSD(iSw4mp?]B̈(LpZVBJlym{P\BO͖nptŞ?fa.e4ѽ¸ +^/Fa%e&rGTLhfR˗Ѿ} vb?T7f:l2 ^b*ڂ=?GL/h۔oT] ʼnA 'Ůpj4L $ _E":)2&6LHdlD{;LRn,|:c37=Ŗ"R ]8$Svwo*=<+32Ϋcj O\P"TYϦb+=VvcyZؙEc at vW ?z0[P1QkC[NcgA/gwՒ3WPy&}6]̞깫BʾS ksVR :zSDBWŎ/F w[bQ]~^¿d-pPZz (ti/]K}%/6)}X%=/~OJvi[^JKяUЅm@dqwh qA`g3EFÜ}RdRS0O$na,AJXm_y{&!L튲_o{uZo>q}}[0)xb ivᇔVPz 4(T#16gy f6-6|&yb[|q˚nms]ηMak;!9D)es1i+PJKW0k51H0!%6TU#O3bqX/ʁ@Id9)D9MB(k.rSr;ٺ.<IZjn9b53)銾=qq۶hzanD~4NUkʄ17Ӱn. 
Feb 02 21:19:38 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 02 21:19:38 crc restorecon[4746]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as
customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 21:19:38 crc restorecon[4746]: 
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 21:19:38 crc restorecon[4746]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 21:19:38 crc restorecon[4746]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c108,c511 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: 
/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 21:19:38 crc restorecon[4746]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 21:19:38 crc restorecon[4746]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c16 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 21:19:38 crc restorecon[4746]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 21:19:38 crc restorecon[4746]: 
/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:38 crc restorecon[4746]: 
/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 21:19:38 crc 
restorecon[4746]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 21:19:38 crc restorecon[4746]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 21:19:38 crc restorecon[4746]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 21:19:38 crc 
restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc 
restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc 
restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:38 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 21:19:39 
crc restorecon[4746]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 
21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 21:19:39 crc restorecon[4746]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc 
restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 21:19:39 crc restorecon[4746]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 21:19:39 crc restorecon[4746]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 21:19:39 crc restorecon[4746]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 02 21:19:40 crc kubenswrapper[4789]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 02 21:19:40 crc kubenswrapper[4789]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 02 21:19:40 crc kubenswrapper[4789]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 02 21:19:40 crc kubenswrapper[4789]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 02 21:19:40 crc kubenswrapper[4789]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 02 21:19:40 crc kubenswrapper[4789]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.142522 4789 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.149801 4789 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.149833 4789 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.149842 4789 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.149851 4789 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.149861 4789 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.149871 4789 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.149880 4789 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.149889 4789 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.149897 4789 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.149906 4789 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.149916 4789 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.149924 4789 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.149933 4789 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.149941 4789 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.149950 4789 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.149958 4789 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.149966 4789 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.149975 4789 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.149983 4789 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.149990 4789 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.149998 4789 feature_gate.go:330] unrecognized 
feature gate: VolumeGroupSnapshot Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150006 4789 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150013 4789 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150032 4789 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150040 4789 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150048 4789 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150056 4789 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150063 4789 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150071 4789 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150079 4789 feature_gate.go:330] unrecognized feature gate: Example Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150087 4789 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150095 4789 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150103 4789 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150113 4789 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150230 4789 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150243 4789 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150253 4789 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150262 4789 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150272 4789 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150280 4789 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150289 4789 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150297 4789 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150304 4789 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150312 4789 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150319 4789 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150327 4789 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150335 4789 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150343 4789 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150352 4789 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150359 4789 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150371 4789 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150380 4789 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150390 4789 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150397 4789 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150411 4789 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150421 4789 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150430 4789 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150439 4789 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150449 4789 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150457 4789 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150465 4789 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150473 4789 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150481 4789 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150489 4789 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150496 4789 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150504 4789 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150511 4789 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150520 4789 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150527 4789 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150535 4789 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.150542 4789 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.150780 4789 flags.go:64] FLAG: --address="0.0.0.0"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.150800 4789 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.150821 4789 flags.go:64] FLAG: --anonymous-auth="true"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.150833 4789 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.150846 4789 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.150855 4789 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.150867 4789 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.150878 4789 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.150888 4789 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.150897 4789 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.150907 4789 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.150917 4789 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.150926 4789 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.150935 4789 flags.go:64] FLAG: --cgroup-root=""
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.150944 4789 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.150953 4789 flags.go:64] FLAG: --client-ca-file=""
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.150961 4789 flags.go:64] FLAG: --cloud-config=""
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.150971 4789 flags.go:64] FLAG: --cloud-provider=""
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.150979 4789 flags.go:64] FLAG: --cluster-dns="[]"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.150992 4789 flags.go:64] FLAG: --cluster-domain=""
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151001 4789 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151010 4789 flags.go:64] FLAG: --config-dir=""
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151019 4789 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151029 4789 flags.go:64] FLAG: --container-log-max-files="5"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151045 4789 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151054 4789 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151063 4789 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151073 4789 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151081 4789 flags.go:64] FLAG: --contention-profiling="false"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151090 4789 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151100 4789 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151109 4789 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151118 4789 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151129 4789 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151139 4789 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151148 4789 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151157 4789 flags.go:64] FLAG: --enable-load-reader="false"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151166 4789 flags.go:64] FLAG: --enable-server="true"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151176 4789 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151187 4789 flags.go:64] FLAG: --event-burst="100"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151197 4789 flags.go:64] FLAG: --event-qps="50"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151206 4789 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151215 4789 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151224 4789 flags.go:64] FLAG: --eviction-hard=""
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151252 4789 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151261 4789 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151271 4789 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151280 4789 flags.go:64] FLAG: --eviction-soft=""
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151289 4789 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151317 4789 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151326 4789 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151335 4789 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151344 4789 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151353 4789 flags.go:64] FLAG: --fail-swap-on="true"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151362 4789 flags.go:64] FLAG: --feature-gates=""
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151373 4789 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151382 4789 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151391 4789 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151400 4789 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151409 4789 flags.go:64] FLAG: --healthz-port="10248"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151419 4789 flags.go:64] FLAG: --help="false"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151427 4789 flags.go:64] FLAG: --hostname-override=""
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151436 4789 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151445 4789 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151454 4789 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151463 4789 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151481 4789 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151490 4789 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151499 4789 flags.go:64] FLAG: --image-service-endpoint=""
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151508 4789 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151517 4789 flags.go:64] FLAG: --kube-api-burst="100"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151526 4789 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151536 4789 flags.go:64] FLAG: --kube-api-qps="50"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151547 4789 flags.go:64] FLAG: --kube-reserved=""
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151557 4789 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151565 4789 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151574 4789 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151612 4789 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151621 4789 flags.go:64] FLAG: --lock-file=""
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151630 4789 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151639 4789 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151651 4789 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151664 4789 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151675 4789 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151684 4789 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151693 4789 flags.go:64] FLAG: --logging-format="text"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151702 4789 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151711 4789 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151720 4789 flags.go:64] FLAG: --manifest-url=""
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151729 4789 flags.go:64] FLAG: --manifest-url-header=""
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151741 4789 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151751 4789 flags.go:64] FLAG: --max-open-files="1000000"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151761 4789 flags.go:64] FLAG: --max-pods="110"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151770 4789 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151779 4789 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151789 4789 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151797 4789 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151806 4789 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151815 4789 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151824 4789 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151842 4789 flags.go:64] FLAG: --node-status-max-images="50"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151851 4789 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151860 4789 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151869 4789 flags.go:64] FLAG: --pod-cidr=""
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151877 4789 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151892 4789 flags.go:64] FLAG: --pod-manifest-path=""
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151901 4789 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151910 4789 flags.go:64] FLAG: --pods-per-core="0"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151919 4789 flags.go:64] FLAG: --port="10250"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151936 4789 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151946 4789 flags.go:64] FLAG: --provider-id=""
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151955 4789 flags.go:64] FLAG: --qos-reserved=""
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151964 4789 flags.go:64] FLAG: --read-only-port="10255"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151973 4789 flags.go:64] FLAG: --register-node="true"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151982 4789 flags.go:64] FLAG: --register-schedulable="true"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.151991 4789 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.152007 4789 flags.go:64] FLAG: --registry-burst="10"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.152015 4789 flags.go:64] FLAG: --registry-qps="5"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.152024 4789 flags.go:64] FLAG: --reserved-cpus=""
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.152032 4789 flags.go:64] FLAG: --reserved-memory=""
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.152044 4789 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.152052 4789 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.152062 4789 flags.go:64] FLAG: --rotate-certificates="false"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.152070 4789 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.152079 4789 flags.go:64] FLAG: --runonce="false"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.152087 4789 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.152097 4789 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.152106 4789 flags.go:64] FLAG: --seccomp-default="false"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.152115 4789 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.152123 4789 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.152132 4789 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.152142 4789 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.152151 4789 flags.go:64] FLAG: --storage-driver-password="root"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.152160 4789 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.152169 4789 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.152177 4789 flags.go:64] FLAG: --storage-driver-user="root"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.152187 4789 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.152197 4789 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.152206 4789 flags.go:64] FLAG: --system-cgroups=""
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.152215 4789 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.152228 4789 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.152238 4789 flags.go:64] FLAG: --tls-cert-file=""
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.152247 4789 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.152270 4789 flags.go:64] FLAG: --tls-min-version=""
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.152279 4789 flags.go:64] FLAG: --tls-private-key-file=""
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.152300 4789 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.152309 4789 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.152319 4789 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.152329 4789 flags.go:64] FLAG: --v="2"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.152340 4789 flags.go:64] FLAG: --version="false"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.152351 4789 flags.go:64] FLAG: --vmodule=""
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.152362 4789 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.152372 4789 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152616 4789 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152627 4789 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152637 4789 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152646 4789 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152655 4789 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152663 4789 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
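[Editor's note: the flags.go:64 "FLAG:" block above is the kubelet echoing every command-line flag with its effective value (default or overridden) at startup. A minimal Go sketch of that dump pattern, assuming spf13/pflag and two illustrative flags rather than kubelet's real flag set:]

```go
// Sketch: print "FLAG: --name=value" for every registered flag,
// mirroring the flags.go:64 lines above. spf13/pflag is an
// assumption about shape, not a claim about kubelet internals.
package main

import (
	"fmt"
	"os"

	"github.com/spf13/pflag"
)

func main() {
	fs := pflag.NewFlagSet("kubelet-sketch", pflag.ContinueOnError)
	fs.String("config", "/etc/kubernetes/kubelet.conf", "config file")
	fs.Int32("max-pods", 110, "maximum pods per node")
	_ = fs.Parse(os.Args[1:])

	// VisitAll walks every flag, set or not, which is why the log
	// shows defaults alongside explicitly passed values.
	fs.VisitAll(func(f *pflag.Flag) {
		fmt.Printf("FLAG: --%s=%q\n", f.Name, f.Value.String())
	})
}
```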
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152671 4789 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152679 4789 feature_gate.go:330] unrecognized feature gate: Example
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152687 4789 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152695 4789 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152703 4789 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152710 4789 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152718 4789 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152726 4789 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152733 4789 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152741 4789 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152749 4789 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152756 4789 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152766 4789 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152776 4789 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152785 4789 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152793 4789 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152801 4789 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152809 4789 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152817 4789 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152825 4789 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152833 4789 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152842 4789 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152851 4789 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152859 4789 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152867 4789 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152875 4789 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152883 4789 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152891 4789 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152899 4789 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152907 4789 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152915 4789 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152922 4789 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152930 4789 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152938 4789 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152948 4789 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152959 4789 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152968 4789 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152977 4789 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152986 4789 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.152993 4789 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.153002 4789 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.153009 4789 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.153017 4789 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.153024 4789 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.153032 4789 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.153040 4789 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.153048 4789 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.153055 4789 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.153064 4789 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.153072 4789 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.153081 4789 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.153088 4789 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.153099 4789 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.153108 4789 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.153116 4789 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.153124 4789 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.153134 4789 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
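[Editor's note: the feature_gate.go:330 warnings repeat because an OpenShift-wide gate list is handed to a registry that only knows upstream Kubernetes gates; unknown names are logged and skipped rather than treated as fatal. A minimal sketch of that merge-with-warning behavior, using a plain map in place of the real feature-gate machinery:]

```go
// Sketch of the semantics behind "unrecognized feature gate" lines.
// `known` stands in for the gates this binary was compiled with;
// the names below are taken from the log, not a complete set.
package main

import "log"

var known = map[string]bool{
	"CloudDualStackNodeIPs": true,
	"KMSv1":                 false,
}

func setFromMap(desired map[string]bool) {
	for name, enabled := range desired {
		if _, ok := known[name]; !ok {
			log.Printf("W] unrecognized feature gate: %s", name)
			continue // unknown gates are skipped, not fatal
		}
		known[name] = enabled
	}
}

func main() {
	setFromMap(map[string]bool{
		"CloudDualStackNodeIPs": true,
		"GatewayAPI":            true, // OpenShift-only name -> warning
	})
}
```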
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.153144 4789 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.153154 4789 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.153165 4789 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.153175 4789 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.153184 4789 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.153194 4789 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.153202 4789 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.153210 4789 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.153223 4789 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.165971 4789 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.166330 4789 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166474 4789 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166496 4789 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166506 4789 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166517 4789 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166526 4789 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166535 4789 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166543 4789 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166550 4789 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166558 4789 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166567 4789 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166574 4789 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166604 4789 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166616 4789 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166629 4789 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166640 4789 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166650 4789 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166658 4789 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166668 4789 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166676 4789 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166684 4789 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166693 4789 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166703 4789 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166713 4789 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166721 4789 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166729 4789 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166737 4789 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166746 4789 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166756 4789 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166764 4789 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166774 4789 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166782 4789 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166790 4789 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166798 4789 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166806 4789 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166816 4789 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166825 4789 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166833 4789 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166841 4789 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166849 4789 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166857 4789 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166865 4789 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166872 4789 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166880 4789 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166888 4789 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166895 4789 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166903 4789 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166911 4789 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166922 4789 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166932 4789 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166940 4789 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166950 4789 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166958 4789 feature_gate.go:330] unrecognized feature gate: Example
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166967 4789 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166975 4789 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166982 4789 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.166990 4789 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167000 4789 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167008 4789 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167015 4789 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167023 4789 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167031 4789 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167040 4789 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167048 4789 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167055 4789 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167063 4789 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167071 4789 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167078 4789 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167086 4789 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167096 4789 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167105 4789 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167115 4789 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.167129 4789 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167371 4789 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167384 4789 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167393 4789 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167401 4789 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167409 4789 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167417 4789 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167424 4789 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167434 4789 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167442 4789 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167451 4789 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167460 4789 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167467 4789 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
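[Editor's note: the I-level feature_gate.go:386 "feature gates:" line is the merged result, printed identically after each parsing pass above. A small hedged helper for pulling that map out of a log line during triage; the only format assumption is space-separated Name:bool pairs inside map[...]:]

```go
// Sketch: recover the effective gate map from a
// "feature gates: {map[Name:bool ...]}" log line for ad-hoc analysis.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func parseGates(line string) map[string]bool {
	out := map[string]bool{}
	i := strings.Index(line, "map[")
	if i < 0 {
		return out
	}
	body := strings.TrimSuffix(line[i+len("map["):], "]}")
	for _, kv := range strings.Fields(body) {
		parts := strings.SplitN(kv, ":", 2)
		if len(parts) != 2 {
			continue
		}
		if b, err := strconv.ParseBool(parts[1]); err == nil {
			out[parts[0]] = b
		}
	}
	return out
}

func main() {
	m := parseGates(`feature gates: {map[KMSv1:true NodeSwap:false]}`)
	fmt.Println(m["KMSv1"], m["NodeSwap"]) // true false
}
```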
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167475 4789 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167483 4789 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167491 4789 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167500 4789 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167508 4789 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167517 4789 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167525 4789 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167533 4789 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167541 4789 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167549 4789 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167557 4789 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167567 4789 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167601 4789 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167610 4789 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167619 4789 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167627 4789 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167635 4789 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167645 4789 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167653 4789 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167661 4789 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167668 4789 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167676 4789 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167685 4789 feature_gate.go:330] unrecognized feature gate: Example
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167693 4789 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167701 4789 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167709 4789 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167716 4789 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167724 4789 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167732 4789 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167740 4789 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167748 4789 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167755 4789 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167763 4789 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167771 4789 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167780 4789 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167791 4789 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167800 4789 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167809 4789 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167817 4789 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167826 4789 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167834 4789 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167843 4789 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167850 4789 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167861 4789 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167870 4789 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167879 4789 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167887 4789 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167896 4789 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167905 4789 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167914 4789 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167922 4789 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167931 4789 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167941 4789 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167950 4789 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167959 4789 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167970 4789 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167978 4789 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167986 4789 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.167995 4789 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.168009 4789 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.169150 4789 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.174644 4789 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.174766 4789 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
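[Editor's note: "Client rotation is on" followed by "Current kubeconfig file contents are still valid, no bootstrap necessary" means the kubelet inspected the existing client certificate before deciding whether to TLS-bootstrap. A simplified sketch of that validity check; the path is from the log, and the logic is reduced to the NotBefore/NotAfter window:]

```go
// Sketch: decide whether TLS bootstrap is needed by checking the
// on-disk client certificate's validity window. Simplified from
// the real bootstrap logic, which also verifies the kubeconfig.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func needsBootstrap(path string) bool {
	data, err := os.ReadFile(path)
	if err != nil {
		return true // no cert on disk: must bootstrap
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return true
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return true
	}
	now := time.Now()
	return now.Before(cert.NotBefore) || now.After(cert.NotAfter)
}

func main() {
	if needsBootstrap("/var/lib/kubelet/pki/kubelet-client-current.pem") {
		fmt.Println("bootstrapping with /etc/kubernetes/kubeconfig")
	} else {
		fmt.Println("current kubeconfig still valid, no bootstrap necessary")
	}
}
```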
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.176515 4789 server.go:997] "Starting client certificate rotation"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.176565 4789 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.176800 4789 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-09 12:34:14.746318906 +0000 UTC
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.176922 4789 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.204984 4789 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.208523 4789 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 02 21:19:40 crc kubenswrapper[4789]: E0202 21:19:40.209393 4789 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.189:6443: connect: connection refused" logger="UnhandledError"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.226954 4789 log.go:25] "Validated CRI v1 runtime API"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.268163 4789 log.go:25] "Validated CRI v1 image API"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.270524 4789 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.278091 4789 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-02-21-14-57-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.278137 4789 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
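[Editor's note: the rotation deadline (2026-01-09) sits well before expiration (2026-02-24) because the certificate manager schedules rotation at a jittered fraction of the certificate's lifetime; the commonly cited window is 70-90% of the validity period, which these timestamps are consistent with. A sketch under that assumption, not a transcription of the actual implementation:]

```go
// Sketch: pick a rotation deadline at a uniformly random point
// between 70% and 90% of the certificate's validity window
// (assumed jitter policy; dates below are from the log context).
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	frac := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(float64(total) * frac))
}

func main() {
	nb := time.Date(2025, 2, 24, 5, 52, 8, 0, time.UTC)
	na := time.Date(2026, 2, 24, 5, 52, 8, 0, time.UTC)
	fmt.Println("rotate at:", rotationDeadline(nb, na))
}
```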
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.304818 4789 manager.go:217] Machine: {Timestamp:2026-02-02 21:19:40.302533113 +0000 UTC m=+0.597558242 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:53ecbfdd-0b43-4d74-98ca-c7bcbc951d86 BootID:007d0037-9447-42ea-b3a4-6e1f0d669307 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:e5:3f:37 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:e5:3f:37 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:10:14:c6 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:e7:08:a2 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:e3:76:35 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:4c:7c:f8 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:58:1a:6a Speed:-1 Mtu:1496} {Name:eth10 MacAddress:86:00:fe:56:ea:d1 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:b2:fd:04:77:62:25 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.305048 4789 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.305143 4789 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.305694 4789 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.306165 4789 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.306248 4789 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.306640 4789 topology_manager.go:138] "Creating topology manager with none policy"
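[Editor's note: the HardEvictionThresholds embedded in the nodeConfig above are the kubelet's hard-eviction signals, restated here as the conventional signal-to-threshold map. Values are copied from the log line; the plain Go map is for illustration only:]

```go
// The HardEvictionThresholds from the nodeConfig above, as a
// signal -> threshold map. Quantity vs Percentage in the log maps
// to "100Mi" vs "10%"-style strings here.
package main

import "fmt"

func main() {
	evictionHard := map[string]string{
		"memory.available":   "100Mi", // Quantity 100Mi
		"nodefs.available":   "10%",   // Percentage 0.1
		"nodefs.inodesFree":  "5%",    // Percentage 0.05
		"imagefs.available":  "15%",   // Percentage 0.15
		"imagefs.inodesFree": "5%",    // Percentage 0.05
	}
	for sig, thr := range evictionHard {
		fmt.Printf("evict pods when %s < %s\n", sig, thr)
	}
}
```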
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.306659 4789 container_manager_linux.go:303] "Creating device plugin manager"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.307235 4789 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.307303 4789 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.308440 4789 state_mem.go:36] "Initialized new in-memory state store"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.308575 4789 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.314848 4789 kubelet.go:418] "Attempting to sync node with API server"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.314882 4789 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.314952 4789 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.314973 4789 kubelet.go:324] "Adding apiserver pod source"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.314991 4789 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.325379 4789 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.325547 4789 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.189:6443: connect: connection refused
Feb 02 21:19:40 crc kubenswrapper[4789]: E0202 21:19:40.325696 4789 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.189:6443: connect: connection refused" logger="UnhandledError"
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.325560 4789 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.189:6443: connect: connection refused
Feb 02 21:19:40 crc kubenswrapper[4789]: E0202 21:19:40.325770 4789 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.189:6443: connect: connection refused" logger="UnhandledError"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.326447 4789 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
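[Editor's note: the reflector failures above all reduce to the same network fact: nothing is accepting connections on api-int.crc.testing:6443 yet, since the kubelet starts before the static-pod apiserver it will itself launch. A trivial probe reproducing the dialer's check; endpoint and timeout choice are from the log and an assumption respectively:]

```go
// Sketch: reproduce the TCP-level check behind
// "dial tcp ... connect: connection refused" in the reflector errors.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "api-int.crc.testing:6443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable yet:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver endpoint is accepting connections")
}
```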
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.328529 4789 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.330507 4789 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.330556 4789 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.330572 4789 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.330610 4789 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.330652 4789 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.330665 4789 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.330702 4789 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.330726 4789 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.330743 4789 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.330756 4789 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.330787 4789 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.330801 4789 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.330835 4789 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.331625 4789 server.go:1280] "Started kubelet"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.331881 4789 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.338898 4789 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.340060 4789 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.342387 4789 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.189:6443: connect: connection refused
Feb 02 21:19:40 crc systemd[1]: Started Kubernetes Kubelet.
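Once "Started kubelet" is logged, the server is listening on 0.0.0.0:10250 and serving the podresources gRPC API on the unix socket named in the server.go:236 entry, rate-limited to qps=100 with 10 burst tokens. A minimal client sketch for that socket, assuming this kubelet serves the standard k8s.io/kubelet podresources v1 API and that the program runs on the node itself:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	podresourcesv1 "k8s.io/kubelet/pkg/apis/podresources/v1"
)

func main() {
	// Socket path taken from the log line above; the node-local socket uses no TLS.
	conn, err := grpc.Dial("unix:///var/lib/kubelet/pod-resources/kubelet.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	client := podresourcesv1.NewPodResourcesListerClient(conn)
	resp, err := client.List(ctx, &podresourcesv1.ListPodResourcesRequest{})
	if err != nil {
		panic(err)
	}
	for _, pod := range resp.GetPodResources() {
		fmt.Printf("%s/%s: %d containers\n", pod.GetNamespace(), pod.GetName(), len(pod.GetContainers()))
	}
}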
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.343930 4789 server.go:460] "Adding debug handlers to kubelet server"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.346136 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.346199 4789 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 02 21:19:40 crc kubenswrapper[4789]: E0202 21:19:40.346935 4789 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.346539 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 12:54:01.48561867 +0000 UTC
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.347359 4789 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.347472 4789 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.347515 4789 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.348010 4789 factory.go:55] Registering systemd factory
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.348038 4789 factory.go:221] Registration of the systemd container factory successfully
Feb 02 21:19:40 crc kubenswrapper[4789]: E0202 21:19:40.354661 4789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.189:6443: connect: connection refused" interval="200ms"
Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.354737 4789 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.189:6443: connect: connection refused
Feb 02 21:19:40 crc kubenswrapper[4789]: E0202 21:19:40.354834 4789 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.189:6443: connect: connection refused" logger="UnhandledError"
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.354954 4789 factory.go:153] Registering CRI-O factory
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.355027 4789 factory.go:221] Registration of the crio container factory successfully
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.355155 4789 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.355244 4789 factory.go:103] Registering Raw factory
Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.355351 4789 manager.go:1196] Started watching for new ooms in manager
Feb 02 21:19:40 crc kubenswrapper[4789]: E0202 21:19:40.354510 4789 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp
38.102.83.189:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18908ab8e83d2da3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 21:19:40.331543971 +0000 UTC m=+0.626569060,LastTimestamp:2026-02-02 21:19:40.331543971 +0000 UTC m=+0.626569060,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.356357 4789 manager.go:319] Starting recovery of all containers Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.360462 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.360512 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.360523 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.360534 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.360543 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.360552 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.360562 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.360572 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.360605 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.360615 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.360626 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.360636 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.360645 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.360676 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.360686 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.360695 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.360704 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.360750 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.360762 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.360775 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.360784 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.360795 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.360803 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.360812 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.360824 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.360833 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.360844 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.360856 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.360866 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.360878 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.360888 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.360898 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.360912 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.360923 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.360935 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.360948 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.360957 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.360967 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.360976 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.360988 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.360998 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.361022 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.361033 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.361043 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.361052 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.361063 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.362808 4789 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.362834 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.362847 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.362858 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.362869 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.362880 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.362890 4789 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.362904 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.362916 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.362930 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.362944 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.362956 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.362970 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.362986 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.362998 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363009 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363019 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363028 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363038 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363048 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363059 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363068 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363078 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363088 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363098 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363109 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363119 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363128 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363138 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363147 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363156 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363166 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363176 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363186 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363195 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363204 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363214 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363224 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363235 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363245 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363274 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363285 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363295 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363306 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363329 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363340 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363350 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363361 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363371 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363382 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363391 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363402 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363413 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363422 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363433 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363443 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363454 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363464 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363473 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363488 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363499 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363510 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" 
volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363552 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363569 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363601 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363615 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363626 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363636 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363647 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363658 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363667 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363677 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363687 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363698 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363707 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363717 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363728 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363740 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363750 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363762 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363774 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363785 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363795 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363804 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363815 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363825 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363841 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363852 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363861 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363872 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363884 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363896 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363907 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363920 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363933 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363950 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363961 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363976 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.363990 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364003 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364016 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364029 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364055 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364066 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364078 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364092 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364105 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364115 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364126 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364136 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364146 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364156 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364166 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364179 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364191 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364203 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364216 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364228 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364242 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364254 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364266 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364278 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364292 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364306 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364318 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364332 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364344 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364366 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364379 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364393 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364406 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364419 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364430 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364440 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364450 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364460 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364474 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364486 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364499 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364512 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364524 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364536 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364550 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364563 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364595 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364610 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364624 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364634 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364645 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364657 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364668 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364679 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364692 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364704 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364716 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364731 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364744 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364755 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364768 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364783 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364794 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364810 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364821 4789 reconstruct.go:97] "Volume reconstruction finished" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.364831 4789 reconciler.go:26] "Reconciler: start to sync state" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.400900 4789 manager.go:324] Recovery completed Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.414681 4789 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.418255 4789 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.418292 4789 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.418320 4789 kubelet.go:2335] "Starting kubelet main sync loop" Feb 02 21:19:40 crc kubenswrapper[4789]: E0202 21:19:40.418366 4789 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.419750 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.420644 4789 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.189:6443: connect: connection refused Feb 02 21:19:40 crc kubenswrapper[4789]: E0202 21:19:40.420823 4789 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.189:6443: connect: connection refused" logger="UnhandledError" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.421375 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.421415 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.421426 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.422512 4789 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.422545 4789 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.422689 4789 state_mem.go:36] "Initialized new in-memory state store" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.443064 4789 policy_none.go:49] "None policy: Start" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.444342 4789 memory_manager.go:170] "Starting 
memorymanager" policy="None" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.444387 4789 state_mem.go:35] "Initializing new in-memory state store" Feb 02 21:19:40 crc kubenswrapper[4789]: E0202 21:19:40.447115 4789 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.501191 4789 manager.go:334] "Starting Device Plugin manager" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.501244 4789 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.501258 4789 server.go:79] "Starting device plugin registration server" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.501855 4789 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.501874 4789 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.502562 4789 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.502698 4789 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.502708 4789 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 02 21:19:40 crc kubenswrapper[4789]: E0202 21:19:40.514369 4789 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.519325 4789 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.519432 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.520983 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.521045 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.521060 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.521296 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.522976 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.523013 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.523031 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.523898 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.524076 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.524005 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.524133 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.525112 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.525840 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.525860 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.525892 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.525906 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.525871 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.525963 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.526069 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.526349 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.526383 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.526705 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.526915 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.527394 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.526972 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.527746 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.527762 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.527878 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.528049 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.528113 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.528686 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.529105 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.529163 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.529419 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.529451 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.529347 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.528801 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.529551 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.529568 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.529793 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.529835 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.530859 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.531100 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.531265 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:19:40 crc kubenswrapper[4789]: E0202 21:19:40.555510 4789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.189:6443: connect: connection refused" interval="400ms" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.568887 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.568938 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.568976 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.569008 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.569043 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.569142 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.569191 4789 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.569221 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.569251 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.569280 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.569314 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.569343 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.569373 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.569405 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.569435 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.606445 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 
21:19:40.607962 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.608032 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.608047 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.608084 4789 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 21:19:40 crc kubenswrapper[4789]: E0202 21:19:40.612762 4789 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.189:6443: connect: connection refused" node="crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.670422 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.670524 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.670604 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.670649 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.670683 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.670722 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.670740 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.670789 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.670840 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.670760 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.670914 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.670922 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.670937 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.670939 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.670989 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.671042 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.670943 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 
21:19:40.671091 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.671116 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.671131 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.671050 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.671134 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.671224 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.671245 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.671092 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.671280 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.671285 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") 
" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.671319 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.671386 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.671528 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.812897 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.815377 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.815447 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.815468 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.815511 4789 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 21:19:40 crc kubenswrapper[4789]: E0202 21:19:40.816357 4789 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.189:6443: connect: connection refused" node="crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.873540 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.897113 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.908120 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.919627 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.921232 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-43989a621f6a8a377b75732a1c1f0859fe55547dfe6941c44dd1a17c5f5982c2 WatchSource:0}: Error finding container 43989a621f6a8a377b75732a1c1f0859fe55547dfe6941c44dd1a17c5f5982c2: Status 404 returned error can't find the container with id 43989a621f6a8a377b75732a1c1f0859fe55547dfe6941c44dd1a17c5f5982c2 Feb 02 21:19:40 crc kubenswrapper[4789]: I0202 21:19:40.940226 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.942450 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-fd326ebe8712e4218f33e13bd6ce57fe74710dc04e3953f50f8037e31e0f2a95 WatchSource:0}: Error finding container fd326ebe8712e4218f33e13bd6ce57fe74710dc04e3953f50f8037e31e0f2a95: Status 404 returned error can't find the container with id fd326ebe8712e4218f33e13bd6ce57fe74710dc04e3953f50f8037e31e0f2a95 Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.943833 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-84e25cbf5bf2f93bf59aeb9d3f940c897c433fbdb428a2f34e2ba904fcf4097a WatchSource:0}: Error finding container 84e25cbf5bf2f93bf59aeb9d3f940c897c433fbdb428a2f34e2ba904fcf4097a: Status 404 returned error can't find the container with id 84e25cbf5bf2f93bf59aeb9d3f940c897c433fbdb428a2f34e2ba904fcf4097a Feb 02 21:19:40 crc kubenswrapper[4789]: E0202 21:19:40.956082 4789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.189:6443: connect: connection refused" interval="800ms" Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.960072 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-63597d4444edfd2a0edbfdfc6cd58e2ebc167115b888bc7d4f4d051951492370 WatchSource:0}: Error finding container 63597d4444edfd2a0edbfdfc6cd58e2ebc167115b888bc7d4f4d051951492370: Status 404 returned error can't find the container with id 63597d4444edfd2a0edbfdfc6cd58e2ebc167115b888bc7d4f4d051951492370 Feb 02 21:19:40 crc kubenswrapper[4789]: W0202 21:19:40.965809 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-8807f08932eedadafcd6193c99ef3abfa0d75391ecccb673c270c1b222013b1a WatchSource:0}: Error finding container 8807f08932eedadafcd6193c99ef3abfa0d75391ecccb673c270c1b222013b1a: Status 404 returned error can't find the container with id 8807f08932eedadafcd6193c99ef3abfa0d75391ecccb673c270c1b222013b1a Feb 02 21:19:41 crc kubenswrapper[4789]: I0202 21:19:41.216630 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 21:19:41 crc kubenswrapper[4789]: I0202 21:19:41.220046 4789 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:19:41 crc kubenswrapper[4789]: I0202 21:19:41.220103 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:19:41 crc kubenswrapper[4789]: I0202 21:19:41.220116 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:19:41 crc kubenswrapper[4789]: I0202 21:19:41.220147 4789 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 21:19:41 crc kubenswrapper[4789]: E0202 21:19:41.220843 4789 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.189:6443: connect: connection refused" node="crc" Feb 02 21:19:41 crc kubenswrapper[4789]: W0202 21:19:41.269940 4789 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.189:6443: connect: connection refused Feb 02 21:19:41 crc kubenswrapper[4789]: E0202 21:19:41.270076 4789 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.189:6443: connect: connection refused" logger="UnhandledError" Feb 02 21:19:41 crc kubenswrapper[4789]: I0202 21:19:41.343407 4789 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.189:6443: connect: connection refused Feb 02 21:19:41 crc kubenswrapper[4789]: I0202 21:19:41.348468 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 10:32:54.168325271 +0000 UTC Feb 02 21:19:41 crc kubenswrapper[4789]: I0202 21:19:41.423103 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8807f08932eedadafcd6193c99ef3abfa0d75391ecccb673c270c1b222013b1a"} Feb 02 21:19:41 crc kubenswrapper[4789]: I0202 21:19:41.424798 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"63597d4444edfd2a0edbfdfc6cd58e2ebc167115b888bc7d4f4d051951492370"} Feb 02 21:19:41 crc kubenswrapper[4789]: I0202 21:19:41.426063 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"84e25cbf5bf2f93bf59aeb9d3f940c897c433fbdb428a2f34e2ba904fcf4097a"} Feb 02 21:19:41 crc kubenswrapper[4789]: I0202 21:19:41.427740 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fd326ebe8712e4218f33e13bd6ce57fe74710dc04e3953f50f8037e31e0f2a95"} Feb 02 21:19:41 crc kubenswrapper[4789]: I0202 21:19:41.428910 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"43989a621f6a8a377b75732a1c1f0859fe55547dfe6941c44dd1a17c5f5982c2"} Feb 02 21:19:41 crc kubenswrapper[4789]: W0202 21:19:41.567222 4789 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.189:6443: connect: connection refused Feb 02 21:19:41 crc kubenswrapper[4789]: E0202 21:19:41.567340 4789 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.189:6443: connect: connection refused" logger="UnhandledError" Feb 02 21:19:41 crc kubenswrapper[4789]: W0202 21:19:41.691728 4789 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.189:6443: connect: connection refused Feb 02 21:19:41 crc kubenswrapper[4789]: E0202 21:19:41.691802 4789 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.189:6443: connect: connection refused" logger="UnhandledError" Feb 02 21:19:41 crc kubenswrapper[4789]: W0202 21:19:41.740162 4789 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.189:6443: connect: connection refused Feb 02 21:19:41 crc kubenswrapper[4789]: E0202 21:19:41.740276 4789 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.189:6443: connect: connection refused" logger="UnhandledError" Feb 02 21:19:41 crc kubenswrapper[4789]: E0202 21:19:41.757469 4789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.189:6443: connect: connection refused" interval="1.6s" Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.021600 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.023783 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.023851 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.023872 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.023909 4789 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 21:19:42 crc kubenswrapper[4789]: E0202 
21:19:42.024647 4789 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.189:6443: connect: connection refused" node="crc" Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.343417 4789 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.189:6443: connect: connection refused Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.348744 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 13:15:00.288876848 +0000 UTC Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.387312 4789 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 02 21:19:42 crc kubenswrapper[4789]: E0202 21:19:42.389035 4789 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.189:6443: connect: connection refused" logger="UnhandledError" Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.433982 4789 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="7d78edf2f7e647edfcc306a3feff71c983907077ad05a684af9fa2990deaf3b3" exitCode=0 Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.434099 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"7d78edf2f7e647edfcc306a3feff71c983907077ad05a684af9fa2990deaf3b3"} Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.434195 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.435673 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.435730 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.435750 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.437337 4789 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679" exitCode=0 Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.437419 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679"} Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.437516 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.438697 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 
Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.438791 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.440521 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.441934 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.441961 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.441972 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.443958 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a3685c506f77d3807711a442d68da94932064737cfdc5cb74557da135906e24f"}
Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.444013 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1c292754c81dd4929f8e599e8ec352fd5e90431c60bb741c070a10ffa91fad72"}
Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.444023 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.444032 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9d85613f300f8d008d919038b40efff3fb67e228e25959cbfd2424640c6bc7a8"}
Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.444133 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"460e055777327e1734238bf51929751678ea5a757b00a344daa31c8c95e4c463"}
Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.445151 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.445203 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.445222 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.447863 4789 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="071e779f17817dde231ae50cb1fbe6f00143c4352a32fff682846c5e79283057" exitCode=0
Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.447962 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"071e779f17817dde231ae50cb1fbe6f00143c4352a32fff682846c5e79283057"}
Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.448122 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.454054 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.454095 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.454108 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.458009 4789 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="d1470060c44a356a82b453ed22ef5c3841993bce37eba8523a13c49331499224" exitCode=0
Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.458054 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"d1470060c44a356a82b453ed22ef5c3841993bce37eba8523a13c49331499224"}
Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.458165 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.459897 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.459960 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:19:42 crc kubenswrapper[4789]: I0202 21:19:42.459979 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:19:43 crc kubenswrapper[4789]: W0202 21:19:43.296284 4789 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.189:6443: connect: connection refused
Feb 02 21:19:43 crc kubenswrapper[4789]: E0202 21:19:43.296697 4789 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.189:6443: connect: connection refused" logger="UnhandledError"
Feb 02 21:19:43 crc kubenswrapper[4789]: I0202 21:19:43.343463 4789 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.189:6443: connect: connection refused
Feb 02 21:19:43 crc kubenswrapper[4789]: I0202 21:19:43.724791 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 21:19:43 crc kubenswrapper[4789]: I0202 21:19:43.724887 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 12:59:11.036049679 +0000 UTC
Feb 02 21:19:43 crc kubenswrapper[4789]: E0202 21:19:43.725377 4789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.189:6443: connect: connection refused" interval="3.2s"
Feb 02 21:19:43 crc kubenswrapper[4789]: W0202 21:19:43.725420 4789 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.189:6443: connect: connection refused
Feb 02 21:19:43 crc kubenswrapper[4789]: E0202 21:19:43.725607 4789 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.189:6443: connect: connection refused" logger="UnhandledError"
Feb 02 21:19:43 crc kubenswrapper[4789]: W0202 21:19:43.725562 4789 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.189:6443: connect: connection refused
Feb 02 21:19:43 crc kubenswrapper[4789]: E0202 21:19:43.725706 4789 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.189:6443: connect: connection refused" logger="UnhandledError"
Feb 02 21:19:43 crc kubenswrapper[4789]: I0202 21:19:43.728187 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:19:43 crc kubenswrapper[4789]: I0202 21:19:43.728242 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:19:43 crc kubenswrapper[4789]: I0202 21:19:43.728262 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:19:43 crc kubenswrapper[4789]: I0202 21:19:43.728303 4789 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 02 21:19:43 crc kubenswrapper[4789]: E0202 21:19:43.728846 4789 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.189:6443: connect: connection refused" node="crc"
Feb 02 21:19:43 crc kubenswrapper[4789]: I0202 21:19:43.738272 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9ed17431bc8880523ea84349c68ea56e389033b550390d88e60373f663d1491f"}
Feb 02 21:19:43 crc kubenswrapper[4789]: I0202 21:19:43.738333 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"eface13a61e8f9d1e8e9512c78a4c70973bfad708c3cdea7f7251f6fa408a59f"}
Feb 02 21:19:43 crc kubenswrapper[4789]: I0202 21:19:43.738370 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e3c3d527f77f26e052c4ef9c0577938dc23c802918e742b9fb9020cb6ba705f4"}
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e3c3d527f77f26e052c4ef9c0577938dc23c802918e742b9fb9020cb6ba705f4"} Feb 02 21:19:43 crc kubenswrapper[4789]: I0202 21:19:43.738337 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 21:19:43 crc kubenswrapper[4789]: I0202 21:19:43.740020 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:19:43 crc kubenswrapper[4789]: I0202 21:19:43.740061 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:19:43 crc kubenswrapper[4789]: I0202 21:19:43.740080 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:19:43 crc kubenswrapper[4789]: I0202 21:19:43.744221 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f"} Feb 02 21:19:43 crc kubenswrapper[4789]: I0202 21:19:43.744279 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2"} Feb 02 21:19:43 crc kubenswrapper[4789]: I0202 21:19:43.744307 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265"} Feb 02 21:19:43 crc kubenswrapper[4789]: I0202 21:19:43.746979 4789 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="af6e1b9f1634a6356429edaacc8ca24aa1c6f76d08b4996af50735c22fccf6b8" exitCode=0 Feb 02 21:19:43 crc kubenswrapper[4789]: I0202 21:19:43.747028 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"af6e1b9f1634a6356429edaacc8ca24aa1c6f76d08b4996af50735c22fccf6b8"} Feb 02 21:19:43 crc kubenswrapper[4789]: I0202 21:19:43.747107 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 21:19:43 crc kubenswrapper[4789]: I0202 21:19:43.748218 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:19:43 crc kubenswrapper[4789]: I0202 21:19:43.748255 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:19:43 crc kubenswrapper[4789]: I0202 21:19:43.748264 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:19:43 crc kubenswrapper[4789]: I0202 21:19:43.750263 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c523dec61c09703463bce6b000fb79c832b3c190a960fe0097b654fd672477c3"} Feb 02 21:19:43 crc kubenswrapper[4789]: I0202 21:19:43.750300 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 21:19:43 crc kubenswrapper[4789]: I0202 
21:19:43.750332 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 21:19:43 crc kubenswrapper[4789]: I0202 21:19:43.751481 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:19:43 crc kubenswrapper[4789]: I0202 21:19:43.751509 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:19:43 crc kubenswrapper[4789]: I0202 21:19:43.751518 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:19:43 crc kubenswrapper[4789]: I0202 21:19:43.751567 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:19:43 crc kubenswrapper[4789]: I0202 21:19:43.751634 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:19:43 crc kubenswrapper[4789]: I0202 21:19:43.751653 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:19:44 crc kubenswrapper[4789]: W0202 21:19:44.149805 4789 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.189:6443: connect: connection refused Feb 02 21:19:44 crc kubenswrapper[4789]: E0202 21:19:44.149917 4789 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.189:6443: connect: connection refused" logger="UnhandledError" Feb 02 21:19:44 crc kubenswrapper[4789]: I0202 21:19:44.343283 4789 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.189:6443: connect: connection refused Feb 02 21:19:44 crc kubenswrapper[4789]: I0202 21:19:44.725216 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 05:04:45.579780313 +0000 UTC Feb 02 21:19:44 crc kubenswrapper[4789]: I0202 21:19:44.758641 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d3b4af558134618b21a9bc8c039a224f70bd73a606dd1f3965ec71961c7cd675"} Feb 02 21:19:44 crc kubenswrapper[4789]: I0202 21:19:44.758687 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9"} Feb 02 21:19:44 crc kubenswrapper[4789]: I0202 21:19:44.758965 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 21:19:44 crc kubenswrapper[4789]: I0202 21:19:44.760668 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:19:44 crc kubenswrapper[4789]: I0202 21:19:44.760698 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 21:19:44 crc kubenswrapper[4789]: I0202 21:19:44.760706 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:19:44 crc kubenswrapper[4789]: I0202 21:19:44.764415 4789 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="07979540ce64d2a7ae83b290fd86f7146e81a911a880c37ab73c3db14f6d00df" exitCode=0 Feb 02 21:19:44 crc kubenswrapper[4789]: I0202 21:19:44.764545 4789 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 21:19:44 crc kubenswrapper[4789]: I0202 21:19:44.764597 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 21:19:44 crc kubenswrapper[4789]: I0202 21:19:44.764635 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 21:19:44 crc kubenswrapper[4789]: I0202 21:19:44.764638 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 21:19:44 crc kubenswrapper[4789]: I0202 21:19:44.765224 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"07979540ce64d2a7ae83b290fd86f7146e81a911a880c37ab73c3db14f6d00df"} Feb 02 21:19:44 crc kubenswrapper[4789]: I0202 21:19:44.766345 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:19:44 crc kubenswrapper[4789]: I0202 21:19:44.766395 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:19:44 crc kubenswrapper[4789]: I0202 21:19:44.766423 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:19:44 crc kubenswrapper[4789]: I0202 21:19:44.766469 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:19:44 crc kubenswrapper[4789]: I0202 21:19:44.766484 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:19:44 crc kubenswrapper[4789]: I0202 21:19:44.766492 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:19:44 crc kubenswrapper[4789]: I0202 21:19:44.766645 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:19:44 crc kubenswrapper[4789]: I0202 21:19:44.766662 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:19:44 crc kubenswrapper[4789]: I0202 21:19:44.766669 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:19:44 crc kubenswrapper[4789]: I0202 21:19:44.858717 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 21:19:45 crc kubenswrapper[4789]: I0202 21:19:45.726042 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 04:04:13.350623117 +0000 UTC Feb 02 21:19:45 crc kubenswrapper[4789]: I0202 21:19:45.772733 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4e943cb2ca6cd627faca126484144191f1e66200e7532b03ae34d8ba2ce55b55"} Feb 02 21:19:45 crc kubenswrapper[4789]: I0202 21:19:45.772820 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f0b49204c1d59d2d4b80e160161b16cba7e45b50c5e3b2c2f0c8249140c7fd08"} Feb 02 21:19:45 crc kubenswrapper[4789]: I0202 21:19:45.772842 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e7f62245b06747aa7501fa552aff9d13f26b867b8ac6e26543b42e258351ba1e"} Feb 02 21:19:45 crc kubenswrapper[4789]: I0202 21:19:45.772846 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 21:19:45 crc kubenswrapper[4789]: I0202 21:19:45.772891 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 21:19:45 crc kubenswrapper[4789]: I0202 21:19:45.775001 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:19:45 crc kubenswrapper[4789]: I0202 21:19:45.775415 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:19:45 crc kubenswrapper[4789]: I0202 21:19:45.775468 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:19:46 crc kubenswrapper[4789]: I0202 21:19:46.547270 4789 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 02 21:19:46 crc kubenswrapper[4789]: I0202 21:19:46.726968 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 20:56:46.168478396 +0000 UTC Feb 02 21:19:46 crc kubenswrapper[4789]: I0202 21:19:46.782663 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6775862ab8c12cb8e8a61fab828f307d6dd94289ad0f86d897edee089db17f31"} Feb 02 21:19:46 crc kubenswrapper[4789]: I0202 21:19:46.782746 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"74fffc681c9c96016d5d3898000dc8910d872e08a31d2fb520a6ad0a9ae3307a"} Feb 02 21:19:46 crc kubenswrapper[4789]: I0202 21:19:46.782804 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 21:19:46 crc kubenswrapper[4789]: I0202 21:19:46.782920 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 21:19:46 crc kubenswrapper[4789]: I0202 21:19:46.784492 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:19:46 crc kubenswrapper[4789]: I0202 21:19:46.784636 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:19:46 crc kubenswrapper[4789]: I0202 21:19:46.784645 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:19:46 crc kubenswrapper[4789]: I0202 21:19:46.784683 4789 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:19:46 crc kubenswrapper[4789]: I0202 21:19:46.784697 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:19:46 crc kubenswrapper[4789]: I0202 21:19:46.784734 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:19:46 crc kubenswrapper[4789]: I0202 21:19:46.929500 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 21:19:46 crc kubenswrapper[4789]: I0202 21:19:46.931486 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:19:46 crc kubenswrapper[4789]: I0202 21:19:46.931545 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:19:46 crc kubenswrapper[4789]: I0202 21:19:46.931563 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:19:46 crc kubenswrapper[4789]: I0202 21:19:46.931633 4789 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 21:19:46 crc kubenswrapper[4789]: I0202 21:19:46.944539 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 21:19:46 crc kubenswrapper[4789]: I0202 21:19:46.944788 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 21:19:46 crc kubenswrapper[4789]: I0202 21:19:46.946087 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:19:46 crc kubenswrapper[4789]: I0202 21:19:46.946176 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:19:46 crc kubenswrapper[4789]: I0202 21:19:46.946248 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:19:46 crc kubenswrapper[4789]: I0202 21:19:46.950520 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 02 21:19:47 crc kubenswrapper[4789]: I0202 21:19:47.122713 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 21:19:47 crc kubenswrapper[4789]: I0202 21:19:47.129296 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 21:19:47 crc kubenswrapper[4789]: I0202 21:19:47.543151 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 21:19:47 crc kubenswrapper[4789]: I0202 21:19:47.650622 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 21:19:47 crc kubenswrapper[4789]: I0202 21:19:47.654496 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 21:19:47 crc kubenswrapper[4789]: I0202 21:19:47.727641 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 21:41:16.854132066 +0000 UTC Feb 02 21:19:47 
crc kubenswrapper[4789]: I0202 21:19:47.788116 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 21:19:47 crc kubenswrapper[4789]: I0202 21:19:47.788154 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 21:19:47 crc kubenswrapper[4789]: I0202 21:19:47.788303 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 21:19:47 crc kubenswrapper[4789]: I0202 21:19:47.789722 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:19:47 crc kubenswrapper[4789]: I0202 21:19:47.789755 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:19:47 crc kubenswrapper[4789]: I0202 21:19:47.789723 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:19:47 crc kubenswrapper[4789]: I0202 21:19:47.789766 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:19:47 crc kubenswrapper[4789]: I0202 21:19:47.789778 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:19:47 crc kubenswrapper[4789]: I0202 21:19:47.789791 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:19:47 crc kubenswrapper[4789]: I0202 21:19:47.790062 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:19:47 crc kubenswrapper[4789]: I0202 21:19:47.790135 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:19:47 crc kubenswrapper[4789]: I0202 21:19:47.790152 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:19:48 crc kubenswrapper[4789]: I0202 21:19:48.728659 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 06:23:11.702298924 +0000 UTC Feb 02 21:19:48 crc kubenswrapper[4789]: I0202 21:19:48.790771 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 21:19:48 crc kubenswrapper[4789]: I0202 21:19:48.790890 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 21:19:48 crc kubenswrapper[4789]: I0202 21:19:48.792187 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:19:48 crc kubenswrapper[4789]: I0202 21:19:48.792227 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:19:48 crc kubenswrapper[4789]: I0202 21:19:48.792242 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:19:48 crc kubenswrapper[4789]: I0202 21:19:48.792244 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:19:48 crc kubenswrapper[4789]: I0202 21:19:48.792264 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:19:48 crc kubenswrapper[4789]: I0202 21:19:48.792277 4789 
Feb 02 21:19:49 crc kubenswrapper[4789]: I0202 21:19:49.543209 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 02 21:19:49 crc kubenswrapper[4789]: I0202 21:19:49.543400 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 21:19:49 crc kubenswrapper[4789]: I0202 21:19:49.545127 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:19:49 crc kubenswrapper[4789]: I0202 21:19:49.545192 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:19:49 crc kubenswrapper[4789]: I0202 21:19:49.545227 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:19:49 crc kubenswrapper[4789]: I0202 21:19:49.586802 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Feb 02 21:19:49 crc kubenswrapper[4789]: I0202 21:19:49.729899 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 10:51:46.103739251 +0000 UTC
Feb 02 21:19:49 crc kubenswrapper[4789]: I0202 21:19:49.794475 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 21:19:49 crc kubenswrapper[4789]: I0202 21:19:49.796194 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:19:49 crc kubenswrapper[4789]: I0202 21:19:49.796258 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:19:49 crc kubenswrapper[4789]: I0202 21:19:49.796279 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:19:50 crc kubenswrapper[4789]: E0202 21:19:50.514570 4789 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 02 21:19:50 crc kubenswrapper[4789]: I0202 21:19:50.651518 4789 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 02 21:19:50 crc kubenswrapper[4789]: I0202 21:19:50.651664 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 02 21:19:50 crc kubenswrapper[4789]: I0202 21:19:50.731243 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 23:38:24.935730991 +0000 UTC
Feb 02 21:19:51 crc kubenswrapper[4789]: I0202 21:19:51.731686 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 04:03:56.400110558 +0000 UTC
Feb 02 21:19:52 crc kubenswrapper[4789]: I0202 21:19:52.732124 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 13:42:30.123860315 +0000 UTC
Feb 02 21:19:53 crc kubenswrapper[4789]: I0202 21:19:53.733194 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 13:27:46.455575465 +0000 UTC
Feb 02 21:19:54 crc kubenswrapper[4789]: I0202 21:19:54.734239 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 15:31:31.932428268 +0000 UTC
Feb 02 21:19:55 crc kubenswrapper[4789]: I0202 21:19:55.345325 4789 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Feb 02 21:19:55 crc kubenswrapper[4789]: I0202 21:19:55.602062 4789 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 02 21:19:55 crc kubenswrapper[4789]: I0202 21:19:55.602133 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 02 21:19:55 crc kubenswrapper[4789]: I0202 21:19:55.611879 4789 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 02 21:19:55 crc kubenswrapper[4789]: I0202 21:19:55.611984 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 02 21:19:55 crc kubenswrapper[4789]: I0202 21:19:55.734984 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 03:20:43.654024413 +0000 UTC
Feb 02 21:19:55 crc kubenswrapper[4789]: I0202 21:19:55.812873 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 02 21:19:55 crc kubenswrapper[4789]: I0202 21:19:55.814953 4789 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d3b4af558134618b21a9bc8c039a224f70bd73a606dd1f3965ec71961c7cd675" exitCode=255
Feb 02 21:19:55 crc kubenswrapper[4789]: I0202 21:19:55.815016 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"d3b4af558134618b21a9bc8c039a224f70bd73a606dd1f3965ec71961c7cd675"}
Feb 02 21:19:55 crc kubenswrapper[4789]: I0202 21:19:55.815161 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 21:19:55 crc kubenswrapper[4789]: I0202 21:19:55.815943 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:19:55 crc kubenswrapper[4789]: I0202 21:19:55.815972 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:19:55 crc kubenswrapper[4789]: I0202 21:19:55.815985 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:19:55 crc kubenswrapper[4789]: I0202 21:19:55.816486 4789 scope.go:117] "RemoveContainer" containerID="d3b4af558134618b21a9bc8c039a224f70bd73a606dd1f3965ec71961c7cd675"
Feb 02 21:19:56 crc kubenswrapper[4789]: I0202 21:19:56.736082 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 00:51:08.994954445 +0000 UTC
Feb 02 21:19:56 crc kubenswrapper[4789]: I0202 21:19:56.820916 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 02 21:19:56 crc kubenswrapper[4789]: I0202 21:19:56.823819 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7"}
Feb 02 21:19:56 crc kubenswrapper[4789]: I0202 21:19:56.824023 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 21:19:56 crc kubenswrapper[4789]: I0202 21:19:56.825209 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:19:56 crc kubenswrapper[4789]: I0202 21:19:56.825259 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:19:56 crc kubenswrapper[4789]: I0202 21:19:56.825276 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:19:56 crc kubenswrapper[4789]: I0202 21:19:56.983383 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Feb 02 21:19:56 crc kubenswrapper[4789]: I0202 21:19:56.983554 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 21:19:56 crc kubenswrapper[4789]: I0202 21:19:56.984868 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:19:56 crc kubenswrapper[4789]: I0202 21:19:56.985025 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:19:56 crc kubenswrapper[4789]: I0202 21:19:56.985128 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:19:56 crc kubenswrapper[4789]: I0202 21:19:56.997310 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Feb 02 21:19:57 crc kubenswrapper[4789]: I0202 21:19:57.553887 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 21:19:57 crc kubenswrapper[4789]: I0202 21:19:57.661042 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 21:19:57 crc kubenswrapper[4789]: I0202 21:19:57.661286 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 21:19:57 crc kubenswrapper[4789]: I0202 21:19:57.662989 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:19:57 crc kubenswrapper[4789]: I0202 21:19:57.663063 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:19:57 crc kubenswrapper[4789]: I0202 21:19:57.663092 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:19:57 crc kubenswrapper[4789]: I0202 21:19:57.736485 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 04:37:15.134730498 +0000 UTC
Feb 02 21:19:57 crc kubenswrapper[4789]: I0202 21:19:57.826185 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 21:19:57 crc kubenswrapper[4789]: I0202 21:19:57.826329 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 21:19:57 crc kubenswrapper[4789]: I0202 21:19:57.826394 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 21:19:57 crc kubenswrapper[4789]: I0202 21:19:57.828275 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:19:57 crc kubenswrapper[4789]: I0202 21:19:57.828663 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:19:57 crc kubenswrapper[4789]: I0202 21:19:57.828886 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:19:57 crc kubenswrapper[4789]: I0202 21:19:57.828548 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:19:57 crc kubenswrapper[4789]: I0202 21:19:57.829131 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:19:57 crc kubenswrapper[4789]: I0202 21:19:57.829140 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:19:57 crc kubenswrapper[4789]: I0202 21:19:57.831230 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 21:19:58 crc kubenswrapper[4789]: I0202 21:19:58.736896 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 12:18:02.059338277 +0000 UTC
Feb 02 21:19:58 crc kubenswrapper[4789]: I0202 21:19:58.830028 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 21:19:58 crc kubenswrapper[4789]: I0202 21:19:58.831656 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:19:58 crc kubenswrapper[4789]: I0202 21:19:58.831699 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:19:58 crc kubenswrapper[4789]: I0202 21:19:58.831718 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:19:59 crc kubenswrapper[4789]: I0202 21:19:59.738041 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 15:29:51.369595844 +0000 UTC
Feb 02 21:19:59 crc kubenswrapper[4789]: I0202 21:19:59.832616 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 21:19:59 crc kubenswrapper[4789]: I0202 21:19:59.833751 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:19:59 crc kubenswrapper[4789]: I0202 21:19:59.833782 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:19:59 crc kubenswrapper[4789]: I0202 21:19:59.833791 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:00 crc kubenswrapper[4789]: E0202 21:20:00.514736 4789 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 02 21:20:00 crc kubenswrapper[4789]: E0202 21:20:00.589455 4789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Feb 02 21:20:00 crc kubenswrapper[4789]: I0202 21:20:00.592420 4789 trace.go:236] Trace[983031182]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 21:19:49.225) (total time: 11366ms):
Feb 02 21:20:00 crc kubenswrapper[4789]: Trace[983031182]: ---"Objects listed" error: 11366ms (21:20:00.592)
Feb 02 21:20:00 crc kubenswrapper[4789]: Trace[983031182]: [11.366357469s] [11.366357469s] END
Feb 02 21:20:00 crc kubenswrapper[4789]: I0202 21:20:00.592473 4789 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 02 21:20:00 crc kubenswrapper[4789]: I0202 21:20:00.593777 4789 trace.go:236] Trace[304991720]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 21:19:49.387) (total time: 11206ms):
Feb 02 21:20:00 crc kubenswrapper[4789]: Trace[304991720]: ---"Objects listed" error: 11206ms (21:20:00.593)
Feb 02 21:20:00 crc kubenswrapper[4789]: Trace[304991720]: [11.20617645s] [11.20617645s] END
Feb 02 21:20:00 crc kubenswrapper[4789]: I0202 21:20:00.594068 4789 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 02 21:20:00 crc kubenswrapper[4789]: I0202 21:20:00.594437 4789 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Feb 02 21:20:00 crc kubenswrapper[4789]: I0202 21:20:00.594629 4789 trace.go:236] Trace[519830364]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 21:19:47.899) (total time: 12695ms):
Feb 02 21:20:00 crc kubenswrapper[4789]: Trace[519830364]: ---"Objects listed" error: 12695ms (21:20:00.594)
Feb 02 21:20:00 crc kubenswrapper[4789]: Trace[519830364]: [12.695422376s] [12.695422376s] END
Feb 02 21:20:00 crc kubenswrapper[4789]: I0202 21:20:00.594672 4789 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 02 21:20:00 crc kubenswrapper[4789]: E0202 21:20:00.594978 4789 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Feb 02 21:20:00 crc kubenswrapper[4789]: I0202 21:20:00.596450 4789 trace.go:236] Trace[1569721549]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 21:19:47.285) (total time: 13310ms):
Feb 02 21:20:00 crc kubenswrapper[4789]: Trace[1569721549]: ---"Objects listed" error: 13310ms (21:20:00.596)
Feb 02 21:20:00 crc kubenswrapper[4789]: Trace[1569721549]: [13.310701593s] [13.310701593s] END
Feb 02 21:20:00 crc kubenswrapper[4789]: I0202 21:20:00.596489 4789 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 02 21:20:00 crc kubenswrapper[4789]: I0202 21:20:00.620414 4789 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 02 21:20:00 crc kubenswrapper[4789]: I0202 21:20:00.652162 4789 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 02 21:20:00 crc kubenswrapper[4789]: I0202 21:20:00.652516 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 02 21:20:00 crc kubenswrapper[4789]: I0202 21:20:00.664180 4789 csr.go:261] certificate signing request csr-45tjh is approved, waiting to be issued
Feb 02 21:20:00 crc kubenswrapper[4789]: I0202 21:20:00.681558 4789 csr.go:257] certificate signing request csr-45tjh is issued
Feb 02 21:20:00 crc kubenswrapper[4789]: I0202 21:20:00.738807 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 05:41:54.314353947 +0000 UTC
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.329111 4789 apiserver.go:52] "Watching apiserver"
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.333569 4789 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.334138 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-6l576","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"]
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.334575 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.334814 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-6l576"
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.334714 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.334723 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.334732 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.334701 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.334878 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 02 21:20:01 crc kubenswrapper[4789]: E0202 21:20:01.335884 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 21:20:01 crc kubenswrapper[4789]: E0202 21:20:01.335922 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 21:20:01 crc kubenswrapper[4789]: E0202 21:20:01.336023 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.338642 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.339120 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.339280 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.339546 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.339556 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.340143 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.340383 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.340528 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.340965 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.341833 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.342311 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.342433 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.356030 4789 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.388623 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.399448 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.399486 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.399503 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.399522 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.399540 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.399557 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.399592 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.399611 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.399625 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.399642 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.399659 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.399675 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.399694 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.399708 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.399724 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.399739 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.399755 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.399772 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.399786 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.399800 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.399817 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.399832 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.399847 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.399863 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.399877 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 
21:20:01.399891 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.399905 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.399922 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.399938 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.399957 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.399974 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.399988 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400003 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400017 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400033 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 02 21:20:01 crc 
kubenswrapper[4789]: I0202 21:20:01.400049 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400063 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400078 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400094 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400109 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400123 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400138 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400154 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400188 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400204 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400221 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400238 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400252 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400267 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400281 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400297 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400312 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400327 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400341 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400358 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400374 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400389 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400405 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400421 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400436 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400455 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400470 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400485 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400501 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400516 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400530 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400544 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400559 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400592 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400608 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400628 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400645 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400661 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400677 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400693 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: 
\"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400710 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400727 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400742 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400758 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400773 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400790 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400806 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400822 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400838 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400855 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400871 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400890 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400905 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400921 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400937 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400952 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400967 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400983 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.400997 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401013 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" 
(UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401029 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401044 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401060 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401075 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401090 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401106 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401121 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401137 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401153 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401169 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401184 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401200 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401216 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401233 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401248 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401264 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401279 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401296 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401311 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401329 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401345 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401361 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401376 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401393 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401408 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401425 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401442 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401461 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401477 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401493 4789 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401594 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401612 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401628 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401645 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401661 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401679 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401697 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401714 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401730 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401747 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401764 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401784 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401801 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401817 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401836 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401852 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401868 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401883 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401898 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod 
\"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401914 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401930 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401946 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401962 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401978 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.401993 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402011 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402027 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402043 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402060 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402076 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402092 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402108 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402124 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402141 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402156 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402173 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402189 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402208 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402227 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402244 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402262 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402281 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402298 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402314 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402332 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402349 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402366 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402382 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402398 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402415 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402431 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402448 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402464 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402480 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402496 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402512 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402528 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402552 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402569 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402599 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402616 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402635 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402651 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402668 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402685 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402702 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402718 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402734 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402752 4789 
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402752 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402770 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402817 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402838 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402860 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402882 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402903 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402921 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402939 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwbqf\" (UniqueName: \"kubernetes.io/projected/cd970d28-4009-48b2-a0f4-2b8b1d54a2cb-kube-api-access-jwbqf\") pod \"node-resolver-6l576\" (UID: \"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\") " pod="openshift-dns/node-resolver-6l576"
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402958 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.402978 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.403034 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.403052 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/cd970d28-4009-48b2-a0f4-2b8b1d54a2cb-hosts-file\") pod \"node-resolver-6l576\" (UID: \"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\") " pod="openshift-dns/node-resolver-6l576"
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.403095 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.403113 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.403132 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.403186 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.403209 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 21:20:01 crc kubenswrapper[4789]: E0202 21:20:01.403296 4789 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 02 21:20:01 crc kubenswrapper[4789]: E0202 21:20:01.403353 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 21:20:01.903335723 +0000 UTC m=+22.198360742 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.403593 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.403575 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.403743 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.403815 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.404188 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.404324 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.404483 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.404680 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.404798 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.404822 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.404900 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.404968 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.405076 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.405162 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.405205 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.405288 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.405400 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.405508 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.405718 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.405725 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.405746 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.405795 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.405938 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.405970 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.406007 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.406120 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.406140 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.406226 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.406352 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.406382 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.406432 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.406521 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.406538 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.406811 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.407121 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.407613 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.407824 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.407928 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.408043 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.408053 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.408247 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.408253 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.408467 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.408508 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.408688 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.409098 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.409272 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.409490 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.410878 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.411222 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.411470 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.411670 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.411754 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.412062 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.412288 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.412313 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.412569 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.413081 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.413203 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.413432 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.413705 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.413726 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.413875 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.414069 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.414221 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.414744 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.414861 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.415069 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.415226 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.415306 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.415349 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). 
InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.415384 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.415480 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.415619 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.415676 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.415721 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.415772 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.415833 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: E0202 21:20:01.415893 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-02 21:20:01.915875953 +0000 UTC m=+22.210900972 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.415980 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.416015 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.415942 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.416106 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.416144 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.416715 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.416838 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.416871 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.417394 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.417631 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.418190 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.418520 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.418542 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.418657 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.418936 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.419014 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.419230 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.419307 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.419598 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.419834 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.420096 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.420315 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.420773 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.421259 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.421360 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.421572 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.421652 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.421715 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.421828 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.422009 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.422022 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.422119 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.422277 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.422445 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.422541 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.422546 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.422610 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.422780 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.423658 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.423766 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.423955 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.424053 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.424255 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.424256 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.424271 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.424277 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.424459 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.424489 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.424639 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.424774 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.424898 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.426745 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.433030 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.433543 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.433903 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.434196 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). 
InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.434281 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.434497 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.434885 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.434940 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.435001 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.435230 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.435248 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.435467 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.435490 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.435647 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.435750 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.435930 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.436205 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.436252 4789 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.436440 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.436526 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.436651 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.436698 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.436738 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.436859 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.436915 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.436885 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.436954 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.436949 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.436992 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.437157 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.437195 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.437200 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.437439 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.437529 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.437525 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.437767 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.437875 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.438075 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.438257 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.438389 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: E0202 21:20:01.438669 4789 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 21:20:01 crc kubenswrapper[4789]: E0202 21:20:01.438761 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 21:20:01.938738388 +0000 UTC m=+22.233763417 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.438883 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.438933 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). 
InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.439291 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.441257 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.441377 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.441515 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.441908 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.442571 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.442872 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.442979 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.443824 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.447647 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.449719 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.453310 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.453746 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.454095 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: E0202 21:20:01.454221 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 21:20:01 crc kubenswrapper[4789]: E0202 21:20:01.454249 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.454258 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: E0202 21:20:01.454264 4789 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 21:20:01 crc kubenswrapper[4789]: E0202 21:20:01.454326 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 21:20:01.954309014 +0000 UTC m=+22.249334053 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.454388 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.454483 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.455325 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.455791 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.456231 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.456307 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.456793 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: E0202 21:20:01.457854 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 21:20:01 crc kubenswrapper[4789]: E0202 21:20:01.457901 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 21:20:01 crc kubenswrapper[4789]: E0202 21:20:01.457917 4789 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 21:20:01 crc kubenswrapper[4789]: E0202 21:20:01.457990 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 21:20:01.957947358 +0000 UTC m=+22.252972387 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.460799 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.460984 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.476215 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.476723 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.477103 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.477119 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.477165 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.478214 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.489118 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.498998 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.501825 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.504222 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.504513 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.504643 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwbqf\" (UniqueName: \"kubernetes.io/projected/cd970d28-4009-48b2-a0f4-2b8b1d54a2cb-kube-api-access-jwbqf\") pod \"node-resolver-6l576\" (UID: \"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\") " 
pod="openshift-dns/node-resolver-6l576" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.504795 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.504834 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/cd970d28-4009-48b2-a0f4-2b8b1d54a2cb-hosts-file\") pod \"node-resolver-6l576\" (UID: \"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\") " pod="openshift-dns/node-resolver-6l576" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.504913 4789 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.504928 4789 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.504930 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.504939 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.504984 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505005 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505024 4789 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505043 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/cd970d28-4009-48b2-a0f4-2b8b1d54a2cb-hosts-file\") pod \"node-resolver-6l576\" (UID: \"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\") " pod="openshift-dns/node-resolver-6l576" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505041 4789 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505069 4789 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" 
(UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505077 4789 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505087 4789 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505095 4789 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505104 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505113 4789 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505122 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505131 4789 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505141 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505150 4789 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505159 4789 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505168 4789 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505176 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505184 4789 reconciler_common.go:293] "Volume detached for volume 
\"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505192 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505200 4789 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505208 4789 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505217 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505225 4789 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505234 4789 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505242 4789 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505251 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505259 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505266 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505274 4789 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505282 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505290 4789 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505298 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505306 4789 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505316 4789 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505324 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505332 4789 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505340 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505348 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505356 4789 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505366 4789 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505374 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505384 4789 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505393 4789 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505401 4789 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505409 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505417 4789 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505424 4789 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505432 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505440 4789 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505448 4789 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505456 4789 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505464 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505472 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505480 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505488 4789 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505498 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505508 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505517 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505525 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505534 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505542 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505550 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505558 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505567 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505574 4789 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505595 4789 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505603 4789 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505611 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505622 4789 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505629 4789 
reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505637 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505645 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505653 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505661 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505669 4789 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505677 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505684 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505692 4789 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505700 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505708 4789 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505716 4789 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505724 4789 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505731 4789 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505739 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505747 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505755 4789 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505763 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505771 4789 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505779 4789 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505787 4789 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505795 4789 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505821 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505829 4789 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505837 4789 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505844 4789 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505852 4789 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505860 4789 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505867 4789 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505875 4789 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505883 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505892 4789 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505900 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505907 4789 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505915 4789 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505923 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505931 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505940 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505949 4789 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505958 4789 reconciler_common.go:293] "Volume detached 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505965 4789 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505973 4789 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505981 4789 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505988 4789 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.505995 4789 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506003 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506012 4789 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506020 4789 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506027 4789 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506035 4789 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506042 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506050 4789 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506058 4789 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506066 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506074 4789 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506084 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506091 4789 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506099 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506107 4789 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506114 4789 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506123 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506130 4789 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506138 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506147 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506154 4789 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506162 4789 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506169 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506178 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506186 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506193 4789 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506201 4789 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506209 4789 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506216 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506224 4789 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506232 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506240 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506248 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506256 4789 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506264 4789 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506272 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506279 4789 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506287 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506294 4789 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506302 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506310 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506317 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506325 4789 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506333 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506341 4789 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506349 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506357 4789 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506365 4789 reconciler_common.go:293] "Volume detached for volume 
\"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506372 4789 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506380 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506375 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506388 4789 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506475 4789 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506491 4789 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506503 4789 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506515 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506527 4789 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506541 4789 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506551 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506562 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: 
\"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506574 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506686 4789 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506698 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506709 4789 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506719 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506729 4789 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506740 4789 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506752 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506764 4789 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506775 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506786 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506797 4789 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506808 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: 
\"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506820 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506830 4789 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506842 4789 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506853 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.506865 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.515900 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.518438 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwbqf\" (UniqueName: \"kubernetes.io/projected/cd970d28-4009-48b2-a0f4-2b8b1d54a2cb-kube-api-access-jwbqf\") pod \"node-resolver-6l576\" (UID: \"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\") " pod="openshift-dns/node-resolver-6l576" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.523748 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.544371 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.608227 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.657128 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.671558 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-6l576" Feb 02 21:20:01 crc kubenswrapper[4789]: W0202 21:20:01.681778 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd970d28_4009_48b2_a0f4_2b8b1d54a2cb.slice/crio-ad520b0b14cf4c9530c0dc3469bb706890f63e399744276a9e07ea91509f3772 WatchSource:0}: Error finding container ad520b0b14cf4c9530c0dc3469bb706890f63e399744276a9e07ea91509f3772: Status 404 returned error can't find the container with id ad520b0b14cf4c9530c0dc3469bb706890f63e399744276a9e07ea91509f3772 Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.682483 4789 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-02 21:15:00 +0000 UTC, rotation deadline is 2026-11-27 18:01:16.781340075 +0000 UTC Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.682517 4789 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7148h41m15.098826703s for next certificate rotation Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.694678 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 21:20:01 crc kubenswrapper[4789]: W0202 21:20:01.708263 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-64a73c29a614cc13d071dd89f1532b6d6cc8798563cb59e6fd78a2dfe66a499b WatchSource:0}: Error finding container 64a73c29a614cc13d071dd89f1532b6d6cc8798563cb59e6fd78a2dfe66a499b: Status 404 returned error can't find the container with id 64a73c29a614cc13d071dd89f1532b6d6cc8798563cb59e6fd78a2dfe66a499b Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.714134 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.739340 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 07:14:24.59974461 +0000 UTC Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.846782 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-6l576" event={"ID":"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb","Type":"ContainerStarted","Data":"ad520b0b14cf4c9530c0dc3469bb706890f63e399744276a9e07ea91509f3772"} Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.851774 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"11ed0ccb1bf0af071ec7d0d54fed6736b5cdb9b31461cd70384452a1d2de22b8"} Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.858056 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"787e60f23108aec3c718a6a3a6b7502e2d5969056e16bd9818871f09f756ad32"} Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.861727 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"64a73c29a614cc13d071dd89f1532b6d6cc8798563cb59e6fd78a2dfe66a499b"} Feb 02 21:20:01 crc kubenswrapper[4789]: I0202 21:20:01.910717 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:20:01 crc kubenswrapper[4789]: E0202 21:20:01.910811 4789 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 21:20:01 crc kubenswrapper[4789]: E0202 21:20:01.910871 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 21:20:02.910855354 +0000 UTC m=+23.205880383 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.011359 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.011444 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.011468 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.011486 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:20:02 crc kubenswrapper[4789]: E0202 21:20:02.011601 4789 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 21:20:02 crc kubenswrapper[4789]: E0202 21:20:02.011723 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 21:20:02 crc kubenswrapper[4789]: E0202 21:20:02.011736 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 21:20:02 crc kubenswrapper[4789]: E0202 21:20:02.011748 4789 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 21:20:02 crc kubenswrapper[4789]: E0202 21:20:02.011788 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 21:20:03.011636851 +0000 UTC m=+23.306661870 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 21:20:02 crc kubenswrapper[4789]: E0202 21:20:02.011803 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:20:03.011795606 +0000 UTC m=+23.306820625 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:20:02 crc kubenswrapper[4789]: E0202 21:20:02.011813 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 21:20:03.011809136 +0000 UTC m=+23.306834155 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 21:20:02 crc kubenswrapper[4789]: E0202 21:20:02.011816 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 21:20:02 crc kubenswrapper[4789]: E0202 21:20:02.011849 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 21:20:02 crc kubenswrapper[4789]: E0202 21:20:02.011863 4789 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 21:20:02 crc kubenswrapper[4789]: E0202 21:20:02.011926 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 21:20:03.011910069 +0000 UTC m=+23.306935088 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.423872 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.424650 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.425554 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.426293 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.427028 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.427692 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.428369 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.429056 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.429790 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.432262 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.432853 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.434049 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.434638 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.435258 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.436300 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.436919 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.437996 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.438436 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.439342 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.440508 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.441051 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.442156 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.442693 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.443477 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.444372 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.445100 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.446314 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.446910 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.448369 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.449028 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.449682 4789 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.450317 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.452527 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.453101 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.453887 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.455318 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.455989 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.456862 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.457451 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.458409 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.458941 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.459927 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.460498 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.461396 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.461844 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.462701 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.463171 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.464251 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.464714 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.465482 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.465934 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.466846 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.467411 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.467886 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.740211 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 
23:44:04.719700323 +0000 UTC Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.865646 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82"} Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.865711 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310"} Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.867410 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-6l576" event={"ID":"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb","Type":"ContainerStarted","Data":"3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07"} Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.869413 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5"} Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.871352 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.871859 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.873750 4789 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7" exitCode=255 Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.873781 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7"} Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.873879 4789 scope.go:117] "RemoveContainer" containerID="d3b4af558134618b21a9bc8c039a224f70bd73a606dd1f3965ec71961c7cd675" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.883705 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:02Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.903624 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:02Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.919004 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:20:02 crc kubenswrapper[4789]: E0202 21:20:02.919145 4789 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 21:20:02 crc kubenswrapper[4789]: E0202 21:20:02.919272 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 21:20:04.919244055 +0000 UTC m=+25.214269114 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.920247 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:02Z is after 2025-08-24T17:21:41Z"
Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.932052 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:02Z is after 2025-08-24T17:21:41Z"
Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.944871 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:02Z is after 2025-08-24T17:21:41Z"
Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.957016 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:02Z is after 2025-08-24T17:21:41Z"
Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.968803 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:02Z is after 2025-08-24T17:21:41Z"
Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.989298 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:02Z is after 2025-08-24T17:21:41Z"
Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.992606 4789 scope.go:117] "RemoveContainer" containerID="5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7"
Feb 02 21:20:02 crc kubenswrapper[4789]: E0202 21:20:02.992800 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 02 21:20:02 crc kubenswrapper[4789]: I0202 21:20:02.994781 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.013107 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:03Z is after 2025-08-24T17:21:41Z"
Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.020206 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.020263 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.020288 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.020347 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 21:20:03 crc kubenswrapper[4789]: E0202 21:20:03.020477 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:20:05.020437084 +0000 UTC m=+25.315462143 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:20:03 crc kubenswrapper[4789]: E0202 21:20:03.020706 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 02 21:20:03 crc kubenswrapper[4789]: E0202 21:20:03.020727 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 02 21:20:03 crc kubenswrapper[4789]: E0202 21:20:03.020738 4789 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 21:20:03 crc kubenswrapper[4789]: E0202 21:20:03.020775 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 21:20:05.020766183 +0000 UTC m=+25.315791202 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 21:20:03 crc kubenswrapper[4789]: E0202 21:20:03.021076 4789 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 02 21:20:03 crc kubenswrapper[4789]: E0202 21:20:03.021111 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 21:20:05.021103973 +0000 UTC m=+25.316128982 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 02 21:20:03 crc kubenswrapper[4789]: E0202 21:20:03.021552 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 02 21:20:03 crc kubenswrapper[4789]: E0202 21:20:03.021572 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 02 21:20:03 crc kubenswrapper[4789]: E0202 21:20:03.021601 4789 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 21:20:03 crc kubenswrapper[4789]: E0202 21:20:03.021626 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 21:20:05.021617808 +0000 UTC m=+25.316642827 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.023492 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:03Z is after 2025-08-24T17:21:41Z"
Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.038414 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:03Z is after 2025-08-24T17:21:41Z"
Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.048696 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:03Z is after 2025-08-24T17:21:41Z"
Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.060938 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:03Z is after 2025-08-24T17:21:41Z"
Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.072409 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:03Z is after 2025-08-24T17:21:41Z"
Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.418560 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.418660 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 21:20:03 crc kubenswrapper[4789]: E0202 21:20:03.418705 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.418735 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 21:20:03 crc kubenswrapper[4789]: E0202 21:20:03.418801 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 21:20:03 crc kubenswrapper[4789]: E0202 21:20:03.418874 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.429536 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-dsv6b"]
Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.430241 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-c8vcn"]
Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.430397 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-dsv6b"
Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.430787 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn"
Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.431531 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-2x5ws"]
Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.431904 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-2x5ws"
Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.434012 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.434085 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.434219 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.435139 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.435266 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.435327 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.435339 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.436548 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.436555 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.436733 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.436809 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.437789 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.452916 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3b4af558134618b21a9bc8c039a224f70bd73a606dd1f3965ec71961c7cd675\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:19:54Z\\\",\\\"message\\\":\\\"W0202 21:19:44.191403 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 21:19:44.191886 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770067184 cert, and key in /tmp/serving-cert-1595180290/serving-signer.crt, /tmp/serving-cert-1595180290/serving-signer.key\\\\nI0202 21:19:44.432693 1 observer_polling.go:159] Starting file observer\\\\nW0202 21:19:44.436054 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 21:19:44.436266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 21:19:44.440363 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1595180290/tls.crt::/tmp/serving-cert-1595180290/tls.key\\\\\\\"\\\\nF0202 21:19:54.825764 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:03Z is after 2025-08-24T17:21:41Z"
Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.474504 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:03Z is after 2025-08-24T17:21:41Z"
Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.485945 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:03Z is after 2025-08-24T17:21:41Z"
Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.498395 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:03Z is after 2025-08-24T17:21:41Z"
Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.511301 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:03Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.522889 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:03Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.524210 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/bcbed546-a1c3-4ba4-96b4-61471010b1c2-cnibin\") pod \"multus-additional-cni-plugins-dsv6b\" (UID: \"bcbed546-a1c3-4ba4-96b4-61471010b1c2\") " pod="openshift-multus/multus-additional-cni-plugins-dsv6b" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.524259 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bdf018b4-1451-4d37-be6e-05802b67c73e-mcd-auth-proxy-config\") pod \"machine-config-daemon-c8vcn\" (UID: \"bdf018b4-1451-4d37-be6e-05802b67c73e\") " pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.524286 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-host-run-k8s-cni-cncf-io\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " 
pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.524307 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-host-var-lib-cni-bin\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.524327 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-host-var-lib-cni-multus\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.524415 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-etc-kubernetes\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.524465 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-multus-conf-dir\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.524484 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-host-var-lib-kubelet\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.524510 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-multus-socket-dir-parent\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.524527 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf446\" (UniqueName: \"kubernetes.io/projected/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-kube-api-access-nf446\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.524625 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/bdf018b4-1451-4d37-be6e-05802b67c73e-rootfs\") pod \"machine-config-daemon-c8vcn\" (UID: \"bdf018b4-1451-4d37-be6e-05802b67c73e\") " pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.524671 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bdf018b4-1451-4d37-be6e-05802b67c73e-proxy-tls\") pod \"machine-config-daemon-c8vcn\" (UID: 
\"bdf018b4-1451-4d37-be6e-05802b67c73e\") " pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.524728 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/bcbed546-a1c3-4ba4-96b4-61471010b1c2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dsv6b\" (UID: \"bcbed546-a1c3-4ba4-96b4-61471010b1c2\") " pod="openshift-multus/multus-additional-cni-plugins-dsv6b" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.524757 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5726m\" (UniqueName: \"kubernetes.io/projected/bcbed546-a1c3-4ba4-96b4-61471010b1c2-kube-api-access-5726m\") pod \"multus-additional-cni-plugins-dsv6b\" (UID: \"bcbed546-a1c3-4ba4-96b4-61471010b1c2\") " pod="openshift-multus/multus-additional-cni-plugins-dsv6b" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.524782 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-host-run-multus-certs\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.524803 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-system-cni-dir\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.524819 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-host-run-netns\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.524834 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-hostroot\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.524867 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-multus-daemon-config\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.524915 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-os-release\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.524943 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-cni-binary-copy\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.524964 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-cnibin\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.524998 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-multus-cni-dir\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.525036 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwk2n\" (UniqueName: \"kubernetes.io/projected/bdf018b4-1451-4d37-be6e-05802b67c73e-kube-api-access-wwk2n\") pod \"machine-config-daemon-c8vcn\" (UID: \"bdf018b4-1451-4d37-be6e-05802b67c73e\") " pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.525058 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/bcbed546-a1c3-4ba4-96b4-61471010b1c2-cni-binary-copy\") pod \"multus-additional-cni-plugins-dsv6b\" (UID: \"bcbed546-a1c3-4ba4-96b4-61471010b1c2\") " pod="openshift-multus/multus-additional-cni-plugins-dsv6b" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.525104 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bcbed546-a1c3-4ba4-96b4-61471010b1c2-system-cni-dir\") pod \"multus-additional-cni-plugins-dsv6b\" (UID: \"bcbed546-a1c3-4ba4-96b4-61471010b1c2\") " pod="openshift-multus/multus-additional-cni-plugins-dsv6b" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.525141 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/bcbed546-a1c3-4ba4-96b4-61471010b1c2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dsv6b\" (UID: \"bcbed546-a1c3-4ba4-96b4-61471010b1c2\") " pod="openshift-multus/multus-additional-cni-plugins-dsv6b" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.525189 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/bcbed546-a1c3-4ba4-96b4-61471010b1c2-os-release\") pod \"multus-additional-cni-plugins-dsv6b\" (UID: \"bcbed546-a1c3-4ba4-96b4-61471010b1c2\") " pod="openshift-multus/multus-additional-cni-plugins-dsv6b" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.533276 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:03Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.543149 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:03Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.555680 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:03Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.575632 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:03Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.589456 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:03Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.600264 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:03Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.614070 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:03Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.625644 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/bdf018b4-1451-4d37-be6e-05802b67c73e-rootfs\") pod \"machine-config-daemon-c8vcn\" (UID: \"bdf018b4-1451-4d37-be6e-05802b67c73e\") " pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.625684 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bdf018b4-1451-4d37-be6e-05802b67c73e-proxy-tls\") pod \"machine-config-daemon-c8vcn\" (UID: \"bdf018b4-1451-4d37-be6e-05802b67c73e\") " pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.625708 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/bcbed546-a1c3-4ba4-96b4-61471010b1c2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dsv6b\" (UID: \"bcbed546-a1c3-4ba4-96b4-61471010b1c2\") " pod="openshift-multus/multus-additional-cni-plugins-dsv6b" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.625733 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5726m\" (UniqueName: \"kubernetes.io/projected/bcbed546-a1c3-4ba4-96b4-61471010b1c2-kube-api-access-5726m\") pod \"multus-additional-cni-plugins-dsv6b\" (UID: \"bcbed546-a1c3-4ba4-96b4-61471010b1c2\") " pod="openshift-multus/multus-additional-cni-plugins-dsv6b" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.625757 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-system-cni-dir\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " 
pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.625778 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-host-run-netns\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.625800 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-hostroot\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.625822 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-host-run-multus-certs\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.625825 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/bdf018b4-1451-4d37-be6e-05802b67c73e-rootfs\") pod \"machine-config-daemon-c8vcn\" (UID: \"bdf018b4-1451-4d37-be6e-05802b67c73e\") " pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.625753 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:03Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.625846 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-os-release\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.625920 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-cni-binary-copy\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.625940 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-multus-daemon-config\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.625974 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-multus-cni-dir\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.625991 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-cnibin\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.626026 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwk2n\" (UniqueName: \"kubernetes.io/projected/bdf018b4-1451-4d37-be6e-05802b67c73e-kube-api-access-wwk2n\") pod \"machine-config-daemon-c8vcn\" (UID: \"bdf018b4-1451-4d37-be6e-05802b67c73e\") " pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.626038 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-hostroot\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.626044 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/bcbed546-a1c3-4ba4-96b4-61471010b1c2-cni-binary-copy\") pod \"multus-additional-cni-plugins-dsv6b\" (UID: \"bcbed546-a1c3-4ba4-96b4-61471010b1c2\") " pod="openshift-multus/multus-additional-cni-plugins-dsv6b" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.626104 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bcbed546-a1c3-4ba4-96b4-61471010b1c2-system-cni-dir\") pod \"multus-additional-cni-plugins-dsv6b\" (UID: \"bcbed546-a1c3-4ba4-96b4-61471010b1c2\") " pod="openshift-multus/multus-additional-cni-plugins-dsv6b" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.626110 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-host-run-multus-certs\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.626129 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/bcbed546-a1c3-4ba4-96b4-61471010b1c2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dsv6b\" (UID: \"bcbed546-a1c3-4ba4-96b4-61471010b1c2\") " pod="openshift-multus/multus-additional-cni-plugins-dsv6b" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.626159 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/bcbed546-a1c3-4ba4-96b4-61471010b1c2-os-release\") pod \"multus-additional-cni-plugins-dsv6b\" (UID: \"bcbed546-a1c3-4ba4-96b4-61471010b1c2\") " pod="openshift-multus/multus-additional-cni-plugins-dsv6b" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.626179 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-os-release\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.626193 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/bcbed546-a1c3-4ba4-96b4-61471010b1c2-cnibin\") pod \"multus-additional-cni-plugins-dsv6b\" (UID: \"bcbed546-a1c3-4ba4-96b4-61471010b1c2\") " pod="openshift-multus/multus-additional-cni-plugins-dsv6b" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.626220 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/bcbed546-a1c3-4ba4-96b4-61471010b1c2-cnibin\") pod \"multus-additional-cni-plugins-dsv6b\" (UID: \"bcbed546-a1c3-4ba4-96b4-61471010b1c2\") " pod="openshift-multus/multus-additional-cni-plugins-dsv6b" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.626238 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-host-run-k8s-cni-cncf-io\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.626250 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bcbed546-a1c3-4ba4-96b4-61471010b1c2-system-cni-dir\") pod \"multus-additional-cni-plugins-dsv6b\" (UID: \"bcbed546-a1c3-4ba4-96b4-61471010b1c2\") " pod="openshift-multus/multus-additional-cni-plugins-dsv6b" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.626262 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-host-var-lib-cni-bin\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.626284 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-host-var-lib-cni-multus\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.626305 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-etc-kubernetes\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.626325 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bdf018b4-1451-4d37-be6e-05802b67c73e-mcd-auth-proxy-config\") pod \"machine-config-daemon-c8vcn\" (UID: \"bdf018b4-1451-4d37-be6e-05802b67c73e\") " pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.626348 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-multus-conf-dir\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.626369 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-host-var-lib-kubelet\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.626347 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-multus-cni-dir\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.626390 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-multus-socket-dir-parent\") pod \"multus-2x5ws\" (UID: 
\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.626411 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf446\" (UniqueName: \"kubernetes.io/projected/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-kube-api-access-nf446\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.626416 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-cnibin\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.626571 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/bcbed546-a1c3-4ba4-96b4-61471010b1c2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dsv6b\" (UID: \"bcbed546-a1c3-4ba4-96b4-61471010b1c2\") " pod="openshift-multus/multus-additional-cni-plugins-dsv6b" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.626804 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-multus-socket-dir-parent\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.626699 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-host-run-k8s-cni-cncf-io\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.625852 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-system-cni-dir\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.626722 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-host-var-lib-cni-bin\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.626745 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-multus-conf-dir\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.626766 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-host-var-lib-kubelet\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.626705 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/bcbed546-a1c3-4ba4-96b4-61471010b1c2-cni-binary-copy\") pod \"multus-additional-cni-plugins-dsv6b\" (UID: \"bcbed546-a1c3-4ba4-96b4-61471010b1c2\") " pod="openshift-multus/multus-additional-cni-plugins-dsv6b" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.626855 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-etc-kubernetes\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.626847 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-host-var-lib-cni-multus\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.626859 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/bcbed546-a1c3-4ba4-96b4-61471010b1c2-os-release\") pod \"multus-additional-cni-plugins-dsv6b\" (UID: \"bcbed546-a1c3-4ba4-96b4-61471010b1c2\") " pod="openshift-multus/multus-additional-cni-plugins-dsv6b" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.627144 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/bcbed546-a1c3-4ba4-96b4-61471010b1c2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dsv6b\" (UID: \"bcbed546-a1c3-4ba4-96b4-61471010b1c2\") " pod="openshift-multus/multus-additional-cni-plugins-dsv6b" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.627202 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-host-run-netns\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.627241 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-multus-daemon-config\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.627296 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-cni-binary-copy\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.627345 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bdf018b4-1451-4d37-be6e-05802b67c73e-mcd-auth-proxy-config\") pod \"machine-config-daemon-c8vcn\" (UID: \"bdf018b4-1451-4d37-be6e-05802b67c73e\") " pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.634460 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bdf018b4-1451-4d37-be6e-05802b67c73e-proxy-tls\") pod 
\"machine-config-daemon-c8vcn\" (UID: \"bdf018b4-1451-4d37-be6e-05802b67c73e\") " pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.644376 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf446\" (UniqueName: \"kubernetes.io/projected/70a32268-2a2d-47f3-9fc6-4281b8dc6a02-kube-api-access-nf446\") pod \"multus-2x5ws\" (UID: \"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\") " pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.646902 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5726m\" (UniqueName: \"kubernetes.io/projected/bcbed546-a1c3-4ba4-96b4-61471010b1c2-kube-api-access-5726m\") pod \"multus-additional-cni-plugins-dsv6b\" (UID: \"bcbed546-a1c3-4ba4-96b4-61471010b1c2\") " pod="openshift-multus/multus-additional-cni-plugins-dsv6b" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.647448 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwk2n\" (UniqueName: \"kubernetes.io/projected/bdf018b4-1451-4d37-be6e-05802b67c73e-kube-api-access-wwk2n\") pod \"machine-config-daemon-c8vcn\" (UID: \"bdf018b4-1451-4d37-be6e-05802b67c73e\") " pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.647437 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:03Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.664384 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:03Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.677401 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:03Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.688170 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:03Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.698980 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3b4af558134618b21a9bc8c039a224f70bd73a606dd1f3965ec71961c7cd675\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:19:54Z\\\",\\\"message\\\":\\\"W0202 21:19:44.191403 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 
21:19:44.191886 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770067184 cert, and key in /tmp/serving-cert-1595180290/serving-signer.crt, /tmp/serving-cert-1595180290/serving-signer.key\\\\nI0202 21:19:44.432693 1 observer_polling.go:159] Starting file observer\\\\nW0202 21:19:44.436054 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 21:19:44.436266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 21:19:44.440363 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1595180290/tls.crt::/tmp/serving-cert-1595180290/tls.key\\\\\\\"\\\\nF0202 21:19:54.825764 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] 
Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:03Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.711419 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:03Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.740965 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 09:48:53.874024722 +0000 UTC Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.742179 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.749276 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.754933 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-2x5ws" Feb 02 21:20:03 crc kubenswrapper[4789]: W0202 21:20:03.766842 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbdf018b4_1451_4d37_be6e_05802b67c73e.slice/crio-6396987fede852f549e2d4ae9daab8d83b76fbf6e5d071d3dc3a77afdde29f03 WatchSource:0}: Error finding container 6396987fede852f549e2d4ae9daab8d83b76fbf6e5d071d3dc3a77afdde29f03: Status 404 returned error can't find the container with id 6396987fede852f549e2d4ae9daab8d83b76fbf6e5d071d3dc3a77afdde29f03 Feb 02 21:20:03 crc kubenswrapper[4789]: W0202 21:20:03.769493 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbcbed546_a1c3_4ba4_96b4_61471010b1c2.slice/crio-f8744fae6dd9167dc99fa54e0789e586ccc439923ab94cd0680459af56adea85 WatchSource:0}: Error finding container f8744fae6dd9167dc99fa54e0789e586ccc439923ab94cd0680459af56adea85: Status 404 returned error can't find the container with id f8744fae6dd9167dc99fa54e0789e586ccc439923ab94cd0680459af56adea85 Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.819071 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-w8vkt"] Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.820908 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.822557 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.823509 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.827409 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.827513 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.828077 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.828779 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.828856 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.841021 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:03Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.857955 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:03Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.880829 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:03Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.885782 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" event={"ID":"bdf018b4-1451-4d37-be6e-05802b67c73e","Type":"ContainerStarted","Data":"6396987fede852f549e2d4ae9daab8d83b76fbf6e5d071d3dc3a77afdde29f03"} Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.888098 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.891003 4789 scope.go:117] "RemoveContainer" containerID="5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7" Feb 02 21:20:03 crc kubenswrapper[4789]: E0202 21:20:03.891171 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.892105 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" event={"ID":"bcbed546-a1c3-4ba4-96b4-61471010b1c2","Type":"ContainerStarted","Data":"f8744fae6dd9167dc99fa54e0789e586ccc439923ab94cd0680459af56adea85"} Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.893054 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2x5ws" event={"ID":"70a32268-2a2d-47f3-9fc6-4281b8dc6a02","Type":"ContainerStarted","Data":"4cfa6543738cb09fc162dcf8ee227983f52d7a3b3589b1b96eeda042709fd77e"} Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.896118 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:03Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.911818 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:03Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.928208 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-systemd-units\") pod \"ovnkube-node-w8vkt\" 
(UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.928241 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-etc-openvswitch\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.928259 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnmmf\" (UniqueName: \"kubernetes.io/projected/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-kube-api-access-bnmmf\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.928276 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-slash\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.928290 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-cni-netd\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.928313 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-run-netns\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.928329 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-env-overrides\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.928345 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.928363 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-run-openvswitch\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.928378 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"log-socket\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-log-socket\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.928394 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-ovnkube-script-lib\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.928416 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-kubelet\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.928430 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-run-systemd\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.928444 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-run-ovn-kubernetes\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.928458 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-node-log\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.928473 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-var-lib-openvswitch\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.928489 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-ovnkube-config\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.928512 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-ovn-node-metrics-cert\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.928531 4789 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-run-ovn\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.928545 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-cni-bin\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.939262 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\
"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3b4af558134618b21a9bc8c039a224f70bd73a606dd1f3965ec71961c7cd675\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:19:54Z\\\",\\\"message\\\":\\\"W0202 21:19:44.191403 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 21:19:44.191886 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770067184 cert, and key in /tmp/serving-cert-1595180290/serving-signer.crt, /tmp/serving-cert-1595180290/serving-signer.key\\\\nI0202 21:19:44.432693 1 observer_polling.go:159] Starting file observer\\\\nW0202 21:19:44.436054 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 21:19:44.436266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 21:19:44.440363 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1595180290/tls.crt::/tmp/serving-cert-1595180290/tls.key\\\\\\\"\\\\nF0202 21:19:54.825764 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:03Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.957164 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T21:20:03Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:03 crc kubenswrapper[4789]: I0202 21:20:03.978948 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\
":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:03Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.001363 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:03Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.028942 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-kubelet\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.028980 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-run-systemd\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 
21:20:04.028995 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-run-ovn-kubernetes\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.029023 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-node-log\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.029040 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-var-lib-openvswitch\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.029062 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-ovnkube-config\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.029096 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-ovn-node-metrics-cert\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.029114 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-run-ovn\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.029130 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-cni-bin\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.029149 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-systemd-units\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.029164 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-etc-openvswitch\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.029180 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-slash\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.029195 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-cni-netd\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.029208 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnmmf\" (UniqueName: \"kubernetes.io/projected/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-kube-api-access-bnmmf\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.029229 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-run-netns\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.029242 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-env-overrides\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.029266 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.029295 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-run-openvswitch\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.029320 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-log-socket\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.029335 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-ovnkube-script-lib\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.029562 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-systemd-units\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.029628 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-kubelet\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.029652 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-run-systemd\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.029675 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-run-ovn-kubernetes\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.029667 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-etc-openvswitch\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.029704 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-slash\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.029760 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-cni-netd\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.029896 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-node-log\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.029924 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-var-lib-openvswitch\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.029963 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-run-openvswitch\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.030151 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-log-socket\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.030170 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-run-netns\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.030519 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-ovnkube-config\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.030556 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-run-ovn\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.030599 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-cni-bin\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.030611 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-env-overrides\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.030769 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-ovnkube-script-lib\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.030438 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.035881 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-ovn-node-metrics-cert\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.044765 4789 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:04Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.047177 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnmmf\" (UniqueName: \"kubernetes.io/projected/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-kube-api-access-bnmmf\") pod \"ovnkube-node-w8vkt\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.070777 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:04Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.083087 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:04Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.095266 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:04Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.103649 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:04Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.117191 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:04Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.127056 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:04Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.136972 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:04Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.146267 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:04Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.157999 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:04Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.167729 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:04Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.177976 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.181125 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-sync
er\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting 
failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:04Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:04 crc kubenswrapper[4789]: W0202 21:20:04.193747 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e38c22e_bcd6_4aa8_89e3_b02b691c8fd6.slice/crio-86a6091cd023dfaf81ce0bb5e71d75ae4bf89cf422d491ec479ccaacb3d2e3bf WatchSource:0}: Error finding container 86a6091cd023dfaf81ce0bb5e71d75ae4bf89cf422d491ec479ccaacb3d2e3bf: Status 404 returned error can't find the container with id 86a6091cd023dfaf81ce0bb5e71d75ae4bf89cf422d491ec479ccaacb3d2e3bf Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.194225 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:04Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.214807 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:04Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.227226 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:04Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.742007 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 05:48:45.42371839 +0000 UTC Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.899541 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd"} Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.903808 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" event={"ID":"bdf018b4-1451-4d37-be6e-05802b67c73e","Type":"ContainerStarted","Data":"ad1716ba11e1b21eb68642e6935312760c389a669a007822290fbe72573bfaad"} Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.903885 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" event={"ID":"bdf018b4-1451-4d37-be6e-05802b67c73e","Type":"ContainerStarted","Data":"b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885"} Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.905949 4789 generic.go:334] "Generic (PLEG): container finished" podID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerID="e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355" exitCode=0 Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.906073 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" event={"ID":"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6","Type":"ContainerDied","Data":"e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355"} Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.906124 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" event={"ID":"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6","Type":"ContainerStarted","Data":"86a6091cd023dfaf81ce0bb5e71d75ae4bf89cf422d491ec479ccaacb3d2e3bf"} Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.908897 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2x5ws" event={"ID":"70a32268-2a2d-47f3-9fc6-4281b8dc6a02","Type":"ContainerStarted","Data":"9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949"} Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.911533 4789 generic.go:334] "Generic (PLEG): container finished" podID="bcbed546-a1c3-4ba4-96b4-61471010b1c2" containerID="e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c" exitCode=0 Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.911617 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" event={"ID":"bcbed546-a1c3-4ba4-96b4-61471010b1c2","Type":"ContainerDied","Data":"e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c"} Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.926394 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:04Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.937281 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:20:04 crc kubenswrapper[4789]: E0202 21:20:04.937443 4789 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 21:20:04 crc kubenswrapper[4789]: E0202 21:20:04.937540 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 21:20:08.937517338 +0000 UTC m=+29.232542387 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.952961 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:04Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.968372 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:04Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:04 crc kubenswrapper[4789]: I0202 21:20:04.988840 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:04Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.003291 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:05Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.015254 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:05Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.029126 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:05Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.037805 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.037927 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:20:05 crc kubenswrapper[4789]: E0202 21:20:05.038179 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:20:09.03814253 +0000 UTC m=+29.333167569 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:20:05 crc kubenswrapper[4789]: E0202 21:20:05.038200 4789 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 21:20:05 crc kubenswrapper[4789]: E0202 21:20:05.038320 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 21:20:09.038290625 +0000 UTC m=+29.333315684 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.038476 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:20:05 crc kubenswrapper[4789]: E0202 21:20:05.038604 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 21:20:05 crc kubenswrapper[4789]: E0202 21:20:05.038646 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 21:20:05 crc kubenswrapper[4789]: E0202 21:20:05.038657 4789 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 21:20:05 crc kubenswrapper[4789]: E0202 21:20:05.038795 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 21:20:05 crc kubenswrapper[4789]: E0202 21:20:05.038836 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 21:20:05 crc kubenswrapper[4789]: E0202 21:20:05.038856 4789 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 21:20:05 crc kubenswrapper[4789]: E0202 21:20:05.038868 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 21:20:09.038854211 +0000 UTC m=+29.333879230 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.038647 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:20:05 crc kubenswrapper[4789]: E0202 21:20:05.038940 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 21:20:09.038911523 +0000 UTC m=+29.333936602 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.042156 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:05Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.057135 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:05Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.071723 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:05Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.087476 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T21:20:05Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.108093 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\
":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:05Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.127125 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:05Z is after 
2025-08-24T17:21:41Z" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.146342 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:05Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.161974 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:05Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.179347 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:05Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.195069 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1716ba11e1b21eb68642e6935312760c389a669a007822290fbe72573bfaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea17
7225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:05Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.210198 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:05Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.227020 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:05Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.252683 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:05Z 
is after 2025-08-24T17:21:41Z" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.265885 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:05Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.284085 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:05Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.296017 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:05Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.309719 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:05Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.418763 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.418882 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:20:05 crc kubenswrapper[4789]: E0202 21:20:05.418932 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:20:05 crc kubenswrapper[4789]: E0202 21:20:05.419000 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.419049 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:20:05 crc kubenswrapper[4789]: E0202 21:20:05.419092 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.736391 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-wlsw6"] Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.736747 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-wlsw6" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.739288 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.739331 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.739449 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.740218 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.742140 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 21:36:32.244505396 +0000 UTC Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.754752 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:05Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.769000 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:05Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.783432 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:05Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.793947 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:05Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.815009 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:05Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.831012 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:05Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.844027 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:05Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.851034 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snjrv\" (UniqueName: \"kubernetes.io/projected/1b25d791-42e1-4e08-b7da-41803cc40f4a-kube-api-access-snjrv\") pod \"node-ca-wlsw6\" (UID: \"1b25d791-42e1-4e08-b7da-41803cc40f4a\") " pod="openshift-image-registry/node-ca-wlsw6" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.851117 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/1b25d791-42e1-4e08-b7da-41803cc40f4a-serviceca\") pod \"node-ca-wlsw6\" (UID: \"1b25d791-42e1-4e08-b7da-41803cc40f4a\") " pod="openshift-image-registry/node-ca-wlsw6" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.851140 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1b25d791-42e1-4e08-b7da-41803cc40f4a-host\") pod \"node-ca-wlsw6\" (UID: \"1b25d791-42e1-4e08-b7da-41803cc40f4a\") " pod="openshift-image-registry/node-ca-wlsw6" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.861116 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1716ba11e1b21eb68642e6935312760c389a669a007822290fbe72573bfaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:05Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.877830 4789 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:05Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.899407 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:05Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.917297 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" event={"ID":"bcbed546-a1c3-4ba4-96b4-61471010b1c2","Type":"ContainerStarted","Data":"92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955"} Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.920073 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" event={"ID":"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6","Type":"ContainerStarted","Data":"fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601"} Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.920220 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" event={"ID":"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6","Type":"ContainerStarted","Data":"021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583"} Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.949737 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:05Z 
is after 2025-08-24T17:21:41Z" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.952254 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1b25d791-42e1-4e08-b7da-41803cc40f4a-host\") pod \"node-ca-wlsw6\" (UID: \"1b25d791-42e1-4e08-b7da-41803cc40f4a\") " pod="openshift-image-registry/node-ca-wlsw6" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.952308 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snjrv\" (UniqueName: \"kubernetes.io/projected/1b25d791-42e1-4e08-b7da-41803cc40f4a-kube-api-access-snjrv\") pod \"node-ca-wlsw6\" (UID: \"1b25d791-42e1-4e08-b7da-41803cc40f4a\") " pod="openshift-image-registry/node-ca-wlsw6" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.952373 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/1b25d791-42e1-4e08-b7da-41803cc40f4a-serviceca\") pod \"node-ca-wlsw6\" (UID: \"1b25d791-42e1-4e08-b7da-41803cc40f4a\") " pod="openshift-image-registry/node-ca-wlsw6" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.952570 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1b25d791-42e1-4e08-b7da-41803cc40f4a-host\") pod \"node-ca-wlsw6\" (UID: \"1b25d791-42e1-4e08-b7da-41803cc40f4a\") " pod="openshift-image-registry/node-ca-wlsw6" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.953454 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/1b25d791-42e1-4e08-b7da-41803cc40f4a-serviceca\") pod \"node-ca-wlsw6\" (UID: \"1b25d791-42e1-4e08-b7da-41803cc40f4a\") " pod="openshift-image-registry/node-ca-wlsw6" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.961912 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b25d791-42e1-4e08-b7da-41803cc40f4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-snjrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:05Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.975183 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir
\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:05Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.983650 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snjrv\" (UniqueName: \"kubernetes.io/projected/1b25d791-42e1-4e08-b7da-41803cc40f4a-kube-api-access-snjrv\") pod \"node-ca-wlsw6\" (UID: \"1b25d791-42e1-4e08-b7da-41803cc40f4a\") " pod="openshift-image-registry/node-ca-wlsw6" Feb 02 21:20:05 crc kubenswrapper[4789]: I0202 21:20:05.989540 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:05Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:06 crc kubenswrapper[4789]: I0202 21:20:06.003667 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:06Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:06 crc kubenswrapper[4789]: I0202 21:20:06.014366 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:06Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:06 crc kubenswrapper[4789]: I0202 21:20:06.034936 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:06Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:06 crc kubenswrapper[4789]: I0202 21:20:06.045094 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1716ba11e1b21eb68642e6935312760c389a669a007822290fbe72573bfaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:06Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:06 crc kubenswrapper[4789]: I0202 21:20:06.054891 4789 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:06Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:06 crc kubenswrapper[4789]: I0202 21:20:06.067342 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:06Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:06 crc kubenswrapper[4789]: I0202 21:20:06.080975 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:06Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:06 crc kubenswrapper[4789]: I0202 21:20:06.093734 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:06Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:06 crc kubenswrapper[4789]: I0202 21:20:06.103162 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b25d791-42e1-4e08-b7da-41803cc40f4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-snjrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:06Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:06 crc kubenswrapper[4789]: I0202 21:20:06.117371 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":
\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:06Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:06 crc kubenswrapper[4789]: I0202 21:20:06.137256 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\
\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\
\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:06Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:06 crc kubenswrapper[4789]: I0202 21:20:06.167626 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:06Z 
is after 2025-08-24T17:21:41Z"
Feb 02 21:20:06 crc kubenswrapper[4789]: I0202 21:20:06.273663 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-wlsw6"
Feb 02 21:20:06 crc kubenswrapper[4789]: W0202 21:20:06.285046 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b25d791_42e1_4e08_b7da_41803cc40f4a.slice/crio-bd97ec06e327b46a7383a684266d059266596553d6fd0e4f806e08ae1da1da28 WatchSource:0}: Error finding container bd97ec06e327b46a7383a684266d059266596553d6fd0e4f806e08ae1da1da28: Status 404 returned error can't find the container with id bd97ec06e327b46a7383a684266d059266596553d6fd0e4f806e08ae1da1da28
Feb 02 21:20:06 crc kubenswrapper[4789]: I0202 21:20:06.742281 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 00:53:08.14006051 +0000 UTC
Feb 02 21:20:06 crc kubenswrapper[4789]: I0202 21:20:06.926895 4789 generic.go:334] "Generic (PLEG): container finished" podID="bcbed546-a1c3-4ba4-96b4-61471010b1c2" containerID="92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955" exitCode=0
Feb 02 21:20:06 crc kubenswrapper[4789]: I0202 21:20:06.927025 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" event={"ID":"bcbed546-a1c3-4ba4-96b4-61471010b1c2","Type":"ContainerDied","Data":"92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955"}
Feb 02 21:20:06 crc kubenswrapper[4789]: I0202 21:20:06.931872 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" event={"ID":"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6","Type":"ContainerStarted","Data":"05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461"}
Feb 02 21:20:06 crc kubenswrapper[4789]: I0202 21:20:06.931936 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" event={"ID":"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6","Type":"ContainerStarted","Data":"9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea"}
Feb 02 21:20:06 crc kubenswrapper[4789]: I0202 21:20:06.931961 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" event={"ID":"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6","Type":"ContainerStarted","Data":"047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364"}
Feb 02 21:20:06 crc kubenswrapper[4789]: I0202 21:20:06.931980 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" event={"ID":"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6","Type":"ContainerStarted","Data":"29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103"}
Feb 02 21:20:06 crc kubenswrapper[4789]: I0202 21:20:06.933557 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-wlsw6" event={"ID":"1b25d791-42e1-4e08-b7da-41803cc40f4a","Type":"ContainerStarted","Data":"a0e2a17d3ade1aa229b2729d66fe953acc746673549e0d929076a3bd18839833"}
Feb 02 21:20:06 crc kubenswrapper[4789]: I0202 21:20:06.933636 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-wlsw6" event={"ID":"1b25d791-42e1-4e08-b7da-41803cc40f4a","Type":"ContainerStarted","Data":"bd97ec06e327b46a7383a684266d059266596553d6fd0e4f806e08ae1da1da28"}
Feb 02 21:20:06 crc kubenswrapper[4789]: I0202 21:20:06.950655
4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:06Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:06 crc kubenswrapper[4789]: I0202 21:20:06.968988 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1716ba11e1b21eb68642e6935312760c389a669a007822290fbe72573bfaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:06Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:06 crc kubenswrapper[4789]: I0202 21:20:06.981325 4789 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:06Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:06 crc kubenswrapper[4789]: I0202 21:20:06.993363 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:06Z is after 2025-08-24T17:21:41Z"
Feb 02 21:20:06 crc kubenswrapper[4789]: I0202 21:20:06.995282 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 21:20:06 crc kubenswrapper[4789]: I0202 21:20:06.997309 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:06 crc kubenswrapper[4789]: I0202 21:20:06.997349 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:06 crc kubenswrapper[4789]: I0202 21:20:06.997362 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:06 crc kubenswrapper[4789]: I0202 21:20:06.997510 4789 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 02 21:20:06 crc kubenswrapper[4789]: I0202 21:20:06.998652 4789 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 21:20:06 crc kubenswrapper[4789]: I0202 21:20:06.999204 4789 scope.go:117] "RemoveContainer" containerID="5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7"
Feb 02 21:20:06 crc kubenswrapper[4789]: E0202 21:20:06.999328 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.005284 4789 kubelet_node_status.go:115] "Node was previously registered" node="crc"
Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.005461 4789 kubelet_node_status.go:79] "Successfully registered node" node="crc"
Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.006504 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:07 crc
kubenswrapper[4789]: I0202 21:20:07.006564 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.006594 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.006611 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.006623 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:07Z","lastTransitionTime":"2026-02-02T21:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.014004 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: E0202 21:20:07.023810 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"007d0037-9447-42ea-b3a4-6e1f0d669307\\\",\\\"systemUUID\\\":\\\"53ecbfdd-0b43-4d74-98ca-c7bcbc951d86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.028647 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.028681 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.028693 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.028710 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.028722 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:07Z","lastTransitionTime":"2026-02-02T21:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:07 crc kubenswrapper[4789]: E0202 21:20:07.041165 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"007d0037-9447-42ea-b3a4-6e1f0d669307\\\",\\\"systemUUID\\\":\\\"53ecbfdd-0b43-4d74-98ca-c7bcbc951d86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.044699 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.045570 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.045655 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.045675 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.045699 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.045719 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:07Z","lastTransitionTime":"2026-02-02T21:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.057613 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b25d791-42e1-4e08-b7da-41803cc40f4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-snjrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: E0202 21:20:07.065219 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"007d0037-9447-42ea-b3a4-6e1f0d669307\\\",\\\"systemUUID\\\":\\\"53ecbfdd-0b43-4d74-98ca-c7bcbc951d86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.068431 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.068475 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.068484 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.068501 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.068511 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:07Z","lastTransitionTime":"2026-02-02T21:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.075871 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: E0202 21:20:07.084321 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"007d0037-9447-42ea-b3a4-6e1f0d669307\\\",\\\"systemUUID\\\":\\\"53ecbfdd-0b43-4d74-98ca-c7bcbc951d86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.087566 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.087606 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.087616 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.087627 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.087637 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:07Z","lastTransitionTime":"2026-02-02T21:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.100275 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\
\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: E0202 21:20:07.102030 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8805
1c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"007d0037-9447-42ea-b3a4-6e1f0d669307\\\",\\\"systemUUID\\\":\\\"53ecbfdd-0b43-4d74-98ca-c7bcbc951d86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: E0202 21:20:07.102169 4789 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 21:20:07 crc 
kubenswrapper[4789]: I0202 21:20:07.103988 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.104026 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.104038 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.104056 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.104070 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:07Z","lastTransitionTime":"2026-02-02T21:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.113533 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",
\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.126765 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.140178 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.153854 4789 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-6l576" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.169306 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.179964 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.195965 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.207421 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.207460 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.207471 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.207486 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.207497 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:07Z","lastTransitionTime":"2026-02-02T21:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.214531 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.232005 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.244099 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.259203 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.274258 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1716ba11e1b21eb68642e6935312760c389a669a007822290fbe72573bfaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.287924 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.300653 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.310404 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.310470 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.310483 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.310524 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.310539 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:07Z","lastTransitionTime":"2026-02-02T21:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.320621 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc7
2fab5fe594f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.333435 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b25d791-42e1-4e08-b7da-41803cc40f4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e2a17d3ade1aa229b2729d66fe953acc746673549e0d929076a3bd18839833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-snjrv\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.345071 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":
\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.414368 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.414437 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.414461 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.414492 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.414515 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:07Z","lastTransitionTime":"2026-02-02T21:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.419045 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.419095 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.419163 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:20:07 crc kubenswrapper[4789]: E0202 21:20:07.419359 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:20:07 crc kubenswrapper[4789]: E0202 21:20:07.419481 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:20:07 crc kubenswrapper[4789]: E0202 21:20:07.419658 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.518067 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.518140 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.518166 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.518202 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.518225 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:07Z","lastTransitionTime":"2026-02-02T21:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.622569 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.622653 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.622674 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.622703 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.622728 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:07Z","lastTransitionTime":"2026-02-02T21:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.657281 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.663771 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.670424 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.675385 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.693920 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.710502 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.726776 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.727030 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.727170 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.727336 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.727468 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:07Z","lastTransitionTime":"2026-02-02T21:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.732167 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.743542 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 
03:00:20.804563773 +0000 UTC Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.749866 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1716ba11e1b21eb68642e6935312760c389a669a007822290fbe72573bfaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.774481 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d
7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.798313 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.826561 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z 
is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.830945 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.831129 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.831250 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.831369 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.831470 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:07Z","lastTransitionTime":"2026-02-02T21:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.843401 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b25d791-42e1-4e08-b7da-41803cc40f4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e2a17d3ade1aa229b2729d66fe953acc746673549e0d929076a3bd18839833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-snjrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-
02T21:20:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.864318 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"
etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.885773 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.900808 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.921770 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.934628 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.934699 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.934723 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.934778 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.934804 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:07Z","lastTransitionTime":"2026-02-02T21:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.939630 4789 generic.go:334] "Generic (PLEG): container finished" podID="bcbed546-a1c3-4ba4-96b4-61471010b1c2" containerID="75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a" exitCode=0 Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.939682 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" event={"ID":"bcbed546-a1c3-4ba4-96b4-61471010b1c2","Type":"ContainerDied","Data":"75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a"} Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.943345 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.967841 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.982740 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:07 crc kubenswrapper[4789]: I0202 21:20:07.995107 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1716ba11e1b21eb68642e6935312760c389a669a007822290fbe72573bfaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.009950 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.030927 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.038571 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.038636 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.038650 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.038671 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.038686 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:08Z","lastTransitionTime":"2026-02-02T21:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.049860 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"na
me\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54
319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.076898 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:08Z 
is after 2025-08-24T17:21:41Z" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.096292 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b25d791-42e1-4e08-b7da-41803cc40f4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e2a17d3ade1aa229b2729d66fe953acc746673549e0d929076a3bd18839833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-snjrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.116242 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fb5d89e-c901-4857-a5e8-b4d8e3701714\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d85613f300f8d008d919038b40efff3fb67e228e25959cbfd2424640c6bc7a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460e055777327e1734238bf51929751678ea5a757b00a344daa31c8c95e4c463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c292754c81dd4929f8e599e8ec352fd5e90431c60bb741c070a10ffa91fad72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3685c506f77d3807711a442d68da94932064737cfdc5cb74557da135906e24f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.128556 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.141395 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.141431 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.141445 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.141466 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.141481 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:08Z","lastTransitionTime":"2026-02-02T21:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.144542 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.154797 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.170078 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.186201 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fb5d89e-c901-4857-a5e8-b4d8e3701714\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d85613f300f8d008d919038b40efff3fb67e228e25959cbfd2424640c6bc7a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460e055777327e1734238bf51929751678ea5a757b00a344daa31c8c95e4c463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c292754c81dd4929f8e599e8ec352fd5e90431c60bb741c070a10ffa91fad72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3685c506f77d3807711a442d68da94932064737cfdc5cb74557da135906e24f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.204190 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.219994 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.229496 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.243812 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.245359 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.245390 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.245401 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.245417 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.245429 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:08Z","lastTransitionTime":"2026-02-02T21:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.259990 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.273836 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.286639 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.317435 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1716ba11e1b21eb68642e6935312760c389a669a007822290fbe72573bfaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea17
7225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.348469 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.348637 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.348719 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.348805 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.348876 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:08Z","lastTransitionTime":"2026-02-02T21:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.361317 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.404567 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.443367 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.452093 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.452121 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.452129 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.452145 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.452160 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:08Z","lastTransitionTime":"2026-02-02T21:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.491080 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc7
2fab5fe594f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.518293 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b25d791-42e1-4e08-b7da-41803cc40f4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e2a17d3ade1aa229b2729d66fe953acc746673549e0d929076a3bd18839833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-snjrv\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.559151 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.559209 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.559226 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.559249 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.559266 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:08Z","lastTransitionTime":"2026-02-02T21:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.662805 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.662870 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.662888 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.662913 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.662931 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:08Z","lastTransitionTime":"2026-02-02T21:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.744683 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 12:19:04.457450524 +0000 UTC Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.765942 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.765994 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.766011 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.766034 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.766052 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:08Z","lastTransitionTime":"2026-02-02T21:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.869619 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.869690 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.869712 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.869742 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.869762 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:08Z","lastTransitionTime":"2026-02-02T21:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.947737 4789 generic.go:334] "Generic (PLEG): container finished" podID="bcbed546-a1c3-4ba4-96b4-61471010b1c2" containerID="6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb" exitCode=0 Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.947849 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" event={"ID":"bcbed546-a1c3-4ba4-96b4-61471010b1c2","Type":"ContainerDied","Data":"6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb"} Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.971478 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" event={"ID":"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6","Type":"ContainerStarted","Data":"f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e"} Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.973699 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.973766 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.973792 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.973822 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.973845 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:08Z","lastTransitionTime":"2026-02-02T21:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.979371 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fb5d89e-c901-4857-a5e8-b4d8e3701714\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d85613f300f8d008d919038b40efff3fb67e228e25959cbfd2424640c6bc7a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460e055777327e1734238bf51929751678ea5a757b00a344daa31c8c95e4c463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c292754c81dd4929f8e599e8ec352fd5e90431c60bb741c070a10ffa91fad72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3685c506f77d3807711a442d68da94932064737cfdc5cb74557da135906e24f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:08 crc kubenswrapper[4789]: I0202 21:20:08.985764 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:20:08 crc kubenswrapper[4789]: E0202 21:20:08.985878 4789 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 21:20:08 crc kubenswrapper[4789]: E0202 21:20:08.985928 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 21:20:16.985913364 +0000 UTC m=+37.280938383 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.002261 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/
serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:09Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.022013 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:09Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.034791 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:09Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.052134 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:09Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.069384 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:09Z is after 2025-08-24T17:21:41Z"
Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.076076 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.076116 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.076128 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.076144 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.076155 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:09Z","lastTransitionTime":"2026-02-02T21:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.086551 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:20:09 crc kubenswrapper[4789]: E0202 21:20:09.086698 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:20:17.086678451 +0000 UTC m=+37.381703480 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.086742 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.086793 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.086821 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.086984 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:09Z is after 2025-08-24T17:21:41Z"
Feb 02 21:20:09 crc kubenswrapper[4789]: E0202 21:20:09.087041 4789 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 02 21:20:09 crc kubenswrapper[4789]: E0202 21:20:09.087091 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 02 21:20:09 crc kubenswrapper[4789]: E0202 21:20:09.087109 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 02 21:20:09 crc kubenswrapper[4789]: E0202 21:20:09.087121 4789 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 21:20:09 crc kubenswrapper[4789]: E0202 21:20:09.087095 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 21:20:17.087080082 +0000 UTC m=+37.382105121 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 02 21:20:09 crc kubenswrapper[4789]: E0202 21:20:09.087161 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 21:20:17.087150734 +0000 UTC m=+37.382175753 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 21:20:09 crc kubenswrapper[4789]: E0202 21:20:09.087283 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 02 21:20:09 crc kubenswrapper[4789]: E0202 21:20:09.087306 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 02 21:20:09 crc kubenswrapper[4789]: E0202 21:20:09.087321 4789 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 21:20:09 crc kubenswrapper[4789]: E0202 21:20:09.087372 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 21:20:17.08735417 +0000 UTC m=+37.382379199 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.108434 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:09Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.121420 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:09Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.135894 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1716ba11e1b21eb68642e6935312760c389a669a007822290fbe72573bfaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:09Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.150558 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:09Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.165912 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"w
aiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:09Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.179190 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.179231 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.179242 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.179258 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.179270 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:09Z","lastTransitionTime":"2026-02-02T21:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.188138 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc7
2fab5fe594f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:09Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.200554 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b25d791-42e1-4e08-b7da-41803cc40f4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e2a17d3ade1aa229b2729d66fe953acc746673549e0d929076a3bd18839833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-snjrv\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:09Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.282455 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.282502 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.282515 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.282536 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.282549 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:09Z","lastTransitionTime":"2026-02-02T21:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.386179 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.386254 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.386272 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.386296 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.386314 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:09Z","lastTransitionTime":"2026-02-02T21:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.419463 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.419481 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.419780 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:20:09 crc kubenswrapper[4789]: E0202 21:20:09.419679 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:20:09 crc kubenswrapper[4789]: E0202 21:20:09.419916 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:20:09 crc kubenswrapper[4789]: E0202 21:20:09.420056 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.489986 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.490231 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.490243 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.490260 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.490273 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:09Z","lastTransitionTime":"2026-02-02T21:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.594270 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.594336 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.594357 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.594382 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.594400 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:09Z","lastTransitionTime":"2026-02-02T21:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.696808 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.696847 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.696859 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.696875 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.696886 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:09Z","lastTransitionTime":"2026-02-02T21:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.744884 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 17:38:38.700980856 +0000 UTC Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.799939 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.799984 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.800001 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.800020 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.800037 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:09Z","lastTransitionTime":"2026-02-02T21:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.903439 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.903509 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.903534 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.903564 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.903621 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:09Z","lastTransitionTime":"2026-02-02T21:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:09 crc kubenswrapper[4789]: I0202 21:20:09.979503 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" event={"ID":"bcbed546-a1c3-4ba4-96b4-61471010b1c2","Type":"ContainerStarted","Data":"e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a"} Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.003508 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.006503 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.006543 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.006560 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.006604 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.006622 4789 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:10Z","lastTransitionTime":"2026-02-02T21:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.021096 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.042108 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.060950 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1716ba11e1b21eb68642e6935312760c389a669a007822290fbe72573bfaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.081145 4789 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.103985 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.109561 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.109646 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.109669 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.109696 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.109716 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:10Z","lastTransitionTime":"2026-02-02T21:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.127297 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.148322 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.165545 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlsw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b25d791-42e1-4e08-b7da-41803cc40f4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e2a17d3ade1aa229b2729d66fe953acc746673549e0d929076a3bd18839833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-snjrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.176437 4789 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.181228 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc/status\": read tcp 38.102.83.189:41184->38.102.83.189:6443: use of closed network connection" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.213778 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.213842 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.213859 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.213885 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.213902 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:10Z","lastTransitionTime":"2026-02-02T21:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.220874 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"exitCode\\\":0,\\\"finis
hedAt\\\":\\\"2026-02-02T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.253919 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.275396 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fb5d89e-c901-4857-a5e8-b4d8e3701714\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d85613f300f8d008d919038b40efff3fb67e228e25959cbfd2424640c6bc7a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460e055777327e1734238bf51929751678ea5a757b00a344daa31c8c95e4c463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de25971
26bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c292754c81dd4929f8e599e8ec352fd5e90431c60bb741c070a10ffa91fad72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3685c506f77d3807711a442d68da94932064737cfdc5cb74557da135906e24f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.297198 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.317111 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.317174 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.317191 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.317214 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.317228 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:10Z","lastTransitionTime":"2026-02-02T21:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.420989 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.421032 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.421044 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.421064 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.421077 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:10Z","lastTransitionTime":"2026-02-02T21:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.453544 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc7
2fab5fe594f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.470900 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b25d791-42e1-4e08-b7da-41803cc40f4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e2a17d3ade1aa229b2729d66fe953acc746673549e0d929076a3bd18839833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-snjrv\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.494468 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:1
9:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for 
RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.524420 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.524964 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 
21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.525028 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.525556 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.525704 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:10Z","lastTransitionTime":"2026-02-02T21:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.526404 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.541702 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fb5d89e-c901-4857-a5e8-b4d8e3701714\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d85613f300f8d008d919038b40efff3fb67e228e25959cbfd2424640c6bc7a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460e055777327e1734238bf51929751678ea5a757b00a344daa31c8c95e4c463\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c292754c81dd4929f8e599e8ec352fd5e90431c60bb741c070a10ffa91fad72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3685c506f77d3807711a442d68da94932064737cfdc5cb74557da135906e24f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.560949 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.575172 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.590154 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.605001 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.621390 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.630124 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.630178 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.630207 4789 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.630229 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.630245 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:10Z","lastTransitionTime":"2026-02-02T21:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.640382 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1716ba11e1b21eb68642e6935312760c389a669a007822290fbe72573bfaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.658784 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.673543 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.687443 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.733051 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.733128 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.733145 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.733167 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.733183 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:10Z","lastTransitionTime":"2026-02-02T21:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.745697 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 05:17:06.329407836 +0000 UTC Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.835953 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.836005 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.836023 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.836050 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.836072 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:10Z","lastTransitionTime":"2026-02-02T21:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.939471 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.939527 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.939544 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.939568 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.939628 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:10Z","lastTransitionTime":"2026-02-02T21:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.989744 4789 generic.go:334] "Generic (PLEG): container finished" podID="bcbed546-a1c3-4ba4-96b4-61471010b1c2" containerID="e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a" exitCode=0 Feb 02 21:20:10 crc kubenswrapper[4789]: I0202 21:20:10.990131 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" event={"ID":"bcbed546-a1c3-4ba4-96b4-61471010b1c2","Type":"ContainerDied","Data":"e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a"} Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.008093 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fb5d89e-c901-4857-a5e8-b4d8e3701714\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d85613f300f8d008d919038b40efff3fb67e228e25959cbfd2424640c6bc7a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460e055777327e1734238bf51929751678ea5a757b00a344daa31c8c95e4c463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c292754c81dd4929f8e599e8ec352fd5e90431c60bb741c070a10ffa91fad72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a57
8bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3685c506f77d3807711a442d68da94932064737cfdc5cb74557da135906e24f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:11Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.024408 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:11Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.040623 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:11Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.042118 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.042338 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.042478 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.042660 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.042833 4789 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:11Z","lastTransitionTime":"2026-02-02T21:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.057229 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:11Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.074636 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:11Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.089189 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:11Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.100654 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:11Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.111643 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:11Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.128259 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:11Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.140507 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1716ba11e1b21eb68642e6935312760c389a669a007822290fbe72573bfaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:11Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.145212 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.145258 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.145272 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.145294 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.145311 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:11Z","lastTransitionTime":"2026-02-02T21:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.163289 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:11Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.182598 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:11Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.203046 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:11Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.213319 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b25d791-42e1-4e08-b7da-41803cc40f4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e2a17d3ade1aa229b2729d66fe953acc746673549e0d929076a3bd18839833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-snjrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:11Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.248139 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.248548 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.248771 4789 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.248965 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.249110 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:11Z","lastTransitionTime":"2026-02-02T21:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.352125 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.352195 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.352220 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.352250 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.352273 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:11Z","lastTransitionTime":"2026-02-02T21:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.418929 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.418988 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 21:20:11 crc kubenswrapper[4789]: E0202 21:20:11.419055 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 21:20:11 crc kubenswrapper[4789]: E0202 21:20:11.419218 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.419014 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 21:20:11 crc kubenswrapper[4789]: E0202 21:20:11.419404 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.455374 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.455441 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.455462 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.455489 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.455508 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:11Z","lastTransitionTime":"2026-02-02T21:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.558008 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.558069 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.558086 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.558107 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.558121 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:11Z","lastTransitionTime":"2026-02-02T21:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.668367 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.668432 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.668460 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.668485 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.668502 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:11Z","lastTransitionTime":"2026-02-02T21:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.746332 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 15:50:21.185148484 +0000 UTC
Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.771793 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.771847 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.771864 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.771889 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.771911 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:11Z","lastTransitionTime":"2026-02-02T21:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.875807 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.875893 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.875924 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.875961 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.876050 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:11Z","lastTransitionTime":"2026-02-02T21:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.979413 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.979895 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.979917 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.979944 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:11 crc kubenswrapper[4789]: I0202 21:20:11.979961 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:11Z","lastTransitionTime":"2026-02-02T21:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.000132 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" event={"ID":"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6","Type":"ContainerStarted","Data":"61ddf86cf1be2810942de4465d9f3ff475fea38a612b6b9c033941c8f7ac5286"} Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.000882 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.001030 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.010989 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" event={"ID":"bcbed546-a1c3-4ba4-96b4-61471010b1c2","Type":"ContainerStarted","Data":"ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346"} Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.026998 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:12Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.038935 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.040151 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.049525 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mo
untPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:12Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.072469 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf86cf1be2810942de4465d9f3ff475fea38a612b6b9c033941c8f7ac5286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:12Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.082260 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.082290 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.082302 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.082319 4789 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.082331 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:12Z","lastTransitionTime":"2026-02-02T21:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.086937 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b25d791-42e1-4e08-b7da-41803cc40f4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e2a17d3ade1aa229b2729d66fe953acc746673549e0d929076a3bd18839833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-snjrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:12Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.099790 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fb5d89e-c901-4857-a5e8-b4d8e3701714\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d85613f300f8d008d919038b40efff3fb67e228e25959cbfd2424640c6bc7a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460e055777327e1734238bf51929751678ea5a757b00a344daa31c8c95e4c463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c292754c81dd4929f8e599e8ec352fd5e90431c60bb741c070a10ffa91fad72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3685c506f77d3807711a442d68da94932064737cfdc5cb74557da135906e24f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:12Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.113911 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:12Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.141512 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:12Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.156546 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:12Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.172408 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:12Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.184892 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.184931 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.184942 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.184957 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.184969 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:12Z","lastTransitionTime":"2026-02-02T21:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.193110 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:12Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.214082 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:12Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.232551 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:12Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.252597 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:12Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.266403 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1716ba11e1b21eb68642e6935312760c389a669a007822290fbe72573bfaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:12Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.281872 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:12Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.287408 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.287451 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.287463 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.287480 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.287491 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:12Z","lastTransitionTime":"2026-02-02T21:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.299078 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1716ba11e1b21eb68642e6935312760c389a669a007822290fbe72573bfaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:12Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.315687 4789 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:12Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.332918 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:12Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.350116 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:12Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.379457 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf86cf1be2810942de4465d9f3ff475fea38a
612b6b9c033941c8f7ac5286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:12Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.389781 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.389825 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.389836 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.389853 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.389869 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:12Z","lastTransitionTime":"2026-02-02T21:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.395978 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b25d791-42e1-4e08-b7da-41803cc40f4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e2a17d3ade1aa229b2729d66fe953acc746673549e0d929076a3bd18839833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-snjrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:12Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.414741 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:12Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.433618 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:12Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.451010 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fb5d89e-c901-4857-a5e8-b4d8e3701714\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d85613f300f8d008d919038b40efff3fb67e228e25959cbfd2424640c6bc7a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460e055777327e1734238bf51929751678ea5a757b00a344daa31c8c95e4c463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c292754c81dd4929f8e599e8ec352fd5e90431c60bb741c070a10ffa91fad72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3685c506f77d3807711a442d68da94932064737cfdc5cb74557da135906e24f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:12Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.466632 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:12Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.483467 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:12Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.494283 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.494310 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.494323 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.494340 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.494355 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:12Z","lastTransitionTime":"2026-02-02T21:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.497800 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:12Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.507749 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:12Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.597189 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.597294 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.597316 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.597340 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.597358 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:12Z","lastTransitionTime":"2026-02-02T21:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.699488 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.699521 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.699532 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.699551 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.699562 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:12Z","lastTransitionTime":"2026-02-02T21:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.746759 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 03:53:22.808948424 +0000 UTC
Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.802682 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.802717 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.802726 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.802740 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.802749 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:12Z","lastTransitionTime":"2026-02-02T21:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.906533 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.906637 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.906664 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.906695 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:12 crc kubenswrapper[4789]: I0202 21:20:12.906722 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:12Z","lastTransitionTime":"2026-02-02T21:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.010823 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.010885 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.010904 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.010927 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.010960 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:13Z","lastTransitionTime":"2026-02-02T21:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.019804 4789 generic.go:334] "Generic (PLEG): container finished" podID="bcbed546-a1c3-4ba4-96b4-61471010b1c2" containerID="ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346" exitCode=0
Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.019875 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" event={"ID":"bcbed546-a1c3-4ba4-96b4-61471010b1c2","Type":"ContainerDied","Data":"ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346"}
Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.020376 4789 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.073027 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:13Z is after 2025-08-24T17:21:41Z"
Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.096818 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:13Z is after 2025-08-24T17:21:41Z"
Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.115068 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.115108 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.115121 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.115139 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.115152 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:13Z","lastTransitionTime":"2026-02-02T21:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.117148 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:13Z is after 2025-08-24T17:21:41Z"
Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.139307 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:13Z is after 2025-08-24T17:21:41Z"
Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.155674 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1716ba11e1b21eb68642e6935312760c389a669a007822290fbe72573bfaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:13Z is after 2025-08-24T17:21:41Z"
Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.174668 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:13Z is after 2025-08-24T17:21:41Z"
Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.190067 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:13Z is after 2025-08-24T17:21:41Z"
Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.206207 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:13Z is after 2025-08-24T17:21:41Z"
Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.217905 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.217944 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.217957 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.217979 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.218033 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:13Z","lastTransitionTime":"2026-02-02T21:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.229416 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf86cf1be2810942de4465d9f3ff475fea38a612b6b9c033941c8f7ac5286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:13Z is after 2025-08-24T17:21:41Z"
Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.240737 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b25d791-42e1-4e08-b7da-41803cc40f4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e2a17d3ade1aa229b2729d66fe953acc746673549e0d929076a3bd18839833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-snjrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:13Z is after 2025-08-24T17:21:41Z"
Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.254630 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:13Z is after 2025-08-24T17:21:41Z"
Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.265905 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:13Z is after 2025-08-24T17:21:41Z"
Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.277944 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fb5d89e-c901-4857-a5e8-b4d8e3701714\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d85613f300f8d008d919038b40efff3fb67e228e25959cbfd2424640c6bc7a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460e055777327e1734238bf51929751678ea5a757b00a344daa31c8c95e4c463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c292754c81dd4929f8e599e8ec352fd5e90431c60bb741c070a10ffa91fad72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3685c506f77d3807711a442d68da94932064737cfdc5cb74557da135906e24f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:13Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.289012 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:13Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.321604 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.321648 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.321660 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.321681 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.321694 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:13Z","lastTransitionTime":"2026-02-02T21:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.419109 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.419167 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:20:13 crc kubenswrapper[4789]: E0202 21:20:13.419243 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:20:13 crc kubenswrapper[4789]: E0202 21:20:13.419331 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.419174 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:20:13 crc kubenswrapper[4789]: E0202 21:20:13.419417 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.424523 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.424559 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.424573 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.424613 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.424625 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:13Z","lastTransitionTime":"2026-02-02T21:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.534265 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.534337 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.534366 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.534398 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.534427 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:13Z","lastTransitionTime":"2026-02-02T21:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.638873 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.638937 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.638955 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.638979 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.638996 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:13Z","lastTransitionTime":"2026-02-02T21:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.742521 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.742610 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.742630 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.742655 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.742675 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:13Z","lastTransitionTime":"2026-02-02T21:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.746892 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 08:22:06.154254093 +0000 UTC Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.848901 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.848968 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.848988 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.849012 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.849043 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:13Z","lastTransitionTime":"2026-02-02T21:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.952145 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.952232 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.952258 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.952294 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:13 crc kubenswrapper[4789]: I0202 21:20:13.952319 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:13Z","lastTransitionTime":"2026-02-02T21:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.031693 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" event={"ID":"bcbed546-a1c3-4ba4-96b4-61471010b1c2","Type":"ContainerStarted","Data":"fff0380f801c6909a82bfc8a765cf7a393d7c2f7cc809611ded737a22f448a3a"} Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.031767 4789 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.055881 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.055948 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.055967 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.055993 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.056017 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:14Z","lastTransitionTime":"2026-02-02T21:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.057263 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:14Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.080966 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:14Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.098632 4789 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-6l576" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:14Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.120166 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:14Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.136436 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1716ba11e1b21eb68642e6935312760c389a669a007822290fbe72573bfaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:14Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.154005 4789 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:14Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.159775 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.159840 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.159862 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.159889 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.159907 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:14Z","lastTransitionTime":"2026-02-02T21:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.173861 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:14Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.196176 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:14Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.229905 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf86cf1be2810942de4465d9f3ff475fea38a
612b6b9c033941c8f7ac5286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:14Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.246136 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlsw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b25d791-42e1-4e08-b7da-41803cc40f4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e2a17d3ade1aa229b2729d66fe953acc746673549e0d929076a3bd18839833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-snjrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:14Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.262180 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:14Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.263818 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.263868 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.263886 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.263913 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.263930 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:14Z","lastTransitionTime":"2026-02-02T21:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.285301 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff0380f801c6909a82bfc8a765cf7a393d7c2f7cc809611ded737a22f448a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"starte
dAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:14Z is after 
2025-08-24T17:21:41Z" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.306844 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fb5d89e-c901-4857-a5e8-b4d8e3701714\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d85613f300f8d008d919038b40efff3fb67e228e25959cbfd2424640c6bc7a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460e055777327e1734238bf51929751678ea5a757b00a344daa31c8c95e4c463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c292754c81dd4929f8e599e8ec352fd5e90431c60bb741c070a10ffa91fad72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3685c506f77d3807711a442d68da94932064737cfdc5cb74557da135906e24f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:14Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.327187 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:14Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.367933 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.368022 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.368049 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.368084 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.368109 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:14Z","lastTransitionTime":"2026-02-02T21:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.471560 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.471714 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.471735 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.471813 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.471832 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:14Z","lastTransitionTime":"2026-02-02T21:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.575112 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.575175 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.575192 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.575216 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.575234 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:14Z","lastTransitionTime":"2026-02-02T21:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.678151 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.678200 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.678221 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.678245 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.678264 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:14Z","lastTransitionTime":"2026-02-02T21:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.747452 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 21:01:32.48520697 +0000 UTC Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.780127 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.780166 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.780177 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.780189 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.780199 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:14Z","lastTransitionTime":"2026-02-02T21:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.882476 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.882519 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.882532 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.882550 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.882565 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:14Z","lastTransitionTime":"2026-02-02T21:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.985965 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.986040 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.986059 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.986092 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:14 crc kubenswrapper[4789]: I0202 21:20:14.986112 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:14Z","lastTransitionTime":"2026-02-02T21:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.089631 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.089692 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.089709 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.089735 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.089781 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:15Z","lastTransitionTime":"2026-02-02T21:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.192556 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.192605 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.192619 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.192634 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.192643 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:15Z","lastTransitionTime":"2026-02-02T21:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.298370 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.298448 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.298467 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.298524 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.298542 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:15Z","lastTransitionTime":"2026-02-02T21:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.404252 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.404294 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.404305 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.404320 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.404330 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:15Z","lastTransitionTime":"2026-02-02T21:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.419532 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.419549 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:20:15 crc kubenswrapper[4789]: E0202 21:20:15.419689 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.419714 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:20:15 crc kubenswrapper[4789]: E0202 21:20:15.419805 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:20:15 crc kubenswrapper[4789]: E0202 21:20:15.419990 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.506836 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.506901 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.506915 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.506937 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.506953 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:15Z","lastTransitionTime":"2026-02-02T21:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.609608 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.610198 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.610355 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.610519 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.610697 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:15Z","lastTransitionTime":"2026-02-02T21:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.714174 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.714771 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.714928 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.715084 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.715232 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:15Z","lastTransitionTime":"2026-02-02T21:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.727505 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm"] Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.728932 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.731713 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.731861 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.743948 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlsw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b25d791-42e1-4e08-b7da-41803cc40f4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e2a17d3ade1aa229b2729d66fe953acc746673549e0d929076a3bd18839833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-snjrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:15Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.747566 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 16:34:25.105793189 +0000 UTC Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.759933 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:15Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.778506 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff0380f801c6909a82bfc8a765cf7a393d7c2f7cc809611ded737a22f448a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:15Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.800977 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf86cf1be2810942de4465d9f3ff475fea38a612b6b9c033941c8f7ac5286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:15Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.818643 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.818704 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.818720 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.818744 4789 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.818761 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:15Z","lastTransitionTime":"2026-02-02T21:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.821724 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fb5d89e-c901-4857-a5e8-b4d8e3701714\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d85613f300f8d008d919038b40efff3fb67e228e25959cbfd2424640c6bc7a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460e055777327e1734238bf51929751678ea5a757b00a344daa31c8c95e4c463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c292754c81dd4929f8e599e8ec352fd5e90431c60bb741c070a10ffa91fad72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay
.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3685c506f77d3807711a442d68da94932064737cfdc5cb74557da135906e24f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:15Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.848719 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:15Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.866207 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:15Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.866500 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/39528981-2c85-43f3-8fa0-bfae5c3334cd-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-d49gm\" (UID: \"39528981-2c85-43f3-8fa0-bfae5c3334cd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.866541 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnfbx\" (UniqueName: \"kubernetes.io/projected/39528981-2c85-43f3-8fa0-bfae5c3334cd-kube-api-access-gnfbx\") pod \"ovnkube-control-plane-749d76644c-d49gm\" (UID: \"39528981-2c85-43f3-8fa0-bfae5c3334cd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm" Feb 02 21:20:15 crc 
kubenswrapper[4789]: I0202 21:20:15.866665 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/39528981-2c85-43f3-8fa0-bfae5c3334cd-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-d49gm\" (UID: \"39528981-2c85-43f3-8fa0-bfae5c3334cd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.866784 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39528981-2c85-43f3-8fa0-bfae5c3334cd-env-overrides\") pod \"ovnkube-control-plane-749d76644c-d49gm\" (UID: \"39528981-2c85-43f3-8fa0-bfae5c3334cd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.879054 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:15Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:15 crc 
kubenswrapper[4789]: I0202 21:20:15.898653 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:15Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.915973 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1716ba11e1b21eb68642e6935312760c389a669a007822290fbe72573bfaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:15Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.921092 4789 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.921142 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.921159 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.921184 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.921202 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:15Z","lastTransitionTime":"2026-02-02T21:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.930973 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39528981-2c85-43f3-8fa0-bfae5c3334cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d49gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:15Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.948418 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:15Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.967757 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39528981-2c85-43f3-8fa0-bfae5c3334cd-env-overrides\") pod \"ovnkube-control-plane-749d76644c-d49gm\" (UID: \"39528981-2c85-43f3-8fa0-bfae5c3334cd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.967818 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/39528981-2c85-43f3-8fa0-bfae5c3334cd-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-d49gm\" (UID: \"39528981-2c85-43f3-8fa0-bfae5c3334cd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.967836 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnfbx\" (UniqueName: \"kubernetes.io/projected/39528981-2c85-43f3-8fa0-bfae5c3334cd-kube-api-access-gnfbx\") pod \"ovnkube-control-plane-749d76644c-d49gm\" (UID: \"39528981-2c85-43f3-8fa0-bfae5c3334cd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.967875 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/39528981-2c85-43f3-8fa0-bfae5c3334cd-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-d49gm\" (UID: \"39528981-2c85-43f3-8fa0-bfae5c3334cd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.967994 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:15Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.968654 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/39528981-2c85-43f3-8fa0-bfae5c3334cd-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-d49gm\" (UID: \"39528981-2c85-43f3-8fa0-bfae5c3334cd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.968976 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39528981-2c85-43f3-8fa0-bfae5c3334cd-env-overrides\") pod \"ovnkube-control-plane-749d76644c-d49gm\" (UID: \"39528981-2c85-43f3-8fa0-bfae5c3334cd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 
21:20:15.977302 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/39528981-2c85-43f3-8fa0-bfae5c3334cd-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-d49gm\" (UID: \"39528981-2c85-43f3-8fa0-bfae5c3334cd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.992071 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:15Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:15 crc kubenswrapper[4789]: I0202 21:20:15.993219 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnfbx\" (UniqueName: \"kubernetes.io/projected/39528981-2c85-43f3-8fa0-bfae5c3334cd-kube-api-access-gnfbx\") pod \"ovnkube-control-plane-749d76644c-d49gm\" (UID: \"39528981-2c85-43f3-8fa0-bfae5c3334cd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm" Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.009241 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:16Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.025670 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.025711 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.025720 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.025735 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.025744 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:16Z","lastTransitionTime":"2026-02-02T21:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.042764 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm"
Feb 02 21:20:16 crc kubenswrapper[4789]: W0202 21:20:16.066565 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39528981_2c85_43f3_8fa0_bfae5c3334cd.slice/crio-7afca7e40b92a92f06a1c2d199d05ef71c9a147588b55acfe3451f52d9d52b31 WatchSource:0}: Error finding container 7afca7e40b92a92f06a1c2d199d05ef71c9a147588b55acfe3451f52d9d52b31: Status 404 returned error can't find the container with id 7afca7e40b92a92f06a1c2d199d05ef71c9a147588b55acfe3451f52d9d52b31
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.129116 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.129195 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.129216 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.129248 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.129268 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:16Z","lastTransitionTime":"2026-02-02T21:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.236541 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.236618 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.236632 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.236653 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.236667 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:16Z","lastTransitionTime":"2026-02-02T21:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.339787 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.340127 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.340364 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.340404 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.340437 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:16Z","lastTransitionTime":"2026-02-02T21:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.448363 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.448438 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.448465 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.448497 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.448523 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:16Z","lastTransitionTime":"2026-02-02T21:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.551544 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.551648 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.551669 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.551694 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.551713 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:16Z","lastTransitionTime":"2026-02-02T21:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.654054 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.654080 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.654088 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.654111 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.654121 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:16Z","lastTransitionTime":"2026-02-02T21:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.748120 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 17:32:41.959062311 +0000 UTC
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.756121 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.756184 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.756204 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.756226 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.756244 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:16Z","lastTransitionTime":"2026-02-02T21:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.858896 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.858969 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.858990 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.859025 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.859046 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:16Z","lastTransitionTime":"2026-02-02T21:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.908437 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-vjbpg"]
Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.908854 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg"
Feb 02 21:20:16 crc kubenswrapper[4789]: E0202 21:20:16.908907 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.925499 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39528981-2c85-43f3-8fa0-bfae5c3334cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d49gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:16Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.939328 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vjbpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dc26662-64d3-47f0-9e0d-d340760ca348\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vjbpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:16Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.955223 4789 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:16Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.961862 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.961915 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.961932 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.961955 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.961972 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:16Z","lastTransitionTime":"2026-02-02T21:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.971436 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:16Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:16 crc kubenswrapper[4789]: I0202 21:20:16.990992 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:16Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.009507 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:17Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.025190 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1716ba11e1b21eb68642e6935312760c389a669a007822290fbe72573bfaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea17
7225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:17Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.044365 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w8vkt_2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6/ovnkube-controller/0.log" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.044755 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:17Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.048690 4789 generic.go:334] "Generic (PLEG): container finished" podID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerID="61ddf86cf1be2810942de4465d9f3ff475fea38a612b6b9c033941c8f7ac5286" exitCode=2 Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.048728 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" event={"ID":"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6","Type":"ContainerDied","Data":"61ddf86cf1be2810942de4465d9f3ff475fea38a612b6b9c033941c8f7ac5286"} Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.050244 4789 scope.go:117] "RemoveContainer" containerID="61ddf86cf1be2810942de4465d9f3ff475fea38a612b6b9c033941c8f7ac5286" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.051008 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm" 
event={"ID":"39528981-2c85-43f3-8fa0-bfae5c3334cd","Type":"ContainerStarted","Data":"277fe88585ee146931597a14fe049a3d69197c94e0d84f5dfb334b08cd685723"} Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.051055 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm" event={"ID":"39528981-2c85-43f3-8fa0-bfae5c3334cd","Type":"ContainerStarted","Data":"d4ae303f0f4381207f4dd4a443e366d6e3de2014e9bc69aa644e98a76b239868"} Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.051074 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm" event={"ID":"39528981-2c85-43f3-8fa0-bfae5c3334cd","Type":"ContainerStarted","Data":"7afca7e40b92a92f06a1c2d199d05ef71c9a147588b55acfe3451f52d9d52b31"} Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.067939 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.068307 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.068335 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.068363 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.068384 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:17Z","lastTransitionTime":"2026-02-02T21:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.070722 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff0380f801c6909a82bfc8a765cf7a393d7c2f7cc809611ded737a22f448a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:17Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.079605 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.079680 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9zq9\" (UniqueName: \"kubernetes.io/projected/2dc26662-64d3-47f0-9e0d-d340760ca348-kube-api-access-c9zq9\") pod \"network-metrics-daemon-vjbpg\" (UID: \"2dc26662-64d3-47f0-9e0d-d340760ca348\") " pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:20:17 crc kubenswrapper[4789]: E0202 21:20:17.079730 4789 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 21:20:17 crc kubenswrapper[4789]: E0202 21:20:17.079837 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 21:20:33.079804063 +0000 UTC m=+53.374829112 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.080080 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2dc26662-64d3-47f0-9e0d-d340760ca348-metrics-certs\") pod \"network-metrics-daemon-vjbpg\" (UID: \"2dc26662-64d3-47f0-9e0d-d340760ca348\") " pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.091432 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf86cf1be2810942de4465d9f3ff475fea38a
612b6b9c033941c8f7ac5286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:17Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.105665 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlsw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b25d791-42e1-4e08-b7da-41803cc40f4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e2a17d3ade1aa229b2729d66fe953acc746673549e0d929076a3bd18839833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-snjrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:17Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.117457 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fb5d89e-c901-4857-a5e8-b4d8e3701714\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d85613f300f8d008d919038b40efff3fb67e228e25959cbfd2424640c6bc7a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460e055777327e1734238bf51929751678ea5a757b00a344daa31c8c95e4c463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c292754c81dd4929f8e599e8ec352fd5e90431c60bb741c070a10ffa91fad72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3685c506f77d3807711a442d68da94932064737cfdc5cb74557da135906e24f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:17Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.129104 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:17Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.140082 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:17Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.148829 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:17Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.160102 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:17Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.171991 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39528981-2c85-43f3-8fa0-bfae5c3334cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ae303f0f4381207f4dd4a443e366d6e3de2014e9bc69aa644e98a76b239868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277fe88585ee146931597a14fe049a3d69197c94e0d84f5dfb334b08cd685723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d49gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:17Z is after 2025-08-24T17:21:41Z" Feb 02 
21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.172061 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.172261 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.172281 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.172306 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.172323 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:17Z","lastTransitionTime":"2026-02-02T21:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.181231 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.181309 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.181338 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2dc26662-64d3-47f0-9e0d-d340760ca348-metrics-certs\") pod \"network-metrics-daemon-vjbpg\" (UID: \"2dc26662-64d3-47f0-9e0d-d340760ca348\") " pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.181384 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.181429 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9zq9\" (UniqueName: \"kubernetes.io/projected/2dc26662-64d3-47f0-9e0d-d340760ca348-kube-api-access-c9zq9\") pod \"network-metrics-daemon-vjbpg\" (UID: \"2dc26662-64d3-47f0-9e0d-d340760ca348\") " pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.181485 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:20:17 crc kubenswrapper[4789]: E0202 21:20:17.181836 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:20:33.181811086 +0000 UTC m=+53.476836135 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:20:17 crc kubenswrapper[4789]: E0202 21:20:17.181972 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 21:20:17 crc kubenswrapper[4789]: E0202 21:20:17.182003 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 21:20:17 crc kubenswrapper[4789]: E0202 21:20:17.182022 4789 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 21:20:17 crc kubenswrapper[4789]: E0202 21:20:17.182074 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 21:20:33.182059723 +0000 UTC m=+53.477084782 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 21:20:17 crc kubenswrapper[4789]: E0202 21:20:17.182359 4789 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 21:20:17 crc kubenswrapper[4789]: E0202 21:20:17.182390 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2dc26662-64d3-47f0-9e0d-d340760ca348-metrics-certs podName:2dc26662-64d3-47f0-9e0d-d340760ca348 nodeName:}" failed. No retries permitted until 2026-02-02 21:20:17.682382572 +0000 UTC m=+37.977407591 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2dc26662-64d3-47f0-9e0d-d340760ca348-metrics-certs") pod "network-metrics-daemon-vjbpg" (UID: "2dc26662-64d3-47f0-9e0d-d340760ca348") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 21:20:17 crc kubenswrapper[4789]: E0202 21:20:17.182772 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 21:20:17 crc kubenswrapper[4789]: E0202 21:20:17.182926 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 21:20:17 crc kubenswrapper[4789]: E0202 21:20:17.183042 4789 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 21:20:17 crc kubenswrapper[4789]: E0202 21:20:17.182924 4789 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 21:20:17 crc kubenswrapper[4789]: E0202 21:20:17.183190 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 21:20:33.183173405 +0000 UTC m=+53.478198444 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 21:20:17 crc kubenswrapper[4789]: E0202 21:20:17.183329 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 21:20:33.183303499 +0000 UTC m=+53.478328638 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.186922 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vjbpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dc26662-64d3-47f0-9e0d-d340760ca348\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vjbpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-02T21:20:17Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.206760 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9zq9\" (UniqueName: \"kubernetes.io/projected/2dc26662-64d3-47f0-9e0d-d340760ca348-kube-api-access-c9zq9\") pod \"network-metrics-daemon-vjbpg\" (UID: \"2dc26662-64d3-47f0-9e0d-d340760ca348\") " pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.209737 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:17Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.228192 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:17Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.246175 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:17Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.268117 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:17Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.278714 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.278772 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.278785 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.278806 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.278827 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:17Z","lastTransitionTime":"2026-02-02T21:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.285841 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1716ba11e1b21eb68642e6935312760c389a669a007822290fbe72573bfaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:17Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.302880 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:17Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.321965 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff0380f801c6909a82bfc8a765cf7a393d7c2f7cc809611ded737a22f448a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:17Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.355929 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf86cf1be2810942de4465d9f3ff475fea38a612b6b9c033941c8f7ac5286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61ddf86cf1be2810942de4465d9f3ff475fea38a612b6b9c033941c8f7ac5286\\\",\\\"exitCode\\\":2,\\\"finishedAt\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"49dde39f63d2}, \\\\\\\"AssertTimestampMonotonic\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x4e2a35b}, \\\\\\\"ConditionResult\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"b\\\\\\\"}, value:true}, \\\\\\\"ConditionTimestamp\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x649dde39f63cf}, \\\\\\\"ConditionTimestampMonotonic\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x4e2a359}, \\\\\\\"FreezerState\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"s\\\\\\\"}, value:\\\\\\\"running\\\\\\\"}, \\\\\\\"InactiveEnterTimestamp\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x0}, \\\\\\\"InactiveEnterTimestampMonotonic\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x0}, \\\\\\\"InactiveExitTimestamp\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x649dde39f67cf}, \\\\\\\"InactiveExitTimestampMonotonic\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x4e2a758}, \\\\\\\"InvocationID\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"ay\\\\\\\"}, value:[]uint8{0x40, 0x56, 0x20, 0x71, 0x49, 0xf3, 0x49, 0x4e, 0xa1, 0xac, 0x9d, 0xa7, 0xf2, 0xd, 0xba, 0xdd}}, \\\\\\\"Job\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"(uo)\\\\\\\"}, value:[]interface {}{0x0, \\\\\\\"/\\\\\\\"}}, \\\\\\\"StateChangeTimestamp\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x649dde39f67cf}, \\\\\\\"StateChangeTimestampMonotonic\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x4e2a758}, \\\\\\\"SubState\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"s\\\\\\\"}, value:\\\\\\\"active\\\\\\\"}}, []string{\\\\\\\"Conditions\\\\\\\", \\\\\\\"Asserts\\\\\\\"}}, Sequence:0x5f}\\\\nI0202 21:20:16.310677 6057 
ud\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:17Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.382034 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.382066 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.382078 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.382094 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.382113 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:17Z","lastTransitionTime":"2026-02-02T21:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.383807 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b25d791-42e1-4e08-b7da-41803cc40f4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e2a17d3ade1aa229b2729d66fe953acc746673549e0d929076a3bd18839833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-snjrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:17Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.408408 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fb5d89e-c901-4857-a5e8-b4d8e3701714\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d85613f300f8d008d919038b40efff3fb67e228e25959cbfd2424640c6bc7a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460e055777327e1734238bf51929751678ea5a757b00a344daa31c8c95e4c463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c292754c81dd4929f8e599e8ec352fd5e90431c60bb741c070a10ffa91fad72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3685c506f77d3807711a442d68da94932064737cfdc5cb74557da135906e24f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:17Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.418465 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:20:17 crc kubenswrapper[4789]: E0202 21:20:17.418571 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.418620 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.418658 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:20:17 crc kubenswrapper[4789]: E0202 21:20:17.418668 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:20:17 crc kubenswrapper[4789]: E0202 21:20:17.418794 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.422423 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.
126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:17Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.434729 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:17Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.445633 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:17Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.457566 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:17Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.473993 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.474044 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.474058 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.474076 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.474086 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:17Z","lastTransitionTime":"2026-02-02T21:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:17 crc kubenswrapper[4789]: E0202 21:20:17.491995 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"007d0037-9447-42ea-b3a4-6e1f0d669307\\\",\\\"systemUUID\\\":\\\"53ecbfdd-0b43-4d74-98ca-c7bcbc951d86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:17Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.498008 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.498034 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.498044 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.498059 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.498071 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:17Z","lastTransitionTime":"2026-02-02T21:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:17 crc kubenswrapper[4789]: E0202 21:20:17.511426 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"007d0037-9447-42ea-b3a4-6e1f0d669307\\\",\\\"systemUUID\\\":\\\"53ecbfdd-0b43-4d74-98ca-c7bcbc951d86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:17Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.515311 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.515329 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.515339 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.515351 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.515360 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:17Z","lastTransitionTime":"2026-02-02T21:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:17 crc kubenswrapper[4789]: E0202 21:20:17.528418 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"007d0037-9447-42ea-b3a4-6e1f0d669307\\\",\\\"systemUUID\\\":\\\"53ecbfdd-0b43-4d74-98ca-c7bcbc951d86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:17Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.531892 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.531912 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.531919 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.531931 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.531939 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:17Z","lastTransitionTime":"2026-02-02T21:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:17 crc kubenswrapper[4789]: E0202 21:20:17.545778 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"007d0037-9447-42ea-b3a4-6e1f0d669307\\\",\\\"systemUUID\\\":\\\"53ecbfdd-0b43-4d74-98ca-c7bcbc951d86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:17Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.549534 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.549618 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.549639 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.549663 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.549681 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:17Z","lastTransitionTime":"2026-02-02T21:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:17 crc kubenswrapper[4789]: E0202 21:20:17.563976 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"007d0037-9447-42ea-b3a4-6e1f0d669307\\\",\\\"systemUUID\\\":\\\"53ecbfdd-0b43-4d74-98ca-c7bcbc951d86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:17Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:17 crc kubenswrapper[4789]: E0202 21:20:17.564106 4789 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.566042 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.566072 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.566083 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.566102 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.566115 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:17Z","lastTransitionTime":"2026-02-02T21:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.668222 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.668255 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.668265 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.668279 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.668289 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:17Z","lastTransitionTime":"2026-02-02T21:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.687409 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2dc26662-64d3-47f0-9e0d-d340760ca348-metrics-certs\") pod \"network-metrics-daemon-vjbpg\" (UID: \"2dc26662-64d3-47f0-9e0d-d340760ca348\") " pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:20:17 crc kubenswrapper[4789]: E0202 21:20:17.687604 4789 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 21:20:17 crc kubenswrapper[4789]: E0202 21:20:17.687681 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2dc26662-64d3-47f0-9e0d-d340760ca348-metrics-certs podName:2dc26662-64d3-47f0-9e0d-d340760ca348 nodeName:}" failed. No retries permitted until 2026-02-02 21:20:18.687661928 +0000 UTC m=+38.982687027 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2dc26662-64d3-47f0-9e0d-d340760ca348-metrics-certs") pod "network-metrics-daemon-vjbpg" (UID: "2dc26662-64d3-47f0-9e0d-d340760ca348") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.748493 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 12:17:11.389570257 +0000 UTC Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.770590 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.770624 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.770633 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.770646 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.770655 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:17Z","lastTransitionTime":"2026-02-02T21:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.873669 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.873705 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.873713 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.873725 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.873735 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:17Z","lastTransitionTime":"2026-02-02T21:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.976962 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.976998 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.977008 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.977024 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:17 crc kubenswrapper[4789]: I0202 21:20:17.977034 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:17Z","lastTransitionTime":"2026-02-02T21:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.057173 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w8vkt_2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6/ovnkube-controller/1.log" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.057957 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w8vkt_2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6/ovnkube-controller/0.log" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.061330 4789 generic.go:334] "Generic (PLEG): container finished" podID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerID="ca031da70bd23ada7cfba11c78c5962555189be4b845b36d92ee2737df4e8bb7" exitCode=1 Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.061367 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" event={"ID":"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6","Type":"ContainerDied","Data":"ca031da70bd23ada7cfba11c78c5962555189be4b845b36d92ee2737df4e8bb7"} Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.061399 4789 scope.go:117] "RemoveContainer" containerID="61ddf86cf1be2810942de4465d9f3ff475fea38a612b6b9c033941c8f7ac5286" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.062861 4789 scope.go:117] "RemoveContainer" containerID="ca031da70bd23ada7cfba11c78c5962555189be4b845b36d92ee2737df4e8bb7" Feb 02 21:20:18 crc kubenswrapper[4789]: E0202 21:20:18.063210 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-w8vkt_openshift-ovn-kubernetes(2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.079545 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.079650 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.079676 4789 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.079705 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.079729 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:18Z","lastTransitionTime":"2026-02-02T21:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.082543 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:18Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.097591 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:18Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.112529 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:18Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.133009 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:18Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.152148 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1716ba11e1b21eb68642e6935312760c389a669a007822290fbe72573bfaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea17
7225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:18Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.167942 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39528981-2c85-43f3-8fa0-bfae5c3334cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ae303f0f4381207f4dd4a443e366d6e3de2014e9bc69aa644e98a76b239868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277fe88585ee146931597a14fe049a3d69197c94e0d84f5dfb334b08cd685723\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d49gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:18Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.182900 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.182948 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.182961 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.182978 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.182988 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:18Z","lastTransitionTime":"2026-02-02T21:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.187653 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vjbpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dc26662-64d3-47f0-9e0d-d340760ca348\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vjbpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:18Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.210881 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:18Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.233973 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff0380f801c6909a82bfc8a765cf7a393d7c2f7cc809611ded737a22f448a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:18Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.263332 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca031da70bd23ada7cfba11c78c5962555189be4b845b36d92ee2737df4e8bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61ddf86cf1be2810942de4465d9f3ff475fea38a612b6b9c033941c8f7ac5286\\\",\\\"exitCode\\\":2,\\\"finishedAt\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"49dde39f63d2}, \\\\\\\"AssertTimestampMonotonic\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x4e2a35b}, \\\\\\\"ConditionResult\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"b\\\\\\\"}, value:true}, \\\\\\\"ConditionTimestamp\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x649dde39f63cf}, \\\\\\\"ConditionTimestampMonotonic\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x4e2a359}, \\\\\\\"FreezerState\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"s\\\\\\\"}, value:\\\\\\\"running\\\\\\\"}, \\\\\\\"InactiveEnterTimestamp\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x0}, \\\\\\\"InactiveEnterTimestampMonotonic\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x0}, \\\\\\\"InactiveExitTimestamp\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x649dde39f67cf}, \\\\\\\"InactiveExitTimestampMonotonic\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x4e2a758}, \\\\\\\"InvocationID\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"ay\\\\\\\"}, value:[]uint8{0x40, 0x56, 0x20, 0x71, 0x49, 0xf3, 0x49, 0x4e, 0xa1, 0xac, 0x9d, 0xa7, 0xf2, 0xd, 0xba, 0xdd}}, \\\\\\\"Job\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"(uo)\\\\\\\"}, value:[]interface {}{0x0, \\\\\\\"/\\\\\\\"}}, \\\\\\\"StateChangeTimestamp\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x649dde39f67cf}, \\\\\\\"StateChangeTimestampMonotonic\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x4e2a758}, \\\\\\\"SubState\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"s\\\\\\\"}, value:\\\\\\\"active\\\\\\\"}}, []string{\\\\\\\"Conditions\\\\\\\", \\\\\\\"Asserts\\\\\\\"}}, Sequence:0x5f}\\\\nI0202 21:20:16.310677 6057 
ud\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca031da70bd23ada7cfba11c78c5962555189be4b845b36d92ee2737df4e8bb7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T21:20:18Z\\\",\\\"message\\\":\\\"reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 21:20:18.018667 6276 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 21:20:18.018180 6276 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, 
failed\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:18Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.277295 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b25d791-42e1-4e08-b7da-41803cc40f4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e2a17d3ade1aa229b2729d66fe953acc746673549e0d929076a3bd18839833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-snjrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.16
8.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:18Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.290796 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.290847 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.290871 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.290953 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.290971 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:18Z","lastTransitionTime":"2026-02-02T21:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.294675 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fb5d89e-c901-4857-a5e8-b4d8e3701714\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d85613f300f8d008d919038b40efff3fb67e228e25959cbfd2424640c6bc7a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460e055777327e1734238bf51929751678ea5a757b00a344daa31c8c95e4c463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c292754c81dd4929f8e599e8ec352fd5e90431c60bb741c070a10ffa91fad72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3685c506f77d3807711a442d68da94932064737cfdc5cb74557da135906e24f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:18Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.313330 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:18Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.327703 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:18Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.342432 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:18Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.360470 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:18Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.394155 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.394215 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.394233 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.394256 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.394273 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:18Z","lastTransitionTime":"2026-02-02T21:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.419481 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:20:18 crc kubenswrapper[4789]: E0202 21:20:18.419756 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.497136 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.497189 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.497208 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.497232 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.497250 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:18Z","lastTransitionTime":"2026-02-02T21:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.600212 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.600271 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.600289 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.600312 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.600331 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:18Z","lastTransitionTime":"2026-02-02T21:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.698407 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2dc26662-64d3-47f0-9e0d-d340760ca348-metrics-certs\") pod \"network-metrics-daemon-vjbpg\" (UID: \"2dc26662-64d3-47f0-9e0d-d340760ca348\") " pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:20:18 crc kubenswrapper[4789]: E0202 21:20:18.698668 4789 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 21:20:18 crc kubenswrapper[4789]: E0202 21:20:18.698774 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2dc26662-64d3-47f0-9e0d-d340760ca348-metrics-certs podName:2dc26662-64d3-47f0-9e0d-d340760ca348 nodeName:}" failed. No retries permitted until 2026-02-02 21:20:20.698741615 +0000 UTC m=+40.993766674 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2dc26662-64d3-47f0-9e0d-d340760ca348-metrics-certs") pod "network-metrics-daemon-vjbpg" (UID: "2dc26662-64d3-47f0-9e0d-d340760ca348") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.703610 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.703656 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.703673 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.703697 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.703714 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:18Z","lastTransitionTime":"2026-02-02T21:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.749526 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 08:05:42.358464455 +0000 UTC Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.806364 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.806407 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.806423 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.806447 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.806464 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:18Z","lastTransitionTime":"2026-02-02T21:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
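[Editor's sketch, illustrative only, not kubelet source.] The "No retries permitted until ... (durationBeforeRetry 2s)" record above reflects per-operation exponential backoff in the volume manager; a minimal Go sketch of that pattern follows, assuming a 500ms initial delay that doubles per failure up to a cap (consistent with the 2s seen here after a few failed MountVolume.SetUp attempts). The type and field names are invented for the example.

package main

import (
	"fmt"
	"time"
)

type backoff struct {
	initial time.Duration // delay after the first failure
	factor  float64       // multiplier applied on each later failure
	max     time.Duration // upper bound on the delay
	current time.Duration // delay before the next retry is permitted
}

// next records one more failure and returns the delay before a retry
// is permitted, i.e. the log's "durationBeforeRetry".
func (b *backoff) next() time.Duration {
	if b.current == 0 {
		b.current = b.initial
	} else {
		b.current = time.Duration(float64(b.current) * b.factor)
		if b.current > b.max {
			b.current = b.max
		}
	}
	return b.current
}

func main() {
	b := &backoff{initial: 500 * time.Millisecond, factor: 2, max: 2 * time.Minute}
	for i := 1; i <= 4; i++ {
		// prints 500ms, 1s, 2s, 4s: the third failure matches the log's 2s
		fmt.Printf("failure %d: no retries permitted for %v\n", i, b.next())
	}
}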
Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.909536 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.909674 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.909777 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.909978 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:18 crc kubenswrapper[4789]: I0202 21:20:18.910071 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:18Z","lastTransitionTime":"2026-02-02T21:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.013092 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.013141 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.013157 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.013181 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.013202 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:19Z","lastTransitionTime":"2026-02-02T21:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.068460 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w8vkt_2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6/ovnkube-controller/1.log"
Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.115753 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.115815 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.115838 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.115865 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.115883 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:19Z","lastTransitionTime":"2026-02-02T21:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.218862 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.218902 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.218911 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.218925 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.218935 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:19Z","lastTransitionTime":"2026-02-02T21:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.322316 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.322353 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.322365 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.322382 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.322395 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:19Z","lastTransitionTime":"2026-02-02T21:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.419100 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.419099 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.419225 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 21:20:19 crc kubenswrapper[4789]: E0202 21:20:19.419973 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 21:20:19 crc kubenswrapper[4789]: E0202 21:20:19.420110 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 21:20:19 crc kubenswrapper[4789]: E0202 21:20:19.419801 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
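[Editor's sketch, illustrative only.] Every NotReady record above carries the same root cause: "no CNI configuration file in /etc/kubernetes/cni/net.d/". A minimal Go sketch of that readiness check follows, assuming the runtime simply looks for *.conf, *.conflist, or *.json entries in its configured CNI directory (much as libcni's config loading does); the function name and messages are invented for the example.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir holds at least one CNI config file.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	// Directory taken from the log records above.
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil || !ok {
		// Until the network plugin (here, ovn-kubernetes/multus) writes
		// its config, NetworkReady stays false and the node stays NotReady.
		fmt.Println("network plugin not ready: no CNI configuration file")
		return
	}
	fmt.Println("CNI configuration present")
}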
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.429904 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.429992 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.430023 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.430055 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.430080 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:19Z","lastTransitionTime":"2026-02-02T21:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.532820 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.532954 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.532980 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.533003 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.533021 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:19Z","lastTransitionTime":"2026-02-02T21:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.636287 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.636342 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.636360 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.636382 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.636398 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:19Z","lastTransitionTime":"2026-02-02T21:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.739545 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.739706 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.739729 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.739761 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.739793 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:19Z","lastTransitionTime":"2026-02-02T21:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.749854 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 13:21:08.310992438 +0000 UTC Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.843326 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.843384 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.843403 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.843437 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.843455 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:19Z","lastTransitionTime":"2026-02-02T21:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.947275 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.947347 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.947371 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.947404 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:19 crc kubenswrapper[4789]: I0202 21:20:19.947426 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:19Z","lastTransitionTime":"2026-02-02T21:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
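[Editor's sketch, illustrative only.] The recurring "Failed to update status for pod" records fail because the network-node-identity webhook's serving certificate is outside its validity window ("x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:18Z is after 2025-08-24T17:21:41Z"). Go's crypto/x509 enforces NotBefore/NotAfter during chain verification; the sketch below restates just that step. The expiry and clock values come from the log; NotBefore and the helper name are placeholders.

package main

import (
	"crypto/x509"
	"fmt"
	"time"
)

// checkValidity mirrors the NotBefore/NotAfter test; a real TLS client
// gets this via (*x509.Certificate).Verify as part of chain building.
func checkValidity(cert *x509.Certificate, now time.Time) error {
	if now.Before(cert.NotBefore) {
		return fmt.Errorf("certificate is not yet valid: current time %s is before %s",
			now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
	}
	if now.After(cert.NotAfter) {
		return fmt.Errorf("certificate has expired: current time %s is after %s",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	}
	return nil
}

func main() {
	cert := &x509.Certificate{
		NotBefore: time.Date(2025, 8, 23, 17, 21, 41, 0, time.UTC), // placeholder
		NotAfter:  time.Date(2025, 8, 24, 17, 21, 41, 0, time.UTC), // from the log
	}
	now := time.Date(2026, 2, 2, 21, 20, 18, 0, time.UTC) // node clock, from the log
	fmt.Println(checkValidity(cert, now))
}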
Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.050651 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.050721 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.050745 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.050775 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.050797 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:20Z","lastTransitionTime":"2026-02-02T21:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.154102 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.154173 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.154194 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.154224 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.154246 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:20Z","lastTransitionTime":"2026-02-02T21:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.257160 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.257233 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.257252 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.257278 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.257322 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:20Z","lastTransitionTime":"2026-02-02T21:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.360059 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.360139 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.360162 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.360189 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.360206 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:20Z","lastTransitionTime":"2026-02-02T21:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.419185 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg"
Feb 02 21:20:20 crc kubenswrapper[4789]: E0202 21:20:20.419436 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348"
Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.447794 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff0380f801c6909a82bfc8a765cf7a393d7c2f7cc809611ded737a22f448a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:20Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.463680 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.463749 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:20 crc 
kubenswrapper[4789]: I0202 21:20:20.463774 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.463803 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.463825 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:20Z","lastTransitionTime":"2026-02-02T21:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.479046 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca031da70bd23ada7cfba11c78c5962555189be4
b845b36d92ee2737df4e8bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61ddf86cf1be2810942de4465d9f3ff475fea38a612b6b9c033941c8f7ac5286\\\",\\\"exitCode\\\":2,\\\"finishedAt\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"49dde39f63d2}, \\\\\\\"AssertTimestampMonotonic\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x4e2a35b}, \\\\\\\"ConditionResult\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"b\\\\\\\"}, value:true}, \\\\\\\"ConditionTimestamp\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x649dde39f63cf}, \\\\\\\"ConditionTimestampMonotonic\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x4e2a359}, \\\\\\\"FreezerState\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"s\\\\\\\"}, value:\\\\\\\"running\\\\\\\"}, \\\\\\\"InactiveEnterTimestamp\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x0}, \\\\\\\"InactiveEnterTimestampMonotonic\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x0}, \\\\\\\"InactiveExitTimestamp\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x649dde39f67cf}, \\\\\\\"InactiveExitTimestampMonotonic\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x4e2a758}, \\\\\\\"InvocationID\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"ay\\\\\\\"}, value:[]uint8{0x40, 0x56, 0x20, 0x71, 0x49, 0xf3, 0x49, 0x4e, 0xa1, 0xac, 0x9d, 0xa7, 0xf2, 0xd, 0xba, 0xdd}}, \\\\\\\"Job\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"(uo)\\\\\\\"}, value:[]interface {}{0x0, \\\\\\\"/\\\\\\\"}}, \\\\\\\"StateChangeTimestamp\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x649dde39f67cf}, \\\\\\\"StateChangeTimestampMonotonic\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x4e2a758}, \\\\\\\"SubState\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"s\\\\\\\"}, value:\\\\\\\"active\\\\\\\"}}, []string{\\\\\\\"Conditions\\\\\\\", \\\\\\\"Asserts\\\\\\\"}}, Sequence:0x5f}\\\\nI0202 21:20:16.310677 6057 ud\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca031da70bd23ada7cfba11c78c5962555189be4b845b36d92ee2737df4e8bb7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T21:20:18Z\\\",\\\"message\\\":\\\"reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 21:20:18.018667 6276 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} 
selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 21:20:18.018180 6276 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-ope
nvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:20Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.495962 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlsw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b25d791-42e1-4e08-b7da-41803cc40f4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e2a17d3ade1aa229b2729d66fe953acc746673549e0d929076a3bd18839833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-snjrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:20Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.513931 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:20Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.537790 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:20Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.558002 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fb5d89e-c901-4857-a5e8-b4d8e3701714\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d85613f300f8d008d919038b40efff3fb67e228e25959cbfd2424640c6bc7a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460e055777327e1734238bf51929751678ea5a757b00a344daa31c8c95e4c463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c292754c81dd4929f8e599e8ec352fd5e90431c60bb741c070a10ffa91fad72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3685c506f77d3807711a442d68da94932064737cfdc5cb74557da135906e24f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:20Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.566639 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.566836 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.566970 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.567152 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.567287 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:20Z","lastTransitionTime":"2026-02-02T21:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.578119 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:20Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.600767 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:20Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.624362 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:20Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.644231 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:20Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.664676 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:20Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.670050 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.670122 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.670135 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.670152 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.670185 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:20Z","lastTransitionTime":"2026-02-02T21:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.682258 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1716ba11e1b21eb68642e6935312760c389a669a007822290fbe72573bfaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:20Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.701188 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39528981-2c85-43f3-8fa0-bfae5c3334cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ae303f0f4381207f4dd4a443e366d6e3de2014e9bc69aa644e98a76b239868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277fe88585ee146931597a14fe049a3d69197c94e0d84f5dfb334b08cd685723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:
15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d49gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:20Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.717970 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vjbpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dc26662-64d3-47f0-9e0d-d340760ca348\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vjbpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:20Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.720549 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2dc26662-64d3-47f0-9e0d-d340760ca348-metrics-certs\") pod \"network-metrics-daemon-vjbpg\" (UID: \"2dc26662-64d3-47f0-9e0d-d340760ca348\") " pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:20:20 crc kubenswrapper[4789]: E0202 21:20:20.720919 4789 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 21:20:20 crc kubenswrapper[4789]: E0202 21:20:20.721146 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2dc26662-64d3-47f0-9e0d-d340760ca348-metrics-certs podName:2dc26662-64d3-47f0-9e0d-d340760ca348 nodeName:}" failed. No retries permitted until 2026-02-02 21:20:24.721116217 +0000 UTC m=+45.016141266 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2dc26662-64d3-47f0-9e0d-d340760ca348-metrics-certs") pod "network-metrics-daemon-vjbpg" (UID: "2dc26662-64d3-47f0-9e0d-d340760ca348") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.735833 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:20Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.750303 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 00:43:37.617339581 +0000 UTC Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.754038 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:20Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.773695 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.773765 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.773791 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.773820 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.773842 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:20Z","lastTransitionTime":"2026-02-02T21:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.878162 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.878221 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.878238 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.878261 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.878280 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:20Z","lastTransitionTime":"2026-02-02T21:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.981561 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.981637 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.981655 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.981679 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:20 crc kubenswrapper[4789]: I0202 21:20:20.981697 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:20Z","lastTransitionTime":"2026-02-02T21:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.084714 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.084754 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.084765 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.084795 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.084807 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:21Z","lastTransitionTime":"2026-02-02T21:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.188171 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.188418 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.188436 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.188461 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.188481 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:21Z","lastTransitionTime":"2026-02-02T21:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.291737 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.291786 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.291803 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.291824 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.291840 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:21Z","lastTransitionTime":"2026-02-02T21:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.395501 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.395572 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.395631 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.395656 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.395676 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:21Z","lastTransitionTime":"2026-02-02T21:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.419341 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.419345 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:20:21 crc kubenswrapper[4789]: E0202 21:20:21.419498 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:20:21 crc kubenswrapper[4789]: E0202 21:20:21.419661 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.419955 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:20:21 crc kubenswrapper[4789]: E0202 21:20:21.420272 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.498292 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.498757 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.498907 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.499059 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.499189 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:21Z","lastTransitionTime":"2026-02-02T21:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.602717 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.603017 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.603102 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.603190 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.603323 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:21Z","lastTransitionTime":"2026-02-02T21:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.705787 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.705887 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.705911 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.705938 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.705958 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:21Z","lastTransitionTime":"2026-02-02T21:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.750869 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 11:16:03.519685826 +0000 UTC Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.809693 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.810091 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.810109 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.810134 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.810154 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:21Z","lastTransitionTime":"2026-02-02T21:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.912942 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.913008 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.913026 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.913053 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:21 crc kubenswrapper[4789]: I0202 21:20:21.913073 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:21Z","lastTransitionTime":"2026-02-02T21:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.016678 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.016739 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.016758 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.016786 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.016806 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:22Z","lastTransitionTime":"2026-02-02T21:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.120168 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.120237 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.120258 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.120286 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.120305 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:22Z","lastTransitionTime":"2026-02-02T21:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.224451 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.224502 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.224520 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.224543 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.224560 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:22Z","lastTransitionTime":"2026-02-02T21:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.327839 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.327888 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.327906 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.327929 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.327946 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:22Z","lastTransitionTime":"2026-02-02T21:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.419700 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:20:22 crc kubenswrapper[4789]: E0202 21:20:22.420157 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.420431 4789 scope.go:117] "RemoveContainer" containerID="5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.434458 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.434540 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.434569 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.434635 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.434659 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:22Z","lastTransitionTime":"2026-02-02T21:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.453168 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.454438 4789 scope.go:117] "RemoveContainer" containerID="ca031da70bd23ada7cfba11c78c5962555189be4b845b36d92ee2737df4e8bb7" Feb 02 21:20:22 crc kubenswrapper[4789]: E0202 21:20:22.454844 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-w8vkt_openshift-ovn-kubernetes(2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.477782 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fb5d89e-c901-4857-a5e8-b4d8e3701714\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d85613f300f8d008d919038b40efff3fb67e228e25959cbfd2424640c6bc7a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460e055777327e1734238bf51929751678ea5a757b00a344daa31c8c95e4c463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c292754c81dd4929f8e599e8ec352fd5e90431c60bb741c070a10ffa91fad72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3685c506f77d3807711a442d68da94932064737cfdc5cb74557da135906e24f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:22Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.498116 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:22Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.522495 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:22Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.538918 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.538973 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.538994 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.539019 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.539040 4789 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:22Z","lastTransitionTime":"2026-02-02T21:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.539129 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:22Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.557376 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:22Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.572712 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1716ba11e1b21eb68642e6935312760c389a669a007822290fbe72573bfaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:22Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.588305 4789 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39528981-2c85-43f3-8fa0-bfae5c3334cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ae303f0f4381207f4dd4a443e366d6e3de2014e9bc69aa644e98a76b239868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277fe88585ee146931597a14fe049a3d69197c94e0d84f5dfb334b08cd685723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d49gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:22Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.602484 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vjbpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dc26662-64d3-47f0-9e0d-d340760ca348\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vjbpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:22Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.615035 4789 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:22Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.632390 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:22Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.642068 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.642121 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.642139 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.642162 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.642180 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:22Z","lastTransitionTime":"2026-02-02T21:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.650255 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:22Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.669509 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
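
The recurring NodeNotReady condition quotes the CRI runtime: NetworkReady stays false because no CNI configuration file exists yet in /etc/kubernetes/cni/net.d/, which is expected while ovnkube-controller (crash-looping later in this log) has not written one, and it is why new pods sit in ContainerCreating. A sketch of the directory probe the message implies, assuming the check amounts to "any .conf/.conflist/.json file present" (the real logic lives in the container runtime's CNI handling, not here):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// The kubelet message points at this directory; the network is
	// considered ready only once a CNI config file shows up here.
	dir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("NetworkReady=false:", err)
		return
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("found CNI config:", filepath.Join(dir, e.Name()))
			return
		}
	}
	fmt.Println("NetworkReady=false: no CNI configuration file in", dir)
}
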
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:22Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.684202 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlsw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b25d791-42e1-4e08-b7da-41803cc40f4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e2a17d3ade1aa229b2729d66fe953acc746673549e0d929076a3bd18839833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-snjrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:22Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.703823 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:22Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.721649 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status 
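
Each err value in these entries carries the strategic-merge patch the kubelet's status manager tried to apply to the pod's status: the $setElementOrder/conditions directive pins the ordering of the conditions list, conditions itself lists only the entries being updated, and metadata.uid guards against patching a recreated pod of the same name. A sketch that rebuilds a patch of this shape from values taken from the log (abbreviated for readability; this mirrors the payload format, not kubelet internals):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Shape of the status patches seen throughout this log.
	patch := map[string]any{
		"metadata": map[string]any{"uid": "2dc26662-64d3-47f0-9e0d-d340760ca348"},
		"status": map[string]any{
			"$setElementOrder/conditions": []map[string]string{
				{"type": "PodReadyToStartContainers"},
				{"type": "Initialized"},
				{"type": "Ready"},
				{"type": "ContainersReady"},
				{"type": "PodScheduled"},
			},
			"conditions": []map[string]any{{
				"type":               "Ready",
				"status":             "False",
				"reason":             "ContainersNotReady",
				"message":            "containers with unready status: [network-metrics-daemon kube-rbac-proxy]",
				"lastTransitionTime": "2026-02-02T21:20:16Z",
			}},
		},
	}
	out, err := json.MarshalIndent(patch, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
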
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff0380f801c6909a82bfc8a765cf7a393d7c2f7cc809611ded737a22f448a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:22Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.745144 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.745195 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:22 crc 
kubenswrapper[4789]: I0202 21:20:22.745212 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.745235 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.745253 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:22Z","lastTransitionTime":"2026-02-02T21:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.748623 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca031da70bd23ada7cfba11c78c5962555189be4
b845b36d92ee2737df4e8bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca031da70bd23ada7cfba11c78c5962555189be4b845b36d92ee2737df4e8bb7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T21:20:18Z\\\",\\\"message\\\":\\\"reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 21:20:18.018667 6276 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 21:20:18.018180 6276 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
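
The "back-off 10s restarting failed container" messages for kube-apiserver-check-endpoints and ovnkube-controller are the kubelet's crash-loop backoff; to my understanding the delay starts at 10s, roughly doubles on each failed restart, and caps at five minutes, which is what CrashLoopBackOff reports while it waits. A sketch of that schedule, under those assumed constants:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed kubelet-style crash-loop backoff: 10s initial delay,
	// doubling per restart, capped at 5 minutes.
	const (
		initial  = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	delay := initial
	for restart := 1; restart <= 8; restart++ {
		fmt.Printf("restart %d: back-off %s\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
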
pod=ovnkube-node-w8vkt_openshift-ovn-kubernetes(2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:22Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.752671 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 09:34:54.935062087 +0000 UTC Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.849589 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.849673 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.849693 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.849718 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.849736 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:22Z","lastTransitionTime":"2026-02-02T21:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.952901 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.952983 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.953008 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.953041 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:22 crc kubenswrapper[4789]: I0202 21:20:22.953066 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:22Z","lastTransitionTime":"2026-02-02T21:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.055947 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.056010 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.056034 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.056064 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.056086 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:23Z","lastTransitionTime":"2026-02-02T21:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.091411 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.094115 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"66a4db3799101ccca8a89d6bfd2c9d36940b8710ee3d256e47cd61cfe6ac7c07"} Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.094627 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.116576 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:23Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.133711 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status 
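
The certificate_manager.go line above is telling in its own right: the kubelet-serving certificate is valid until 2026-02-24, but its rotation deadline of 2026-01-01 is already in the past, so rotation is long overdue, consistent with a cluster image resumed months after its certificates lapsed. If I recall correctly, client-go's certificate manager picks that deadline at a jittered 70-90% of the validity window; a sketch under that assumption (the NotBefore value below is assumed, since only the expiration appears in the log):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline mimics, as an assumption rather than verbatim,
// client-go's certificate manager: rotate at a random point 70-90%
// of the way through the certificate's validity window.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC) // assumed issue time
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)  // expiration from the log
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
}
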
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:23Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.149753 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:23Z is after 2025-08-24T17:21:41Z"
Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.161691 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.161748 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.161771 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.161800 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.161823 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:23Z","lastTransitionTime":"2026-02-02T21:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.166066 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vjbpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dc26662-64d3-47f0-9e0d-d340760ca348\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vjbpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:23Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.186544 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:23Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.208311 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:23Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.228327 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:23Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.247447 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:23Z is after 2025-08-24T17:21:41Z"
Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.264299 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.264550 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.264732 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.264937 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.265138 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:23Z","lastTransitionTime":"2026-02-02T21:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.265169 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1716ba11e1b21eb68642e6935312760c389a669a007822290fbe72573bfaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:23Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.282441 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39528981-2c85-43f3-8fa0-bfae5c3334cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ae303f0f4381207f4dd4a443e366d6e3de2014e9bc69aa644e98a76b239868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277fe88585ee146931597a14fe049a3d69197c94e0d84f5dfb334b08cd685723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:
15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d49gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:23Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.303656 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshi
ft-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66a4db3799101ccca8a89d6bfd2c9d36940b8710ee3d256e47cd61cfe6ac7c07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] 
pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:23Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.330102 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff0380f801c6909a82bfc8a765cf7a393d7c2f7cc809611ded737a22f448a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:23Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.356175 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca031da70bd23ada7cfba11c78c5962555189be4b845b36d92ee2737df4e8bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca031da70bd23ada7cfba11c78c5962555189be4b845b36d92ee2737df4e8bb7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T21:20:18Z\\\",\\\"message\\\":\\\"reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 21:20:18.018667 6276 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 21:20:18.018180 6276 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w8vkt_openshift-ovn-kubernetes(2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:23Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.369273 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.369343 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.369362 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.369387 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.369408 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:23Z","lastTransitionTime":"2026-02-02T21:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.373276 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b25d791-42e1-4e08-b7da-41803cc40f4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e2a17d3ade1aa229b2729d66fe953acc746673549e0d929076a3bd18839833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-snjrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:23Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.386798 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fb5d89e-c901-4857-a5e8-b4d8e3701714\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d85613f300f8d008d919038b40efff3fb67e228e25959cbfd2424640c6bc7a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460e055777327e1734238bf51929751678ea5a757b00a344daa31c8c95e4c463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c292754c81dd4929f8e599e8ec352fd5e90431c60bb741c070a10ffa91fad72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3685c506f77d3807711a442d68da94932064737cfdc5cb74557da135906e24f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:23Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.399601 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:23Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.419068 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.419178 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.419193 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:20:23 crc kubenswrapper[4789]: E0202 21:20:23.419399 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:20:23 crc kubenswrapper[4789]: E0202 21:20:23.419523 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:20:23 crc kubenswrapper[4789]: E0202 21:20:23.419774 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.472890 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.472954 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.472977 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.473011 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.473033 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:23Z","lastTransitionTime":"2026-02-02T21:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.576705 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.576793 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.576836 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.576873 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.576897 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:23Z","lastTransitionTime":"2026-02-02T21:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.680354 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.680414 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.680431 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.680456 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.680478 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:23Z","lastTransitionTime":"2026-02-02T21:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.752859 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 04:36:40.454499502 +0000 UTC Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.783883 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.783960 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.783987 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.784014 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.784033 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:23Z","lastTransitionTime":"2026-02-02T21:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.887267 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.887326 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.887344 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.887368 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.887385 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:23Z","lastTransitionTime":"2026-02-02T21:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.991517 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.991587 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.991640 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.991671 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:23 crc kubenswrapper[4789]: I0202 21:20:23.991692 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:23Z","lastTransitionTime":"2026-02-02T21:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.095667 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.095734 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.095757 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.095787 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.095810 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:24Z","lastTransitionTime":"2026-02-02T21:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.200291 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.200350 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.200369 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.200395 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.200416 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:24Z","lastTransitionTime":"2026-02-02T21:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.303143 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.303203 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.303223 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.303247 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.303265 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:24Z","lastTransitionTime":"2026-02-02T21:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.405848 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.405917 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.405936 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.405969 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.405988 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:24Z","lastTransitionTime":"2026-02-02T21:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.419372 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:20:24 crc kubenswrapper[4789]: E0202 21:20:24.419566 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.508666 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.508705 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.508714 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.508728 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.508738 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:24Z","lastTransitionTime":"2026-02-02T21:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.612326 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.612399 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.612422 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.612452 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.612474 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:24Z","lastTransitionTime":"2026-02-02T21:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.716450 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.716510 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.716527 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.716550 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.716569 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:24Z","lastTransitionTime":"2026-02-02T21:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.753993 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 02:38:32.169602374 +0000 UTC Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.768580 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2dc26662-64d3-47f0-9e0d-d340760ca348-metrics-certs\") pod \"network-metrics-daemon-vjbpg\" (UID: \"2dc26662-64d3-47f0-9e0d-d340760ca348\") " pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:20:24 crc kubenswrapper[4789]: E0202 21:20:24.769020 4789 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 21:20:24 crc kubenswrapper[4789]: E0202 21:20:24.769113 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2dc26662-64d3-47f0-9e0d-d340760ca348-metrics-certs podName:2dc26662-64d3-47f0-9e0d-d340760ca348 nodeName:}" failed. No retries permitted until 2026-02-02 21:20:32.76909144 +0000 UTC m=+53.064116499 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2dc26662-64d3-47f0-9e0d-d340760ca348-metrics-certs") pod "network-metrics-daemon-vjbpg" (UID: "2dc26662-64d3-47f0-9e0d-d340760ca348") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.819866 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.819919 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.819935 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.819957 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.819973 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:24Z","lastTransitionTime":"2026-02-02T21:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.923096 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.923167 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.923190 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.923220 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:24 crc kubenswrapper[4789]: I0202 21:20:24.923241 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:24Z","lastTransitionTime":"2026-02-02T21:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.026559 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.026668 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.026688 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.026714 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.026731 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:25Z","lastTransitionTime":"2026-02-02T21:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.130198 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.130258 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.130275 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.130299 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.130316 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:25Z","lastTransitionTime":"2026-02-02T21:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.233858 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.233963 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.233986 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.234049 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.234067 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:25Z","lastTransitionTime":"2026-02-02T21:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.338211 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.338274 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.338294 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.338320 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.338340 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:25Z","lastTransitionTime":"2026-02-02T21:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.419372 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.419412 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.419372 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:20:25 crc kubenswrapper[4789]: E0202 21:20:25.419568 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:20:25 crc kubenswrapper[4789]: E0202 21:20:25.419686 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:20:25 crc kubenswrapper[4789]: E0202 21:20:25.419750 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.442173 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.442246 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.442269 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.442298 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.442320 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:25Z","lastTransitionTime":"2026-02-02T21:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.545728 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.545789 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.545810 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.545836 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.545854 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:25Z","lastTransitionTime":"2026-02-02T21:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.648854 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.648922 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.648948 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.648977 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.648999 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:25Z","lastTransitionTime":"2026-02-02T21:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.752617 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.752705 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.752732 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.752762 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.752782 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:25Z","lastTransitionTime":"2026-02-02T21:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.755054 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 01:18:58.074461154 +0000 UTC Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.856882 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.856944 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.856964 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.856992 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.857010 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:25Z","lastTransitionTime":"2026-02-02T21:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.960464 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.960514 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.960531 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.960636 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:25 crc kubenswrapper[4789]: I0202 21:20:25.960656 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:25Z","lastTransitionTime":"2026-02-02T21:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.064410 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.064453 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.064464 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.064481 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.064493 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:26Z","lastTransitionTime":"2026-02-02T21:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.167926 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.167975 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.167991 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.168015 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.168031 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:26Z","lastTransitionTime":"2026-02-02T21:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.272124 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.272265 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.272290 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.272402 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.272438 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:26Z","lastTransitionTime":"2026-02-02T21:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.375975 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.376067 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.376105 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.376135 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.376153 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:26Z","lastTransitionTime":"2026-02-02T21:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.419412 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg"
Feb 02 21:20:26 crc kubenswrapper[4789]: E0202 21:20:26.419640 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348"
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.479472 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.479521 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.479540 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.479563 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.479607 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:26Z","lastTransitionTime":"2026-02-02T21:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.583152 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.583227 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.583248 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.583271 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.583289 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:26Z","lastTransitionTime":"2026-02-02T21:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.691011 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.691066 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.691084 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.691108 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.691127 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:26Z","lastTransitionTime":"2026-02-02T21:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.755797 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 03:40:29.374512319 +0000 UTC
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.793965 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.794032 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.794058 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.794089 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.794111 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:26Z","lastTransitionTime":"2026-02-02T21:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.898561 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.898665 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.898684 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.898713 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:26 crc kubenswrapper[4789]: I0202 21:20:26.898732 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:26Z","lastTransitionTime":"2026-02-02T21:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.002266 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.002333 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.002353 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.002377 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.002394 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:27Z","lastTransitionTime":"2026-02-02T21:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.109246 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.109296 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.109326 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.109348 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.109364 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:27Z","lastTransitionTime":"2026-02-02T21:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.212374 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.212432 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.212449 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.212473 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.212490 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:27Z","lastTransitionTime":"2026-02-02T21:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.315551 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.315637 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.315656 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.315679 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.315695 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:27Z","lastTransitionTime":"2026-02-02T21:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.418392 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.418444 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.418462 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.418477 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.418522 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 21:20:27 crc kubenswrapper[4789]: E0202 21:20:27.418704 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.418488 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.418785 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.418795 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:27Z","lastTransitionTime":"2026-02-02T21:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:27 crc kubenswrapper[4789]: E0202 21:20:27.418909 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 21:20:27 crc kubenswrapper[4789]: E0202 21:20:27.419017 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.521477 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.521542 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.521560 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.521585 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.521637 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:27Z","lastTransitionTime":"2026-02-02T21:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.631743 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.632150 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.632297 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.632426 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.632557 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:27Z","lastTransitionTime":"2026-02-02T21:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.638721 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.638772 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.638788 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.638808 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.638825 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:27Z","lastTransitionTime":"2026-02-02T21:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:27 crc kubenswrapper[4789]: E0202 21:20:27.659391 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"007d0037-9447-42ea-b3a4-6e1f0d669307\\\",\\\"systemUUID\\\":\\\"53ecbfdd-0b43-4d74-98ca-c7bcbc951d86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:27Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.664693 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.664766 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.664784 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.664810 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.664827 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:27Z","lastTransitionTime":"2026-02-02T21:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:27 crc kubenswrapper[4789]: E0202 21:20:27.686272 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"007d0037-9447-42ea-b3a4-6e1f0d669307\\\",\\\"systemUUID\\\":\\\"53ecbfdd-0b43-4d74-98ca-c7bcbc951d86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:27Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.691125 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.691185 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.691205 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.691232 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.691252 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:27Z","lastTransitionTime":"2026-02-02T21:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:27 crc kubenswrapper[4789]: E0202 21:20:27.711474 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"007d0037-9447-42ea-b3a4-6e1f0d669307\\\",\\\"systemUUID\\\":\\\"53ecbfdd-0b43-4d74-98ca-c7bcbc951d86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:27Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.716435 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.716504 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.716524 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.716549 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.716568 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:27Z","lastTransitionTime":"2026-02-02T21:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:27 crc kubenswrapper[4789]: E0202 21:20:27.739016 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"007d0037-9447-42ea-b3a4-6e1f0d669307\\\",\\\"systemUUID\\\":\\\"53ecbfdd-0b43-4d74-98ca-c7bcbc951d86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:27Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.743976 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.744037 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.744057 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.744081 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.744099 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:27Z","lastTransitionTime":"2026-02-02T21:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.756398 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 08:51:51.503876538 +0000 UTC Feb 02 21:20:27 crc kubenswrapper[4789]: E0202 21:20:27.763252 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"007d0037-9447-42ea-b3a4-6e1f0d669307\\\",\\\"systemUUID\\\":\\\"53ecbfdd-0b43-4d74-98ca-c7bcbc951d86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:27Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:27 crc kubenswrapper[4789]: E0202 21:20:27.763656 4789 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.765757 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.765800 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.765819 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.765840 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.765857 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:27Z","lastTransitionTime":"2026-02-02T21:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.869103 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.869273 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.869300 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.869363 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.869385 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:27Z","lastTransitionTime":"2026-02-02T21:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.972096 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.972135 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.972144 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.972158 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:27 crc kubenswrapper[4789]: I0202 21:20:27.972167 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:27Z","lastTransitionTime":"2026-02-02T21:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.075506 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.075559 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.075568 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.075594 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.075604 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:28Z","lastTransitionTime":"2026-02-02T21:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.178432 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.178471 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.178480 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.178496 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.178506 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:28Z","lastTransitionTime":"2026-02-02T21:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.282651 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.282729 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.282746 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.282773 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.282790 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:28Z","lastTransitionTime":"2026-02-02T21:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.385869 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.385963 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.385983 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.386007 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.386026 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:28Z","lastTransitionTime":"2026-02-02T21:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.418810 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:20:28 crc kubenswrapper[4789]: E0202 21:20:28.419054 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.489102 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.489158 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.489172 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.489194 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.489206 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:28Z","lastTransitionTime":"2026-02-02T21:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.593215 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.593283 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.593300 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.593327 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.593346 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:28Z","lastTransitionTime":"2026-02-02T21:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.696892 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.696978 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.697004 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.697040 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.697078 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:28Z","lastTransitionTime":"2026-02-02T21:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.756678 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 06:42:17.72671322 +0000 UTC Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.800544 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.800641 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.800661 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.800689 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.800709 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:28Z","lastTransitionTime":"2026-02-02T21:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.904461 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.904520 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.904540 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.904564 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:28 crc kubenswrapper[4789]: I0202 21:20:28.904586 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:28Z","lastTransitionTime":"2026-02-02T21:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.007381 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.007439 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.007456 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.007481 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.007507 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:29Z","lastTransitionTime":"2026-02-02T21:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.111171 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.111257 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.111283 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.111314 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.111339 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:29Z","lastTransitionTime":"2026-02-02T21:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.217934 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.218558 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.218778 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.219071 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.219263 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:29Z","lastTransitionTime":"2026-02-02T21:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.321787 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.321848 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.321866 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.321889 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.321907 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:29Z","lastTransitionTime":"2026-02-02T21:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.418922 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:20:29 crc kubenswrapper[4789]: E0202 21:20:29.419105 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.419312 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:20:29 crc kubenswrapper[4789]: E0202 21:20:29.419418 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.419491 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:20:29 crc kubenswrapper[4789]: E0202 21:20:29.419719 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.424724 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.424805 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.424842 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.424865 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.424882 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:29Z","lastTransitionTime":"2026-02-02T21:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.527551 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.527618 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.527629 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.527644 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.527653 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:29Z","lastTransitionTime":"2026-02-02T21:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.550187 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.559384 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.568690 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:29Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.580225 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:29Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.593390 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:29Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.605572 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1716ba11e1b21eb68642e6935312760c389a669a007822290fbe72573bfaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:29Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.620537 4789 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39528981-2c85-43f3-8fa0-bfae5c3334cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ae303f0f4381207f4dd4a443e366d6e3de2014e9bc69aa644e98a76b239868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277fe88585ee146931597a14fe049a3d69197c94e0d84f5dfb334b08cd685723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d49gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:29Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.630547 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.630593 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.630603 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.630616 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.630627 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:29Z","lastTransitionTime":"2026-02-02T21:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.633104 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vjbpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dc26662-64d3-47f0-9e0d-d340760ca348\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vjbpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:29Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.650617 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:29Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.668031 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:29Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.689695 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:29Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.707566 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:29Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.723040 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b25d791-42e1-4e08-b7da-41803cc40f4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e2a17d3ade1aa229b2729d66fe953acc746673549e0d929076a3bd18839833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-snjrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:29Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.733398 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.733444 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.733461 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.733488 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.733509 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:29Z","lastTransitionTime":"2026-02-02T21:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.745411 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66a4db3799101ccca8a89d6bfd2c9d36940b8710ee3d256e47cd61cfe6ac7c07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:29Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.757142 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 03:46:11.694056221 +0000 UTC Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.766954 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff0380f801c6909a82bfc8a765cf7a393d7c2f7cc809611ded737a22f448a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:29Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.798142 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca031da70bd23ada7cfba11c78c5962555189be4b845b36d92ee2737df4e8bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca031da70bd23ada7cfba11c78c5962555189be4b845b36d92ee2737df4e8bb7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T21:20:18Z\\\",\\\"message\\\":\\\"reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 21:20:18.018667 6276 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 21:20:18.018180 6276 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w8vkt_openshift-ovn-kubernetes(2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:29Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.814695 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fb5d89e-c901-4857-a5e8-b4d8e3701714\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d85613f300f8d008d919038b40efff3fb67e228e25959cbfd2424640c6bc7a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460e055777327e1734238bf51929751678ea5a757b00a344daa31c8c95e4c463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c292754c81dd4929f8e599e8ec352fd5e90431c60bb741c070a10ffa91fad72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3685c506f77d3807711a442d68da94932064737cfdc5cb74557da135906e24f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:29Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.837189 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.837234 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.837247 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.837264 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.837275 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:29Z","lastTransitionTime":"2026-02-02T21:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.838522 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"moun
tPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:29Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.940336 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.940476 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.940502 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.940536 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:29 crc kubenswrapper[4789]: I0202 21:20:29.940555 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:29Z","lastTransitionTime":"2026-02-02T21:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.043995 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.044063 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.044087 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.044112 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.044129 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:30Z","lastTransitionTime":"2026-02-02T21:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.147295 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.147359 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.147376 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.147401 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.147418 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:30Z","lastTransitionTime":"2026-02-02T21:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.250871 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.250933 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.250951 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.250976 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.250994 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:30Z","lastTransitionTime":"2026-02-02T21:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.354279 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.354393 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.354412 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.354440 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.354460 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:30Z","lastTransitionTime":"2026-02-02T21:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.418946 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:20:30 crc kubenswrapper[4789]: E0202 21:20:30.419200 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.443995 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fb5d89e-c901-4857-a5e8-b4d8e3701714\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d85613f300f8d008d919038b40efff3fb67e228e25959cbfd2424640c6bc7a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460e055777327e1734238bf51929751678ea5a757b00a344daa31c8c95e4c463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c292754c81dd4929f8e599e8ec352fd5e90431c60bb741c070a10ffa91fad72\\\",\\\"image\\\":\\\"quay.io/crcont/ope
nshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3685c506f77d3807711a442d68da94932064737cfdc5cb74557da135906e24f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:30Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.457058 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.457141 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.457165 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.457197 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.457235 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:30Z","lastTransitionTime":"2026-02-02T21:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.468034 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:30Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.488928 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:30Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.507912 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af0ef1e0-7fcc-4fa6-8463-a0f5a9145c2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3c3d527f77f26e052c4ef9c0577938dc23c802918e742b9fb9020cb6ba705f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eface13a61e8f9d1e8e9512c78a4c70973bfad708c3cdea7f7251f6fa408a59f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19
:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed17431bc8880523ea84349c68ea56e389033b550390d88e60373f663d1491f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d78edf2f7e647edfcc306a3feff71c983907077ad05a684af9fa2990deaf3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d78edf2f7e647edfcc306a3feff71c983907077ad05a684af9fa2990deaf3b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:30Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.533798 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:30Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.551464 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:30Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.559864 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.559921 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.559940 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.559964 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.559981 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:30Z","lastTransitionTime":"2026-02-02T21:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.573135 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:30Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.591566 4789 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1716ba11e1b21eb68642e6935312760c389a669a007822290fbe72573bfaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:30Z is after 2025-08-24T17:21:41Z" Feb 02 
21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.609182 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39528981-2c85-43f3-8fa0-bfae5c3334cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ae303f0f4381207f4dd4a443e366d6e3de2014e9bc69aa644e98a76b239868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277fe88585ee146931597a14fe049a3d69197c94e0d84f5dfb334b08cd685723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d49gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:30Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.626262 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vjbpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dc26662-64d3-47f0-9e0d-d340760ca348\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vjbpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:30Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.645010 4789 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:30Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.662337 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.662419 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.662439 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.662464 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.662481 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:30Z","lastTransitionTime":"2026-02-02T21:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.665817 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:30Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.686569 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:30Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.720040 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca031da70bd23ada7cfba11c78c5962555189be4
b845b36d92ee2737df4e8bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca031da70bd23ada7cfba11c78c5962555189be4b845b36d92ee2737df4e8bb7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T21:20:18Z\\\",\\\"message\\\":\\\"reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 21:20:18.018667 6276 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 21:20:18.018180 6276 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w8vkt_openshift-ovn-kubernetes(2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:30Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.738029 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b25d791-42e1-4e08-b7da-41803cc40f4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e2a17d3ade1aa229b2729d66fe953acc746673549e0d929076a3bd18839833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-snjrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:30Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.757850 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 18:26:33.346433018 +0000 UTC Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.759847 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66a4db3799101ccca8a89d6bfd2c9d36940b8710ee3d256e47cd61cfe6ac7c07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:30Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.765619 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.765667 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.765685 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.765709 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.765726 4789 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:30Z","lastTransitionTime":"2026-02-02T21:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.784312 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff0380f801c6909a82bfc8a765cf7a393d7c2f7cc809611ded737a22f448a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"co
ntainerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:30Z is after 2025-08-24T17:21:41Z"
Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.869565 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.869659 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.869675 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.869699 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.869719 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:30Z","lastTransitionTime":"2026-02-02T21:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.973324 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.973900 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.974085 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.974254 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:30 crc kubenswrapper[4789]: I0202 21:20:30.974411 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:30Z","lastTransitionTime":"2026-02-02T21:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.077114 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.077185 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.077207 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.077234 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.077253 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:31Z","lastTransitionTime":"2026-02-02T21:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.179684 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.180064 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.180208 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.180339 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.180485 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:31Z","lastTransitionTime":"2026-02-02T21:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.287225 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.287358 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.287402 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.287442 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.287460 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:31Z","lastTransitionTime":"2026-02-02T21:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
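Every status patch above fails for the same reason: the pod.network-node-identity.openshift.io webhook on 127.0.0.1:9743 is serving a certificate that expired on 2025-08-24, while the node clock reads 2026-02-02. A minimal Go sketch (not part of kubelet; the endpoint default is taken from the log, everything else is an assumption) that prints the validity window of whatever certificate an endpoint serves, matching what the x509 error reports:

// certcheck.go - a minimal sketch (not part of kubelet) that prints the
// validity window of the certificate served on a TLS endpoint, e.g. the
// webhook at 127.0.0.1:9743 that this log shows as expired.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	addr := "127.0.0.1:9743" // assumed default, taken from the log above
	if len(os.Args) > 1 {
		addr = os.Args[1]
	}
	// Skip chain verification on purpose: the point is to inspect the
	// certificate even though it no longer verifies.
	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial %s: %v", addr, err)
	}
	defer conn.Close()

	leaf := conn.ConnectionState().PeerCertificates[0]
	now := time.Now().UTC()
	fmt.Printf("subject:   %s\n", leaf.Subject)
	fmt.Printf("notBefore: %s\n", leaf.NotBefore.UTC().Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", leaf.NotAfter.UTC().Format(time.RFC3339))
	fmt.Printf("expired:   %v (now %s)\n", now.After(leaf.NotAfter), now.Format(time.RFC3339))
}

Run as "go run certcheck.go 127.0.0.1:9743"; against the endpoint in this log it would report notAfter 2025-08-24T17:21:41Z and expired: true, the same window the tls error quotes.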
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.391214 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.391272 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.391289 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.391313 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.391330 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:31Z","lastTransitionTime":"2026-02-02T21:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.418525 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 21:20:31 crc kubenswrapper[4789]: E0202 21:20:31.418852 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.418923 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.420836 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 21:20:31 crc kubenswrapper[4789]: E0202 21:20:31.421044 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 21:20:31 crc kubenswrapper[4789]: E0202 21:20:31.421154 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.495077 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.495140 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.495158 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.495183 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.495204 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:31Z","lastTransitionTime":"2026-02-02T21:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.599191 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.599292 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.599315 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.599364 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.599383 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:31Z","lastTransitionTime":"2026-02-02T21:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
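The recurring KubeletNotReady condition carries its own explanation: there is no CNI config file in /etc/kubernetes/cni/net.d/, which is expected while ovnkube-controller (shown crash-looping earlier in this log) has not yet written one. A short Go sketch of the check that message implies; the .conf/.conflist/.json extension set follows the libcni convention and is an assumption, not a quote of kubelet's code:

// cnicheck.go - a minimal sketch of the probe behind "no CNI configuration
// file in /etc/kubernetes/cni/net.d/": list candidate network configs in
// the conf dir and report when none exist.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // path taken from the log above
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", confDir, err)
		return
	}
	var configs []string
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			configs = append(configs, e.Name())
		}
	}
	if len(configs) == 0 {
		// This is the state the log shows: the network plugin (ovn-kubernetes
		// here) has not written its config yet, so the runtime network stays
		// NotReady and the node keeps reporting KubeletNotReady.
		fmt.Println("no CNI configuration file found - network plugin not started?")
		return
	}
	for _, c := range configs {
		fmt.Println(filepath.Join(confDir, c))
	}
}

On a healthy ovn-kubernetes node the directory would contain something like 10-ovn-kubernetes.conf; here it stays empty until the crash-looping controller recovers, which is why the same condition repeats every ~100ms below.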
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.702802 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.702871 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.702892 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.702917 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.702940 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:31Z","lastTransitionTime":"2026-02-02T21:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.758476 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 13:48:24.644079359 +0000 UTC
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.806094 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.806149 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.806174 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.806203 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.806225 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:31Z","lastTransitionTime":"2026-02-02T21:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.909041 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.909098 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.909121 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.909149 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:31 crc kubenswrapper[4789]: I0202 21:20:31.909170 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:31Z","lastTransitionTime":"2026-02-02T21:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.012575 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.012667 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.012687 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.012717 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.012741 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:32Z","lastTransitionTime":"2026-02-02T21:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.114962 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.114992 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.115001 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.115012 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.115021 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:32Z","lastTransitionTime":"2026-02-02T21:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
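The certificate_manager.go:356 lines are not a bug, even though consecutive passes print different rotation deadlines for the same 2026-02-24 expiry (2025-11-14 earlier in this log, 2026-01-01 here): client-go's certificate manager re-draws a jittered deadline inside the tail of the validity window each time it evaluates rotation. A sketch under that assumption; the 70-90% band below approximates client-go's behaviour rather than quoting it:

// rotationjitter.go - a sketch of why the log shows a different "rotation
// deadline" on consecutive lines for the same certificate: the deadline is
// re-drawn at random inside the tail of the validity window on each pass.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point between 70% and 90% of the
// certificate's lifetime, measured from notBefore (approximation of the
// client-go jitter, not its exact constants).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	fraction := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(float64(total) * fraction))
}

func main() {
	// Validity window loosely modelled on the log's kubelet-serving cert,
	// which expires 2026-02-24 05:53:03 UTC; the one-year lifetime is an
	// assumption for illustration.
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	notBefore := notAfter.AddDate(-1, 0, 0)

	for i := 0; i < 3; i++ {
		fmt.Println(rotationDeadline(notBefore, notAfter).Format(time.RFC3339))
	}
}

Each run prints three different deadlines in the same late-2025/early-2026 band, consistent with the two deadlines this log records a second apart. Both drawn deadlines are already in the past here, so rotation is due; it cannot complete while the node is NotReady, which is why the line keeps repeating.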
Has your network provider started?"} Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.217236 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.217332 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.217355 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.217382 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.217403 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:32Z","lastTransitionTime":"2026-02-02T21:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.321542 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.321625 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.321647 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.321678 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.321703 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:32Z","lastTransitionTime":"2026-02-02T21:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.419956 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:20:32 crc kubenswrapper[4789]: E0202 21:20:32.420135 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.424338 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.424391 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.424413 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.424438 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.424459 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:32Z","lastTransitionTime":"2026-02-02T21:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.527677 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.527751 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.527770 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.527848 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.527873 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:32Z","lastTransitionTime":"2026-02-02T21:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.632707 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.633109 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.633289 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.633491 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.633788 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:32Z","lastTransitionTime":"2026-02-02T21:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.736567 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.736944 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.737076 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.737219 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.737391 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:32Z","lastTransitionTime":"2026-02-02T21:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.759614 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 04:44:12.189556484 +0000 UTC
Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.840950 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.841375 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.841679 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.841931 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.842133 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:32Z","lastTransitionTime":"2026-02-02T21:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.854960 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2dc26662-64d3-47f0-9e0d-d340760ca348-metrics-certs\") pod \"network-metrics-daemon-vjbpg\" (UID: \"2dc26662-64d3-47f0-9e0d-d340760ca348\") " pod="openshift-multus/network-metrics-daemon-vjbpg"
Feb 02 21:20:32 crc kubenswrapper[4789]: E0202 21:20:32.855144 4789 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 21:20:32 crc kubenswrapper[4789]: E0202 21:20:32.855246 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2dc26662-64d3-47f0-9e0d-d340760ca348-metrics-certs podName:2dc26662-64d3-47f0-9e0d-d340760ca348 nodeName:}" failed. No retries permitted until 2026-02-02 21:20:48.855219778 +0000 UTC m=+69.150244827 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2dc26662-64d3-47f0-9e0d-d340760ca348-metrics-certs") pod "network-metrics-daemon-vjbpg" (UID: "2dc26662-64d3-47f0-9e0d-d340760ca348") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.944942 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.945002 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.945020 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.945043 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:32 crc kubenswrapper[4789]: I0202 21:20:32.945060 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:32Z","lastTransitionTime":"2026-02-02T21:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.048739 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.048790 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.048807 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.048832 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.048849 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:33Z","lastTransitionTime":"2026-02-02T21:20:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.151772 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.151841 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.151860 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.151886 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.151904 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:33Z","lastTransitionTime":"2026-02-02T21:20:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.157204 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 21:20:33 crc kubenswrapper[4789]: E0202 21:20:33.157363 4789 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 02 21:20:33 crc kubenswrapper[4789]: E0202 21:20:33.157485 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 21:21:05.157446877 +0000 UTC m=+85.452471966 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.255518 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.255621 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.255647 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.255674 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.255690 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:33Z","lastTransitionTime":"2026-02-02T21:20:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.257921 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " 
Feb 02 21:20:33 crc kubenswrapper[4789]: E0202 21:20:33.258078 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:21:05.25804627 +0000 UTC m=+85.553071329 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.258129 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.258204 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.258255 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 21:20:33 crc kubenswrapper[4789]: E0202 21:20:33.258421 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 02 21:20:33 crc kubenswrapper[4789]: E0202 21:20:33.258442 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 02 21:20:33 crc kubenswrapper[4789]: E0202 21:20:33.258462 4789 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 21:20:33 crc kubenswrapper[4789]: E0202 21:20:33.258516 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 21:21:05.258499943 +0000 UTC m=+85.553525002 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 21:20:33 crc kubenswrapper[4789]: E0202 21:20:33.258988 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 02 21:20:33 crc kubenswrapper[4789]: E0202 21:20:33.259043 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 02 21:20:33 crc kubenswrapper[4789]: E0202 21:20:33.259066 4789 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 21:20:33 crc kubenswrapper[4789]: E0202 21:20:33.258992 4789 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 02 21:20:33 crc kubenswrapper[4789]: E0202 21:20:33.259172 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 21:21:05.259145141 +0000 UTC m=+85.554170200 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 21:20:33 crc kubenswrapper[4789]: E0202 21:20:33.259212 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 21:21:05.259193613 +0000 UTC m=+85.554218672 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.358954 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.359026 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.359051 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.359079 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.359103 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:33Z","lastTransitionTime":"2026-02-02T21:20:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.418883 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.418935 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.418887 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 21:20:33 crc kubenswrapper[4789]: E0202 21:20:33.419107 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 21:20:33 crc kubenswrapper[4789]: E0202 21:20:33.419235 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 21:20:33 crc kubenswrapper[4789]: E0202 21:20:33.419421 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.461931 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.461982 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.461998 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.462023 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.462047 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:33Z","lastTransitionTime":"2026-02-02T21:20:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.566569 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.566979 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.567187 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.567348 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.567533 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:33Z","lastTransitionTime":"2026-02-02T21:20:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.670433 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.670496 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.670514 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.670576 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.670625 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:33Z","lastTransitionTime":"2026-02-02T21:20:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.734031 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.756918 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fb5d89e-c901-4857-a5e8-b4d8e3701714\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d85613f300f8d008d919038b40efff3fb67e228e25959cbfd2424640c6bc7a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460e055777327e1734238bf51929751678ea5a757b00a344daa31c8c95e4c463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126b
d791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c292754c81dd4929f8e599e8ec352fd5e90431c60bb741c070a10ffa91fad72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3685c506f77d3807711a442d68da94932064737cfdc5cb74557da135906e24f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:33Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.760016 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 15:41:18.809685962 +0000 UTC Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.773812 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.773877 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.773899 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.773931 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.773954 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:33Z","lastTransitionTime":"2026-02-02T21:20:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.779102 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubel
et\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:33Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.798909 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:33Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.817550 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af0ef1e0-7fcc-4fa6-8463-a0f5a9145c2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3c3d527f77f26e052c4ef9c0577938dc23c802918e742b9fb9020cb6ba705f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eface13a61e8f9d1e8e9512c78a4c70973bfad708c3cdea7f7251f6fa408a59f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19
:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed17431bc8880523ea84349c68ea56e389033b550390d88e60373f663d1491f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d78edf2f7e647edfcc306a3feff71c983907077ad05a684af9fa2990deaf3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d78edf2f7e647edfcc306a3feff71c983907077ad05a684af9fa2990deaf3b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:33Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.837985 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:33Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.853812 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:33Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.872972 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:33Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.877903 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.877955 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.877977 4789 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.878009 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.878034 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:33Z","lastTransitionTime":"2026-02-02T21:20:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.891130 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1716ba11e1b21eb68642e6935312760c389a669a007822290fbe72573bfaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:33Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.912632 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39528981-2c85-43f3-8fa0-bfae5c3334cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ae303f0f4381207f4dd4a443e366d6e3de2014e9bc69aa644e98a76b239868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277fe88585ee146931597a14fe049a3d69197c94e0d84f5dfb334b08cd685723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02
T21:20:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d49gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:33Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.930365 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vjbpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dc26662-64d3-47f0-9e0d-d340760ca348\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vjbpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:33Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.947690 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:33Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.967463 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:33Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.981770 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.982123 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.982688 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.983342 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.983898 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:33Z","lastTransitionTime":"2026-02-02T21:20:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:33 crc kubenswrapper[4789]: I0202 21:20:33.988903 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:33Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.019582 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca031da70bd23ada7cfba11c78c5962555189be4b845b36d92ee2737df4e8bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca031da70bd23ada7cfba11c78c5962555189be4b845b36d92ee2737df4e8bb7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T21:20:18Z\\\",\\\"message\\\":\\\"reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 21:20:18.018667 6276 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 21:20:18.018180 6276 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w8vkt_openshift-ovn-kubernetes(2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:34Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.035569 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b25d791-42e1-4e08-b7da-41803cc40f4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e2a17d3ade1aa229b2729d66fe953acc746673549e0d929076a3bd18839833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-snjrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:34Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.057558 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-
apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66a4db3799101ccca8a89d6bfd2c9d36940b8710ee3d256e47cd61cfe6ac7c07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:34Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.083328 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff0380f801c6909a82bfc8a765cf7a393d7c2f7cc809611ded737a22f448a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:34Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.088023 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.088061 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:34 crc 
kubenswrapper[4789]: I0202 21:20:34.088078 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.088103 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.088120 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:34Z","lastTransitionTime":"2026-02-02T21:20:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.190636 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.190733 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.190766 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.190796 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.190816 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:34Z","lastTransitionTime":"2026-02-02T21:20:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.293509 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.293562 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.293610 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.293637 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.293655 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:34Z","lastTransitionTime":"2026-02-02T21:20:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
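
Note: the pod-status patch failure above is the root symptom in this excerpt: writes that pass through the network-node-identity admission webhook are rejected because the webhook's serving certificate expired on 2025-08-24 while the node clock reads 2026-02-02. A minimal Go sketch (not OpenShift code; only the address 127.0.0.1:9743 and the dates come from the log) that reproduces the kubelet's-eye view of the check:

    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        // Probe the webhook endpoint named in the log. InsecureSkipVerify is
        // deliberate: we want to read the certificate even though it is expired.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        defer conn.Close()

        cert := conn.ConnectionState().PeerCertificates[0]
        fmt.Println("notBefore:", cert.NotBefore.Format(time.RFC3339))
        fmt.Println("notAfter: ", cert.NotAfter.Format(time.RFC3339))
        // With the clock at 2026-02-02 and notAfter at 2025-08-24 this prints
        // true, matching the "certificate has expired" x509 error above.
        fmt.Println("expired:  ", time.Now().After(cert.NotAfter))
    }
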
Has your network provider started?"} Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.396293 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.396647 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.396787 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.396976 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.397111 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:34Z","lastTransitionTime":"2026-02-02T21:20:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.419299 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:20:34 crc kubenswrapper[4789]: E0202 21:20:34.419487 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.500129 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.500187 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.500206 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.500230 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.500251 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:34Z","lastTransitionTime":"2026-02-02T21:20:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.603453 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.603509 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.603525 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.603549 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.603566 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:34Z","lastTransitionTime":"2026-02-02T21:20:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.706840 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.706907 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.706927 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.706965 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.706983 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:34Z","lastTransitionTime":"2026-02-02T21:20:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
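
Note: the setters.go entries repeat because the kubelet keeps re-recording the node's conditions while its status updates are being rejected. The condition embedded in each entry is a standard Kubernetes node condition; a small Go sketch that parses one of the payloads above (the struct is abridged for illustration; the real type is v1.NodeCondition in k8s.io/api/core/v1):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // NodeCondition mirrors only the fields visible in the log entries above.
    type NodeCondition struct {
        Type               string `json:"type"`
        Status             string `json:"status"`
        LastHeartbeatTime  string `json:"lastHeartbeatTime"`
        LastTransitionTime string `json:"lastTransitionTime"`
        Reason             string `json:"reason"`
        Message            string `json:"message"`
    }

    func main() {
        // Condition payload copied verbatim from one of the entries above.
        raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:34Z","lastTransitionTime":"2026-02-02T21:20:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`

        var c NodeCondition
        if err := json.Unmarshal([]byte(raw), &c); err != nil {
            panic(err)
        }
        fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
    }
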
Has your network provider started?"} Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.761480 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 09:01:37.541811734 +0000 UTC Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.810408 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.810442 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.810451 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.810466 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.810475 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:34Z","lastTransitionTime":"2026-02-02T21:20:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.913235 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.913310 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.913329 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.913352 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:34 crc kubenswrapper[4789]: I0202 21:20:34.913370 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:34Z","lastTransitionTime":"2026-02-02T21:20:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.016451 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.016494 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.016503 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.016518 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.016527 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:35Z","lastTransitionTime":"2026-02-02T21:20:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.120372 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.120427 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.120446 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.120474 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.120491 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:35Z","lastTransitionTime":"2026-02-02T21:20:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.223324 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.223396 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.223417 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.223440 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.223457 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:35Z","lastTransitionTime":"2026-02-02T21:20:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
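
Note: the NotReady message names the directory the runtime is watching, /etc/kubernetes/cni/net.d/ (this node's configured conf dir; the common default elsewhere is /etc/cni/net.d), and the condition clears once a network config file appears there. A loose Go illustration of that readiness test (not the actual ocicni code; the extensions are the ones the CNI config loader conventionally accepts):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Directory taken from the log message above.
        confDir := "/etc/kubernetes/cni/net.d"

        entries, err := os.ReadDir(confDir)
        if err != nil {
            fmt.Println("cannot read CNI conf dir:", err)
            return
        }
        found := false
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json": // accepted config extensions
                fmt.Println("CNI config present:", e.Name())
                found = true
            }
        }
        if !found {
            // This is the state the kubelet above keeps reporting.
            fmt.Println("no CNI configuration file in", confDir)
        }
    }
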
Has your network provider started?"} Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.326189 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.326803 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.326834 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.326863 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.326883 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:35Z","lastTransitionTime":"2026-02-02T21:20:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.418786 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.418841 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.418808 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:20:35 crc kubenswrapper[4789]: E0202 21:20:35.418934 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:20:35 crc kubenswrapper[4789]: E0202 21:20:35.419136 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:20:35 crc kubenswrapper[4789]: E0202 21:20:35.419344 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.429689 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.429746 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.429765 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.429788 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.429806 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:35Z","lastTransitionTime":"2026-02-02T21:20:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.532621 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.532680 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.532698 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.532723 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.532740 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:35Z","lastTransitionTime":"2026-02-02T21:20:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.636294 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.636349 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.636365 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.636390 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.636405 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:35Z","lastTransitionTime":"2026-02-02T21:20:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.739395 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.739440 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.739474 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.739491 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.739503 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:35Z","lastTransitionTime":"2026-02-02T21:20:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.762184 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 14:32:32.076952611 +0000 UTC Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.842899 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.842981 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.843033 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.843056 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.843073 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:35Z","lastTransitionTime":"2026-02-02T21:20:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.945786 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.945833 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.945849 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.945873 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:35 crc kubenswrapper[4789]: I0202 21:20:35.945891 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:35Z","lastTransitionTime":"2026-02-02T21:20:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
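
Note: the certificate_manager.go lines are a separate thread from the webhook failure. The kubelet's serving certificate is still valid until 2026-02-24, but every logged "rotation deadline" in this excerpt (2025-11-21, 2025-12-05, 2026-01-01, 2026-01-18) already lies in the past, so the manager keeps re-evaluating. The deadline moves between entries because client-go picks it with jitter, at roughly 70 to 85 percent of the certificate's lifetime. A sketch of that computation (an approximation, not the canonical client-go code; notBefore is assumed, only notAfter comes from the log):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline approximates how client-go's certificate manager picks
    // a jittered rotation point: ~70% of the cert's lifetime plus up to ~20%
    // extra jitter. Illustrative only.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        jittered := time.Duration(float64(total) * 0.7 * (1 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        // notAfter is taken from the log; notBefore is an assumed issue date.
        notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
        notBefore := notAfter.AddDate(-1, 0, 0)

        for i := 0; i < 3; i++ {
            // Each call lands somewhere else, which is why the logged deadline
            // differs from one certificate_manager.go entry to the next.
            fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter).Format(time.RFC3339))
        }
    }
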
Has your network provider started?"} Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.048959 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.049016 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.049046 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.049072 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.049090 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:36Z","lastTransitionTime":"2026-02-02T21:20:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.151319 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.151365 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.151381 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.151403 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.151419 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:36Z","lastTransitionTime":"2026-02-02T21:20:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.254325 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.254387 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.254406 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.254428 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.254446 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:36Z","lastTransitionTime":"2026-02-02T21:20:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.357093 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.357150 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.357167 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.357236 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.357273 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:36Z","lastTransitionTime":"2026-02-02T21:20:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.418779 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:20:36 crc kubenswrapper[4789]: E0202 21:20:36.418979 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.460662 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.460742 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.460767 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.460797 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.460823 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:36Z","lastTransitionTime":"2026-02-02T21:20:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.564687 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.564747 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.564762 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.564782 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.564797 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:36Z","lastTransitionTime":"2026-02-02T21:20:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.667766 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.667838 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.667862 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.667894 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.667917 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:36Z","lastTransitionTime":"2026-02-02T21:20:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.763186 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 13:03:48.822655183 +0000 UTC Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.772425 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.772485 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.772508 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.772538 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.772560 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:36Z","lastTransitionTime":"2026-02-02T21:20:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.875527 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.875625 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.875644 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.875672 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.875690 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:36Z","lastTransitionTime":"2026-02-02T21:20:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.979031 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.979109 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.979134 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.979166 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:36 crc kubenswrapper[4789]: I0202 21:20:36.979190 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:36Z","lastTransitionTime":"2026-02-02T21:20:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.082548 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.082652 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.082671 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.082694 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.082712 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:37Z","lastTransitionTime":"2026-02-02T21:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.186091 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.186223 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.186240 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.186263 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.186280 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:37Z","lastTransitionTime":"2026-02-02T21:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.289004 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.289086 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.289105 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.289130 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.289148 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:37Z","lastTransitionTime":"2026-02-02T21:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.394323 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.394392 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.394410 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.394433 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.394451 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:37Z","lastTransitionTime":"2026-02-02T21:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.419114 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.419168 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.419217 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:20:37 crc kubenswrapper[4789]: E0202 21:20:37.419360 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:20:37 crc kubenswrapper[4789]: E0202 21:20:37.420302 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:20:37 crc kubenswrapper[4789]: E0202 21:20:37.420490 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.420940 4789 scope.go:117] "RemoveContainer" containerID="ca031da70bd23ada7cfba11c78c5962555189be4b845b36d92ee2737df4e8bb7" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.497939 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.497989 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.498007 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.498031 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.498048 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:37Z","lastTransitionTime":"2026-02-02T21:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.600770 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.600901 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.600927 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.600955 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.600978 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:37Z","lastTransitionTime":"2026-02-02T21:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.703367 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.703418 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.703430 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.703466 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.703512 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:37Z","lastTransitionTime":"2026-02-02T21:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.764775 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 11:56:32.127089808 +0000 UTC Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.807164 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.807216 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.807233 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.807255 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.807273 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:37Z","lastTransitionTime":"2026-02-02T21:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.833763 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.833829 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.833849 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.833873 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.833891 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:37Z","lastTransitionTime":"2026-02-02T21:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:37 crc kubenswrapper[4789]: E0202 21:20:37.853183 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"007d0037-9447-42ea-b3a4-6e1f0d669307\\\",\\\"systemUUID\\\":\\\"53ecbfdd-0b43-4d74-98ca-c7bcbc951d86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:37Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.867302 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.867335 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure"
Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.867346 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.867360 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.867371 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:37Z","lastTransitionTime":"2026-02-02T21:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:37 crc kubenswrapper[4789]: E0202 21:20:37.880574 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [node status patch identical to the previous attempt, elided] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:37Z is after 2025-08-24T17:21:41Z"
Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.884621 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.884652 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
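Every retry above fails at the same point: before the node-status patch can be admitted, the API server must POST it to the node.network-node-identity webhook, and the TLS handshake with 127.0.0.1:9743 is rejected because the webhook's serving certificate expired on 2025-08-24T17:21:41Z, well before the node's clock reading of 2026-02-02T21:20:37Z. The Go sketch below is illustration only, not kubelet or webhook code, and the certificate path is a placeholder; it shows how Go's crypto/x509 arrives at exactly this wording by comparing the verification time against the certificate's NotBefore/NotAfter window.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path; substitute the webhook's actual serving certificate.
	pemBytes, err := os.ReadFile("/path/to/webhook-serving-cert.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	now := time.Now()
	switch {
	case now.After(cert.NotAfter):
		// Matches the log: "current time ... is after 2025-08-24T17:21:41Z".
		fmt.Printf("x509: certificate has expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Printf("x509: certificate is not yet valid: current time %s is before %s\n",
			now.UTC().Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	default:
		fmt.Printf("certificate valid until %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
	}
}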
Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.884699 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.884713 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.884721 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:37Z","lastTransitionTime":"2026-02-02T21:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:37 crc kubenswrapper[4789]: E0202 21:20:37.899423 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [node status patch identical to the previous attempt, elided] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:37Z is after 2025-08-24T17:21:41Z"
Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.904207 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.904285 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
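The NodeNotReady condition that kubelet records between the failed patches is a separate problem from the webhook: Ready stays False because the container runtime reports NetworkReady=false until a CNI configuration appears in /etc/kubernetes/cni/net.d/, which on this cluster happens only once OVN-Kubernetes is up. A rough illustration of that readiness test, assumed logic rather than the actual CRI-O or kubelet source:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory named in the log message above.
	confDir := "/etc/kubernetes/cni/net.d"
	var files []string
	// Typical CNI config extensions; the exact set accepted by the runtime may differ.
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		m, err := filepath.Glob(filepath.Join(confDir, pat))
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		files = append(files, m...)
	}
	if len(files) == 0 {
		// Mirrors the condition message recorded by the kubelet.
		fmt.Printf("no CNI configuration file in %s. Has your network provider started?\n", confDir)
		return
	}
	fmt.Println("CNI configuration found:", files)
}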
Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.904309 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.904344 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.904369 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:37Z","lastTransitionTime":"2026-02-02T21:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:37 crc kubenswrapper[4789]: E0202 21:20:37.924865 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [node status patch identical to the previous attempt, elided] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:37Z is after 2025-08-24T17:21:41Z"
Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.929933 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.929969 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
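The payload elided from each retry is a strategic merge patch: the $setElementOrder/conditions key pins the ordering of the conditions array while each entry is matched and merged by its type field, and the remaining status fields (allocatable, capacity, images, nodeInfo) are carried in the same object. Below is a sketch of how a patch of that shape can be assembled, with values copied from the log; it is an illustration, not the kubelet's own construction code.

package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// condition mirrors the entries in the "conditions" array of the patch above.
type condition struct {
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Message            string `json:"message"`
	Reason             string `json:"reason"`
	Status             string `json:"status"`
	Type               string `json:"type"`
}

func main() {
	now := time.Date(2026, 2, 2, 21, 20, 37, 0, time.UTC).Format(time.RFC3339)
	patch := map[string]any{
		"status": map[string]any{
			// Strategic-merge directive: keep this ordering of the merged list.
			"$setElementOrder/conditions": []map[string]string{
				{"type": "MemoryPressure"}, {"type": "DiskPressure"},
				{"type": "PIDPressure"}, {"type": "Ready"},
			},
			// Only the Ready condition is shown; the real patch carries all four.
			"conditions": []condition{{
				LastHeartbeatTime:  now,
				LastTransitionTime: now,
				Message:            "container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?",
				Reason:             "KubeletNotReady",
				Status:             "False",
				Type:               "Ready",
			}},
		},
	}
	b, err := json.Marshal(patch)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}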
Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.929982 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.930002 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.930014 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:37Z","lastTransitionTime":"2026-02-02T21:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:37 crc kubenswrapper[4789]: E0202 21:20:37.945997 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [node status patch identical to the previous attempt, elided] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:37Z is after 2025-08-24T17:21:41Z"
Feb 02 21:20:37 crc kubenswrapper[4789]: E0202 21:20:37.946219 4789 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.948419 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
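The run of identical errors then terminates exactly as kubelet's node-status loop is built to: the same patch is attempted a fixed number of times back to back, and when every attempt fails the single summary error "update node status exceeds retry count" is logged and the loop waits for the next sync period. A schematic of that control flow; nodeStatusUpdateRetry = 5 matches the upstream kubelet constant, and the failing webhook call is simulated:

package main

import (
	"errors"
	"fmt"
)

// Upstream kubelet retries the status patch this many times per sync.
const nodeStatusUpdateRetry = 5

// tryUpdateNodeStatus stands in for the real patch attempt; here it always
// fails the way the log shows.
func tryUpdateNodeStatus() error {
	return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": tls: failed to verify certificate: x509: certificate has expired or is not yet valid`)
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryUpdateNodeStatus(); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Println("Unable to update node status:", err)
	}
}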
Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.948493 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.948519 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.948556 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:37 crc kubenswrapper[4789]: I0202 21:20:37.948613 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:37Z","lastTransitionTime":"2026-02-02T21:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.052110 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.052184 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.052212 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.052248 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.052274 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:38Z","lastTransitionTime":"2026-02-02T21:20:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
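The failing call itself is an ordinary HTTPS POST of an admission review to the webhook endpoint, with the ?timeout=10s query parameter coming from the webhook configuration. A minimal sketch that would surface the same x509 verification failure against an expired serving certificate; the request body is a placeholder, not a real AdmissionReview:

package main

import (
	"fmt"
	"net/http"
	"strings"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	// Endpoint taken from the log; the empty JSON body is a placeholder.
	resp, err := client.Post(
		"https://127.0.0.1:9743/node?timeout=10s",
		"application/json",
		strings.NewReader(`{}`),
	)
	if err != nil {
		// With an expired serving certificate, the TLS handshake fails and err
		// wraps: x509: certificate has expired or is not yet valid.
		fmt.Println("failed to call webhook:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("webhook responded:", resp.Status)
}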
Has your network provider started?"} Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.151352 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w8vkt_2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6/ovnkube-controller/1.log" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.154450 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.154520 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.154544 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.154606 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.154635 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:38Z","lastTransitionTime":"2026-02-02T21:20:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.155216 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" event={"ID":"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6","Type":"ContainerStarted","Data":"8afae9b540e38d08b31eca754dc56d77b5477c42e30bd8aa7ff2200857e9906b"} Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.155886 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.174105 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:38Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.194810 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:38Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.214124 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:38Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.230383 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1716ba11e1b21eb68642e6935312760c389a669a007822290fbe72573bfaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:38Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.252394 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39528981-2c85-43f3-8fa0-bfae5c3334cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ae303f0f4381207f4dd4a443e366d6e3de2014e9bc69aa644e98a76b239868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277fe88585ee146931597a14fe049a3d69197c94e0d84f5dfb334b08cd685723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d49gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:38Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.257163 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.257205 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.257218 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.257235 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.257246 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:38Z","lastTransitionTime":"2026-02-02T21:20:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.270031 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vjbpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dc26662-64d3-47f0-9e0d-d340760ca348\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vjbpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:38Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.280542 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:38Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.291680 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66a4db3799101ccca8a89d6bfd2c9d36940b8710ee3d256e47cd61cfe6ac7c07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:38Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.311460 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff0380f801c6909a82bfc8a765cf7a393d7c2f7cc809611ded737a22f448a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:38Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.331470 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8afae9b540e38d08b31eca754dc56d77b5477c42e30bd8aa7ff2200857e9906b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca031da70bd23ada7cfba11c78c5962555189be4b845b36d92ee2737df4e8bb7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T21:20:18Z\\\",\\\"message\\\":\\\"reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 21:20:18.018667 6276 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 21:20:18.018180 6276 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, 
failed\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:38Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.342124 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlsw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b25d791-42e1-4e08-b7da-41803cc40f4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e2a17d3ade1aa229b2729d66fe953acc746673549e0d929076a3bd18839833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-snjrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:38Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.358378 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fb5d89e-c901-4857-a5e8-b4d8e3701714\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d85613f300f8d008d919038b40efff3fb67e228e25959cbfd2424640c6bc7a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460e055777327e1734238bf51929751678ea5a757b00a344daa31c8c95e4c463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c292754c81dd4929f8e599e8ec352fd5e90431c60bb741c070a10ffa91fad72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3685c506f77d3807711a442d68da94932064737cfdc5cb74557da135906e24f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:38Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.359414 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.359459 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.359472 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.359490 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.359503 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:38Z","lastTransitionTime":"2026-02-02T21:20:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.372159 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:38Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.384107 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:38Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.396413 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:38Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.408936 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:38Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.419084 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:20:38 crc kubenswrapper[4789]: E0202 21:20:38.419250 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.420093 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af0ef1e0-7fcc-4fa6-8463-a0f5a9145c2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3c3d527f77f26e052c4ef9c0577938dc23c802918e742b9fb9020cb6ba705f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eface13a61e8f9d1e8e9512c78a4c70973bfad708c3cdea7f7251f6fa408a59f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed17431bc8880523ea84349c68ea56e389033b550390d88e60373f663d1491f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d78edf2f7e647edfcc306a3feff71c983907077ad05a684af9fa2990deaf3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d78edf2f7e647edfcc306a3feff71c983907077ad05a684af9fa2990deaf3b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:38Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.462848 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.462887 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.462901 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.462919 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.462930 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:38Z","lastTransitionTime":"2026-02-02T21:20:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.565151 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.565203 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.565221 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.565249 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.565266 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:38Z","lastTransitionTime":"2026-02-02T21:20:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.668822 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.668863 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.668872 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.668887 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.668898 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:38Z","lastTransitionTime":"2026-02-02T21:20:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.765637 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 00:32:50.723936878 +0000 UTC Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.772367 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.772417 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.772435 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.772455 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.772471 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:38Z","lastTransitionTime":"2026-02-02T21:20:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.875079 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.875141 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.875161 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.875188 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.875206 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:38Z","lastTransitionTime":"2026-02-02T21:20:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.978886 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.978940 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.978958 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.978982 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:38 crc kubenswrapper[4789]: I0202 21:20:38.979002 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:38Z","lastTransitionTime":"2026-02-02T21:20:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.082734 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.082805 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.082825 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.082854 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.082872 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:39Z","lastTransitionTime":"2026-02-02T21:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.161146 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w8vkt_2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6/ovnkube-controller/2.log" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.162206 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w8vkt_2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6/ovnkube-controller/1.log" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.171875 4789 generic.go:334] "Generic (PLEG): container finished" podID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerID="8afae9b540e38d08b31eca754dc56d77b5477c42e30bd8aa7ff2200857e9906b" exitCode=1 Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.171942 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" event={"ID":"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6","Type":"ContainerDied","Data":"8afae9b540e38d08b31eca754dc56d77b5477c42e30bd8aa7ff2200857e9906b"} Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.171983 4789 scope.go:117] "RemoveContainer" containerID="ca031da70bd23ada7cfba11c78c5962555189be4b845b36d92ee2737df4e8bb7" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.174096 4789 scope.go:117] "RemoveContainer" containerID="8afae9b540e38d08b31eca754dc56d77b5477c42e30bd8aa7ff2200857e9906b" Feb 02 21:20:39 crc kubenswrapper[4789]: E0202 21:20:39.174381 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-w8vkt_openshift-ovn-kubernetes(2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.186244 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.186302 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.186326 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.186357 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.186380 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:39Z","lastTransitionTime":"2026-02-02T21:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.191645 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af0ef1e0-7fcc-4fa6-8463-a0f5a9145c2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3c3d527f77f26e052c4ef9c0577938dc23c802918e742b9fb9020cb6ba705f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eface13a61e8f9d1e8e9512c78a4c70973bfad708c3cdea7f7251f6fa408a59f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed17431bc8880523ea84349c68ea56e389033b550390d88e60373f663d1491f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d78edf2f7e647edfcc306a3feff71c983907077ad05a684af9fa2990deaf3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d78edf2f7e647edfcc306a3feff71c983907077ad05a684af9fa2990deaf3b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:39Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.217724 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:39Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.235786 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:39Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.256809 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:39Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.273914 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vjbpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dc26662-64d3-47f0-9e0d-d340760ca348\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vjbpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:39Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.289565 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.289627 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.289640 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.289657 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.289670 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:39Z","lastTransitionTime":"2026-02-02T21:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.292247 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:39Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.311386 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:39Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.331976 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:39Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.352792 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:39Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.371006 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1716ba11e1b21eb68642e6935312760c389a669a007822290fbe72573bfaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:39Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.388996 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39528981-2c85-43f3-8fa0-bfae5c3334cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ae303f0f4381207f4dd4a443e366d6e3de2014e9bc69aa644e98a76b239868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277fe88585ee146931597a14fe049a3d69197c94e0d84f5dfb334b08cd685723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d49gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:39Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.392791 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.392833 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.392851 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.392874 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.392890 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:39Z","lastTransitionTime":"2026-02-02T21:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.411639 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66a4db3799101ccca8a89d6bfd2c9d36940b8710ee3d256e47cd61cfe6ac7c07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:39Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.421065 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:20:39 crc kubenswrapper[4789]: E0202 21:20:39.421238 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.421516 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:20:39 crc kubenswrapper[4789]: E0202 21:20:39.421648 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.421887 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:20:39 crc kubenswrapper[4789]: E0202 21:20:39.421984 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.436366 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff0380f801c6909a82bfc8a765cf7a393d7c2f7cc809611ded737a22f448a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"
cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4
947d1d71f3bb4068395d76aea346\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:39Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.468542 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8afae9b540e38d08b31eca754dc56d77b5477c42
e30bd8aa7ff2200857e9906b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca031da70bd23ada7cfba11c78c5962555189be4b845b36d92ee2737df4e8bb7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T21:20:18Z\\\",\\\"message\\\":\\\"reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 21:20:18.018667 6276 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 21:20:18.018180 6276 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8afae9b540e38d08b31eca754dc56d77b5477c42e30bd8aa7ff2200857e9906b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T21:20:38Z\\\",\\\"message\\\":\\\"er\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.93\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0202 21:20:38.597725 6510 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm\\\\nI0202 21:20:38.597738 6510 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm in node crc\\\\nF0202 21:20:38.597745 6510 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network 
controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: In\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:39Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.484398 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlsw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b25d791-42e1-4e08-b7da-41803cc40f4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e2a17d3ade1aa229b2729d66fe953acc746673549e0d929076a3bd18839833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-snjrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:39Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.496864 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.496953 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.496978 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.497010 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.497034 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:39Z","lastTransitionTime":"2026-02-02T21:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.502695 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fb5d89e-c901-4857-a5e8-b4d8e3701714\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d85613f300f8d008d919038b40efff3fb67e228e25959cbfd2424640c6bc7a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460e055777327e1734238bf51929751678ea5a757b00a344daa31c8c95e4c463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c292754c81dd4929f8e599e8ec352fd5e90431c60bb741c070a10ffa91fad72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3685c506f77d3807711a442d68da94932064737cfdc5cb74557da135906e24f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:39Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.521669 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:39Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.599932 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.599989 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.600009 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.600035 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.600054 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:39Z","lastTransitionTime":"2026-02-02T21:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.703106 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.703168 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.703185 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.703213 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.703231 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:39Z","lastTransitionTime":"2026-02-02T21:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.766108 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 13:29:03.012325767 +0000 UTC Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.806204 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.806268 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.806285 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.806313 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.806336 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:39Z","lastTransitionTime":"2026-02-02T21:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.909206 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.909282 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.909300 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.909324 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:39 crc kubenswrapper[4789]: I0202 21:20:39.909342 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:39Z","lastTransitionTime":"2026-02-02T21:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.012736 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.012803 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.012824 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.012849 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.012867 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:40Z","lastTransitionTime":"2026-02-02T21:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.115444 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.115502 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.115519 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.115544 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.115563 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:40Z","lastTransitionTime":"2026-02-02T21:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.178330 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w8vkt_2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6/ovnkube-controller/2.log" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.184904 4789 scope.go:117] "RemoveContainer" containerID="8afae9b540e38d08b31eca754dc56d77b5477c42e30bd8aa7ff2200857e9906b" Feb 02 21:20:40 crc kubenswrapper[4789]: E0202 21:20:40.185177 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-w8vkt_openshift-ovn-kubernetes(2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.217819 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.217863 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.217874 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.217892 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.217915 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:40Z","lastTransitionTime":"2026-02-02T21:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.220877 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8afae9b540e38d08b31eca754dc56d77b5477c42e30bd8aa7ff2200857e9906b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8afae9b540e38d08b31eca754dc56d77b5477c42e30bd8aa7ff2200857e9906b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T21:20:38Z\\\",\\\"message\\\":\\\"er\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.93\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0202 21:20:38.597725 6510 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm\\\\nI0202 21:20:38.597738 6510 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm in node crc\\\\nF0202 21:20:38.597745 6510 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: 
In\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-w8vkt_openshift-ovn-kubernetes(2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:40Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.236995 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlsw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b25d791-42e1-4e08-b7da-41803cc40f4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e2a17d3ade1aa229b2729d66fe953acc746673549e0d929076a3bd18839833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-snjrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:40Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.258284 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66a4db3799101ccca8a89d6bfd2c9d36940b8710ee3d256e47cd61cfe6ac7c07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:40Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.280525 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff0380f801c6909a82bfc8a765cf7a393d7c2f7cc809611ded737a22f448a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:40Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.301450 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fb5d89e-c901-4857-a5e8-b4d8e3701714\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d85613f300f8d008d919038b40efff3fb67e228e25959cbfd2424640c6bc7a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460e055777327e1734238bf51929751678ea5a757b00a344daa31c8c95e4c463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c292754c81dd4929f8e599e8ec352fd5e90431c60bb741c070a10ffa91fad72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3685c506f77d3807711a442d68da94932064737cfdc5cb74557da135906e24f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:40Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.322297 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:40Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.322739 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.322772 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.322789 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.322813 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.322832 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:40Z","lastTransitionTime":"2026-02-02T21:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.341744 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:40Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.359526 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af0ef1e0-7fcc-4fa6-8463-a0f5a9145c2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3c3d527f77f26e052c4ef9c0577938dc23c802918e742b9fb9020cb6ba705f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eface13a61e8f9d1e8e9512c78a4c70973bfad708c3cdea7f7251f6fa408a59f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed17431bc8880523ea84349c68ea56e389033b550390d88e60373f663d1491f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d78edf2f7e647edfcc306a3feff71c983907077ad05a684af9fa2990deaf3b3\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d78edf2f7e647edfcc306a3feff71c983907077ad05a684af9fa2990deaf3b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:40Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.381325 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:40Z is after 
2025-08-24T17:21:41Z" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.398260 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:40Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.418551 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:20:40 crc kubenswrapper[4789]: E0202 21:20:40.418796 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.418798 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:40Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.426387 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.426446 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.426464 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.426487 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.426507 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:40Z","lastTransitionTime":"2026-02-02T21:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.438342 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1716ba11e1b21eb68642e6935312760c389a669a007822290fbe72573bfaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea1
77225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:40Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.458331 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39528981-2c85-43f3-8fa0-bfae5c3334cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ae303f0f4381207f4dd4a443e366d6e3de2014e9bc69aa644e98a76b239868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277fe88585ee146931597a14fe049a3d69197c94e0d84f5dfb334b08cd685723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d49gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:40Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.475771 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vjbpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dc26662-64d3-47f0-9e0d-d340760ca348\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vjbpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:40Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.494212 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:40Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.513957 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:40Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.529320 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.529375 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.529398 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.529428 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.529452 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:40Z","lastTransitionTime":"2026-02-02T21:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.535835 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:40Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.557219 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fb5d89e-c901-4857-a5e8-b4d8e3701714\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d85613f300f8d008d919038b40efff3fb67e228e25959cbfd2424640c6bc7a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460e055777327e1734238bf51929751678ea5a757b00a344daa31c8c95e4c463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c292754c81dd4929f8e599e8ec352fd5e90431c60bb741c070a10ffa91fad72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3685c506f77d3807711a442d68da94932064737cfdc5cb74557da135906e24f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:40Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.583926 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:40Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.603793 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af0ef1e0-7fcc-4fa6-8463-a0f5a9145c2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3c3d527f77f26e052c4ef9c0577938dc23c802918e742b9fb9020cb6ba705f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eface13a61e8f9d1e8e9512c78a4c70973bfad708c3cdea7f7251f6fa408a59f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed17431bc8880523ea84349c68ea56e389033b550390d88e60373f663d1491f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d78edf2f7e647edfcc306a3feff71c983907077ad05a684af9fa2990deaf3b3\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d78edf2f7e647edfcc306a3feff71c983907077ad05a684af9fa2990deaf3b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:40Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.623780 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:40Z is after 
2025-08-24T17:21:41Z" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.632072 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.632134 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.632154 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.632188 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.632214 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:40Z","lastTransitionTime":"2026-02-02T21:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.640327 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:40Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.658753 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:40Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.677710 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:40Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.697332 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:40Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.717031 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:40Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.734852 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.734906 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.734927 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.734951 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.734980 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:40Z","lastTransitionTime":"2026-02-02T21:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.737566 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:40Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.754888 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1716ba11e1b21eb68642e6935312760c389a669a007822290fbe72573bfaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:40Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.767754 4789 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 11:04:20.707868779 +0000 UTC Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.771939 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39528981-2c85-43f3-8fa0-bfae5c3334cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ae303f0f4381207f4dd4a443e366d6e3de2014e9bc69aa644e98a76b239868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277fe88585ee146931597a14fe049a3d69197c94e0d84f5dfb334b08cd685723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:15Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d49gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:40Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.788674 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vjbpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dc26662-64d3-47f0-9e0d-d340760ca348\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vjbpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-02T21:20:40Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.811165 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66a4db3799101ccca8a89d6bfd2c9d36940b8710ee3d256e47cd61cfe6ac7c07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:40Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.834689 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff0380f801c6909a82bfc8a765cf7a393d7c2f7cc809611ded737a22f448a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:40Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.838126 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.838191 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:40 crc 
kubenswrapper[4789]: I0202 21:20:40.838208 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.838233 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.838250 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:40Z","lastTransitionTime":"2026-02-02T21:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.867269 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8afae9b540e38d08b31eca754dc56d77b5477c42
e30bd8aa7ff2200857e9906b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8afae9b540e38d08b31eca754dc56d77b5477c42e30bd8aa7ff2200857e9906b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T21:20:38Z\\\",\\\"message\\\":\\\"er\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.93\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0202 21:20:38.597725 6510 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm\\\\nI0202 21:20:38.597738 6510 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm in node crc\\\\nF0202 21:20:38.597745 6510 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: In\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w8vkt_openshift-ovn-kubernetes(2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:40Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.884203 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b25d791-42e1-4e08-b7da-41803cc40f4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e2a17d3ade1aa229b2729d66fe953acc746673549e0d929076a3bd18839833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-snjrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:40Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.941383 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.941446 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.941472 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.941499 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:40 crc kubenswrapper[4789]: I0202 21:20:40.941520 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:40Z","lastTransitionTime":"2026-02-02T21:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.045026 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.045077 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.045094 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.045118 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.045136 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:41Z","lastTransitionTime":"2026-02-02T21:20:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.148472 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.148530 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.148549 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.148575 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.148633 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:41Z","lastTransitionTime":"2026-02-02T21:20:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.252182 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.252294 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.252315 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.252342 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.252361 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:41Z","lastTransitionTime":"2026-02-02T21:20:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.355402 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.355450 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.355467 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.355489 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.355506 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:41Z","lastTransitionTime":"2026-02-02T21:20:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
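
Note: the two status-patch failures above share one root cause. The pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-02-02T21:20:40Z, so every kubelet status patch is rejected during the TLS handshake before it reaches the API server. A minimal Go sketch of the same validity-window check that x509 verification performs here (the PEM file path is illustrative, not taken from this cluster):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	// Illustrative path; substitute the webhook's actual serving cert.
    	pemBytes, err := os.ReadFile("webhook-serving.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	now := time.Now()
    	// Mirrors the failure in the log: current time outside [NotBefore, NotAfter].
    	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
    		fmt.Printf("certificate has expired or is not yet valid: current time %s is after %s\n",
    			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
    		return
    	}
    	fmt.Println("certificate valid until", cert.NotAfter.UTC())
    }
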
Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.252361 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:41Z","lastTransitionTime":"2026-02-02T21:20:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.355402 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.355450 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.355467 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.355489 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.355506 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:41Z","lastTransitionTime":"2026-02-02T21:20:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.419680 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 21:20:41 crc kubenswrapper[4789]: E0202 21:20:41.419865 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.420117 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.420166 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 21:20:41 crc kubenswrapper[4789]: E0202 21:20:41.420255 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 21:20:41 crc kubenswrapper[4789]: E0202 21:20:41.420632 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.458557 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.458658 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.458680 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.458708 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
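
Note: the NotReady condition comes from the kubelet's runtime network check: /etc/kubernetes/cni/net.d/ contains no CNI configuration. The component that would write that config is OVN-Kubernetes, whose ovnkube-controller container is crash-looping above, so the node is stuck until it recovers. A minimal sketch of the directory probe, assuming the conventional CNI config extensions (the exact extension set the kubelet accepts is an assumption here):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	// Directory name comes straight from the log message.
    	const cniConfDir = "/etc/kubernetes/cni/net.d"
    	entries, err := os.ReadDir(cniConfDir)
    	if err != nil {
    		fmt.Println("cannot read CNI conf dir:", err)
    		return
    	}
    	for _, e := range entries {
    		// .conf/.conflist/.json are the usual CNI config suffixes (assumed set).
    		switch filepath.Ext(e.Name()) {
    		case ".conf", ".conflist", ".json":
    			fmt.Println("found CNI config:", e.Name())
    			return
    		}
    	}
    	fmt.Println("no CNI configuration file in", cniConfDir, "- network plugin not ready")
    }
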
Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.458727 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:41Z","lastTransitionTime":"2026-02-02T21:20:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.561762 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.561810 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.561828 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.561851 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.561869 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:41Z","lastTransitionTime":"2026-02-02T21:20:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.665473 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.665551 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.665571 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.665643 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
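
Note: the same five-entry block (four events plus the NotReady condition) repeats roughly every 100 ms for the rest of this excerpt. For triage it is easier to reduce the noise to counts; a short Go filter, assuming a journal excerpt like this one on stdin and matching only the two klog message shapes seen here:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    )

    func main() {
    	// Patterns match the exact quoted messages in this log.
    	eventRe := regexp.MustCompile(`"Recording event message for node".*?event="([A-Za-z]+)"`)
    	notReadyRe := regexp.MustCompile(`"Node became not ready"`)
    	counts := map[string]int{}
    	notReady := 0
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 0, 1024*1024), 16*1024*1024) // status-patch entries are very long
    	for sc.Scan() {
    		line := sc.Text()
    		if m := eventRe.FindStringSubmatch(line); m != nil {
    			counts[m[1]]++
    		}
    		if notReadyRe.MatchString(line) {
    			notReady++
    		}
    	}
    	for ev, n := range counts {
    		fmt.Printf("%-28s %d\n", ev, n)
    	}
    	fmt.Printf("%-28s %d\n", "NodeBecameNotReady", notReady)
    }
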
Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.665702 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:41Z","lastTransitionTime":"2026-02-02T21:20:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.768507 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 09:01:58.336887799 +0000 UTC
Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.769695 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.769892 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.769974 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.770017 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.770095 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:41Z","lastTransitionTime":"2026-02-02T21:20:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.873473 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.873548 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.873567 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.873621 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
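
Note: the certificate_manager lines above and below report the same expiration (2026-02-24 05:53:03 UTC) but a different "rotation deadline" each time (2025-12-07, then 2025-11-20, then 2025-11-10). That is expected: the kubelet's certificate manager re-rolls a jittered deadline within the later part of the certificate's lifetime on each check. A sketch of that behavior, assuming client-go's documented 70-90% jitter window and a one-year certificate issued 2025-02-24 (both assumptions; only the expiry date is from the log):

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    func main() {
    	notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC) // assumed issue time
    	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)  // expiry from the log
    	total := notAfter.Sub(notBefore)
    	for i := 0; i < 3; i++ {
    		// Re-rolled each time, like the three different deadlines in the log.
    		jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
    		fmt.Println("rotation deadline:", notBefore.Add(jittered).UTC())
    	}
    }

Under these assumptions the deadline always lands between roughly 2025-11-05 and 2026-01-17, which brackets all three values logged here.
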
Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.873640 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:41Z","lastTransitionTime":"2026-02-02T21:20:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.976787 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.976851 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.976869 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.976894 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:41 crc kubenswrapper[4789]: I0202 21:20:41.976912 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:41Z","lastTransitionTime":"2026-02-02T21:20:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.080166 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.080242 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.080266 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.080298 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.080320 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:42Z","lastTransitionTime":"2026-02-02T21:20:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.183354 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.183411 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.183428 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.183454 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.183472 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:42Z","lastTransitionTime":"2026-02-02T21:20:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.286419 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.286497 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.286519 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.286546 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.286564 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:42Z","lastTransitionTime":"2026-02-02T21:20:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.390445 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.390510 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.390528 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.390629 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.390661 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:42Z","lastTransitionTime":"2026-02-02T21:20:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.419102 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg"
Feb 02 21:20:42 crc kubenswrapper[4789]: E0202 21:20:42.419312 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348"
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.493783 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.493858 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.493876 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.493902 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.493921 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:42Z","lastTransitionTime":"2026-02-02T21:20:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.596745 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.596810 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.596827 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.596854 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.596872 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:42Z","lastTransitionTime":"2026-02-02T21:20:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.699958 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.700022 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.700035 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.700055 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.700068 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:42Z","lastTransitionTime":"2026-02-02T21:20:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.769271 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 12:18:47.070442152 +0000 UTC
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.802687 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.802740 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.802758 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.802780 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.802806 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:42Z","lastTransitionTime":"2026-02-02T21:20:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.905750 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.905814 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.905832 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.905857 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:42 crc kubenswrapper[4789]: I0202 21:20:42.905876 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:42Z","lastTransitionTime":"2026-02-02T21:20:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.009614 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.009687 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.009712 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.009740 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.009763 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:43Z","lastTransitionTime":"2026-02-02T21:20:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.112935 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.112997 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.113021 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.113049 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.113073 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:43Z","lastTransitionTime":"2026-02-02T21:20:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.216031 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.216085 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.216107 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.216133 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.216155 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:43Z","lastTransitionTime":"2026-02-02T21:20:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.319893 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.319944 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.319962 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.319985 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.320003 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:43Z","lastTransitionTime":"2026-02-02T21:20:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.418640 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.418668 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 21:20:43 crc kubenswrapper[4789]: E0202 21:20:43.418980 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.418724 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 21:20:43 crc kubenswrapper[4789]: E0202 21:20:43.419277 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 21:20:43 crc kubenswrapper[4789]: E0202 21:20:43.419401 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.423131 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.423184 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.423202 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.423230 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.423253 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:43Z","lastTransitionTime":"2026-02-02T21:20:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.526690 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.526771 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.526798 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.526833 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
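
Note: the "No sandbox for pod can be found" / "Error syncing pod, skipping" pairs above affect only pods that need a sandbox on the cluster network (network-check-source, network-check-target, networking-console-plugin, network-metrics-daemon); the ovnkube-node pod itself keeps running because it uses the host network. A toy model of that gate, with illustrative types rather than the kubelet's actual ones:

    package main

    import (
    	"errors"
    	"fmt"
    )

    // pod is an illustrative stand-in for the fields that matter here.
    type pod struct {
    	name        string
    	hostNetwork bool
    }

    // syncPod refuses to create a sandbox for cluster-network pods while the
    // runtime reports NetworkReady=false; host-network pods proceed.
    func syncPod(p pod, networkReady bool) error {
    	if !networkReady && !p.hostNetwork {
    		return errors.New("network is not ready: container runtime network not ready: NetworkReady=false")
    	}
    	// ... create sandbox, pull images, start containers ...
    	return nil
    }

    func main() {
    	pods := []pod{
    		{"openshift-multus/network-metrics-daemon-vjbpg", false},
    		{"openshift-ovn-kubernetes/ovnkube-node-w8vkt", true}, // host-network, so it can run and repair CNI
    	}
    	for _, p := range pods {
    		if err := syncPod(p, false); err != nil {
    			fmt.Printf("skipping %s: %v\n", p.name, err)
    		} else {
    			fmt.Printf("synced %s\n", p.name)
    		}
    	}
    }

This is why the deadlock is self-resolving once ovnkube-controller stops crashing: the host-network pod is exempt from the gate, writes the CNI config, and the gated pods then sync.
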
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.526855 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:43Z","lastTransitionTime":"2026-02-02T21:20:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.631073 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.631198 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.631228 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.631265 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.631291 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:43Z","lastTransitionTime":"2026-02-02T21:20:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.734246 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.734308 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.734327 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.734352 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.734372 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:43Z","lastTransitionTime":"2026-02-02T21:20:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.769413 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 04:08:35.610802824 +0000 UTC
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.841208 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.841414 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.841487 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.841522 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.841553 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:43Z","lastTransitionTime":"2026-02-02T21:20:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.944733 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.944863 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.944878 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.944895 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:43 crc kubenswrapper[4789]: I0202 21:20:43.944909 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:43Z","lastTransitionTime":"2026-02-02T21:20:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.046754 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.046784 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.046792 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.046804 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.046814 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:44Z","lastTransitionTime":"2026-02-02T21:20:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.149094 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.149162 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.149180 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.149205 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.149223 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:44Z","lastTransitionTime":"2026-02-02T21:20:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.252120 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.252170 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.252186 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.252206 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Has your network provider started?"} Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.355058 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.355106 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.355119 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.355137 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.355148 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:44Z","lastTransitionTime":"2026-02-02T21:20:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.418664 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:20:44 crc kubenswrapper[4789]: E0202 21:20:44.418885 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.457433 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.457461 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.457470 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.457482 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.457493 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:44Z","lastTransitionTime":"2026-02-02T21:20:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.559702 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.559743 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.559753 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.559768 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.559779 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:44Z","lastTransitionTime":"2026-02-02T21:20:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.662389 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.662449 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.662462 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.662480 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.662493 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:44Z","lastTransitionTime":"2026-02-02T21:20:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.765783 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.765822 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.765834 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.765849 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.765859 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:44Z","lastTransitionTime":"2026-02-02T21:20:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.769949 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 10:20:36.230856637 +0000 UTC Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.868272 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.868318 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.868330 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.868346 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.868357 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:44Z","lastTransitionTime":"2026-02-02T21:20:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.972089 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.972147 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.972165 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.972196 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:44 crc kubenswrapper[4789]: I0202 21:20:44.972214 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:44Z","lastTransitionTime":"2026-02-02T21:20:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.077250 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.077307 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.077324 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.077347 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.077365 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:45Z","lastTransitionTime":"2026-02-02T21:20:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.180359 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.180415 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.180432 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.180456 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.180474 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:45Z","lastTransitionTime":"2026-02-02T21:20:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.283204 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.283266 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.283283 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.283308 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.283326 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:45Z","lastTransitionTime":"2026-02-02T21:20:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.386275 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.386327 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.386339 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.386360 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.386370 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:45Z","lastTransitionTime":"2026-02-02T21:20:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.419282 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.419328 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.419358 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:20:45 crc kubenswrapper[4789]: E0202 21:20:45.419475 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:20:45 crc kubenswrapper[4789]: E0202 21:20:45.419595 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:20:45 crc kubenswrapper[4789]: E0202 21:20:45.419711 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.488611 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.488677 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.488691 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.488706 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.488718 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:45Z","lastTransitionTime":"2026-02-02T21:20:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.592367 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.592423 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.592435 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.592450 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.592458 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:45Z","lastTransitionTime":"2026-02-02T21:20:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.694947 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.694991 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.695003 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.695019 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.695031 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:45Z","lastTransitionTime":"2026-02-02T21:20:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.770947 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 07:39:35.31611589 +0000 UTC Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.798549 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.798625 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.798644 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.798667 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.798690 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:45Z","lastTransitionTime":"2026-02-02T21:20:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.901270 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.901333 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.901350 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.901378 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:45 crc kubenswrapper[4789]: I0202 21:20:45.901396 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:45Z","lastTransitionTime":"2026-02-02T21:20:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.004663 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.004727 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.004746 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.004771 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.004792 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:46Z","lastTransitionTime":"2026-02-02T21:20:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.107169 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.107229 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.107246 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.107269 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.107289 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:46Z","lastTransitionTime":"2026-02-02T21:20:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.212818 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.212889 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.212913 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.212943 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.212976 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:46Z","lastTransitionTime":"2026-02-02T21:20:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.315238 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.315302 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.315313 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.315329 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.315357 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:46Z","lastTransitionTime":"2026-02-02T21:20:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.418638 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.418671 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.418679 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.418693 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.418703 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:46Z","lastTransitionTime":"2026-02-02T21:20:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.418894 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:20:46 crc kubenswrapper[4789]: E0202 21:20:46.418984 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.521018 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.521062 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.521073 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.521087 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.521096 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:46Z","lastTransitionTime":"2026-02-02T21:20:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.623824 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.623864 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.623873 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.623889 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.623899 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:46Z","lastTransitionTime":"2026-02-02T21:20:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.726353 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.726434 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.726459 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.726488 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.726508 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:46Z","lastTransitionTime":"2026-02-02T21:20:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.771787 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 09:15:49.898868559 +0000 UTC Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.829713 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.829767 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.829777 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.829795 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.829805 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:46Z","lastTransitionTime":"2026-02-02T21:20:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.932023 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.932080 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.932094 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.932112 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:46 crc kubenswrapper[4789]: I0202 21:20:46.932131 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:46Z","lastTransitionTime":"2026-02-02T21:20:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.034806 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.034842 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.034850 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.034865 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.034877 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:47Z","lastTransitionTime":"2026-02-02T21:20:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.137307 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.137360 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.137376 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.137400 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.137417 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:47Z","lastTransitionTime":"2026-02-02T21:20:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.240358 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.240428 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.240456 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.240487 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.240511 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:47Z","lastTransitionTime":"2026-02-02T21:20:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.343001 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.343047 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.343060 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.343078 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.343089 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:47Z","lastTransitionTime":"2026-02-02T21:20:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.418813 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.418870 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.418901 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:20:47 crc kubenswrapper[4789]: E0202 21:20:47.418957 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:20:47 crc kubenswrapper[4789]: E0202 21:20:47.419019 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:20:47 crc kubenswrapper[4789]: E0202 21:20:47.419101 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.428483 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.445753 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.445797 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.445812 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.445829 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.445841 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:47Z","lastTransitionTime":"2026-02-02T21:20:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.548822 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.549050 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.549067 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.549090 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.549124 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:47Z","lastTransitionTime":"2026-02-02T21:20:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.650958 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.650991 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.650999 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.651046 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.651055 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:47Z","lastTransitionTime":"2026-02-02T21:20:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.753514 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.753562 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.753571 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.753606 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.753617 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:47Z","lastTransitionTime":"2026-02-02T21:20:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.772066 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 15:56:39.291185765 +0000 UTC Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.855731 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.855794 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.855818 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.855847 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.855873 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:47Z","lastTransitionTime":"2026-02-02T21:20:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.958072 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.958233 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.958263 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.958288 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:47 crc kubenswrapper[4789]: I0202 21:20:47.958305 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:47Z","lastTransitionTime":"2026-02-02T21:20:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.062080 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.062123 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.062134 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.062151 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.062162 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:48Z","lastTransitionTime":"2026-02-02T21:20:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.154541 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.154596 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.154606 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.154620 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.154630 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:48Z","lastTransitionTime":"2026-02-02T21:20:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:48 crc kubenswrapper[4789]: E0202 21:20:48.165950 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"007d0037-9447-42ea-b3a4-6e1f0d669307\\\",\\\"systemUUID\\\":\\\"53ecbfdd-0b43-4d74-98ca-c7bcbc951d86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:48Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.169695 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.169725 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.169735 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.169750 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.169761 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:48Z","lastTransitionTime":"2026-02-02T21:20:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:48 crc kubenswrapper[4789]: E0202 21:20:48.184617 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"007d0037-9447-42ea-b3a4-6e1f0d669307\\\",\\\"systemUUID\\\":\\\"53ecbfdd-0b43-4d74-98ca-c7bcbc951d86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:48Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.187812 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.187919 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.187940 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.187966 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.188032 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:48Z","lastTransitionTime":"2026-02-02T21:20:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:48 crc kubenswrapper[4789]: E0202 21:20:48.203493 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"007d0037-9447-42ea-b3a4-6e1f0d669307\\\",\\\"systemUUID\\\":\\\"53ecbfdd-0b43-4d74-98ca-c7bcbc951d86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:48Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.226027 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.226109 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.226130 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.226202 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.226222 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:48Z","lastTransitionTime":"2026-02-02T21:20:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:48 crc kubenswrapper[4789]: E0202 21:20:48.252375 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"007d0037-9447-42ea-b3a4-6e1f0d669307\\\",\\\"systemUUID\\\":\\\"53ecbfdd-0b43-4d74-98ca-c7bcbc951d86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:48Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.258077 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.258148 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.258167 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.258192 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.258211 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:48Z","lastTransitionTime":"2026-02-02T21:20:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:48 crc kubenswrapper[4789]: E0202 21:20:48.270110 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"007d0037-9447-42ea-b3a4-6e1f0d669307\\\",\\\"systemUUID\\\":\\\"53ecbfdd-0b43-4d74-98ca-c7bcbc951d86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:48Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:48 crc kubenswrapper[4789]: E0202 21:20:48.270353 4789 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.271780 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.271826 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.271842 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.271862 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.271883 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:48Z","lastTransitionTime":"2026-02-02T21:20:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.374352 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.374411 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.374431 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.374457 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.374476 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:48Z","lastTransitionTime":"2026-02-02T21:20:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.419079 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:20:48 crc kubenswrapper[4789]: E0202 21:20:48.419293 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.477439 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.477480 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.477492 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.477507 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.477520 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:48Z","lastTransitionTime":"2026-02-02T21:20:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.579803 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.579843 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.579854 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.579870 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.579882 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:48Z","lastTransitionTime":"2026-02-02T21:20:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.681952 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.681994 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.682005 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.682020 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.682032 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:48Z","lastTransitionTime":"2026-02-02T21:20:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.772909 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 16:39:18.718619312 +0000 UTC Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.785116 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.785143 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.785152 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.785166 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.785176 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:48Z","lastTransitionTime":"2026-02-02T21:20:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.887778 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.887820 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.887837 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.887858 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.887873 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:48Z","lastTransitionTime":"2026-02-02T21:20:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.937982 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2dc26662-64d3-47f0-9e0d-d340760ca348-metrics-certs\") pod \"network-metrics-daemon-vjbpg\" (UID: \"2dc26662-64d3-47f0-9e0d-d340760ca348\") " pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:20:48 crc kubenswrapper[4789]: E0202 21:20:48.938248 4789 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 21:20:48 crc kubenswrapper[4789]: E0202 21:20:48.938344 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2dc26662-64d3-47f0-9e0d-d340760ca348-metrics-certs podName:2dc26662-64d3-47f0-9e0d-d340760ca348 nodeName:}" failed. No retries permitted until 2026-02-02 21:21:20.938315891 +0000 UTC m=+101.233340960 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2dc26662-64d3-47f0-9e0d-d340760ca348-metrics-certs") pod "network-metrics-daemon-vjbpg" (UID: "2dc26662-64d3-47f0-9e0d-d340760ca348") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.990819 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.990882 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.990901 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.990926 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:48 crc kubenswrapper[4789]: I0202 21:20:48.990944 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:48Z","lastTransitionTime":"2026-02-02T21:20:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.093983 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.094029 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.094043 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.094062 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.094077 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:49Z","lastTransitionTime":"2026-02-02T21:20:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.197326 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.197389 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.197408 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.197433 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.197450 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:49Z","lastTransitionTime":"2026-02-02T21:20:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.300514 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.300549 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.300558 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.300592 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.300605 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:49Z","lastTransitionTime":"2026-02-02T21:20:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.403640 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.403707 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.403729 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.403759 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.403782 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:49Z","lastTransitionTime":"2026-02-02T21:20:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.419536 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.419636 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.419694 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:20:49 crc kubenswrapper[4789]: E0202 21:20:49.419909 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:20:49 crc kubenswrapper[4789]: E0202 21:20:49.420067 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:20:49 crc kubenswrapper[4789]: E0202 21:20:49.420173 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.506301 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.506352 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.506366 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.506388 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.506401 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:49Z","lastTransitionTime":"2026-02-02T21:20:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.609214 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.609269 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.609281 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.609298 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.609313 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:49Z","lastTransitionTime":"2026-02-02T21:20:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.711875 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.711909 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.711922 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.711939 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.711951 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:49Z","lastTransitionTime":"2026-02-02T21:20:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.773809 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 13:09:46.485032112 +0000 UTC Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.814976 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.815018 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.815029 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.815045 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.815055 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:49Z","lastTransitionTime":"2026-02-02T21:20:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.918168 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.918219 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.918234 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.918257 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:49 crc kubenswrapper[4789]: I0202 21:20:49.918270 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:49Z","lastTransitionTime":"2026-02-02T21:20:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.021368 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.021437 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.021455 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.021484 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.021519 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:50Z","lastTransitionTime":"2026-02-02T21:20:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.123850 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.123937 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.123958 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.123987 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.124045 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:50Z","lastTransitionTime":"2026-02-02T21:20:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.226457 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.226499 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.226508 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.226521 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.226530 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:50Z","lastTransitionTime":"2026-02-02T21:20:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.328975 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.329018 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.329030 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.329049 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.329061 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:50Z","lastTransitionTime":"2026-02-02T21:20:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.419197 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:20:50 crc kubenswrapper[4789]: E0202 21:20:50.420267 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.431264 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.431338 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.431359 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.431387 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.431406 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:50Z","lastTransitionTime":"2026-02-02T21:20:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.436165 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1716ba11e1b21eb68642e6935312760c389a669a007822290fbe72573bfaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:50Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.447076 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39528981-2c85-43f3-8fa0-bfae5c3334cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ae303f0f4381207f4dd4a443e366d6e3de2014e9bc69aa644e98a76b239868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277fe88585ee146931597a14fe049a3d69197c94e0d84f5dfb334b08cd685723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:
15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d49gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:50Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.457345 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vjbpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dc26662-64d3-47f0-9e0d-d340760ca348\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vjbpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:50Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.468968 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:50Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.483133 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:50Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.495048 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:50Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.506574 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:50Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.517423 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b25d791-42e1-4e08-b7da-41803cc40f4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e2a17d3ade1aa229b2729d66fe953acc746673549e0d929076a3bd18839833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-snjrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:50Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.530302 4789 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf73e052-94a2-472e-88e9-63ab3a8d428b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c523dec61c09703463bce6b000fb79c832b3c190a960fe0097b654fd672477c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1470060c44a356a82b453ed22ef5c3841993bce37eba8523a13c49331499224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1470060c44a356a82b453ed22ef5c3841993bce37eba8523a13c49331499224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:50Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.533862 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.533902 
4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.533919 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.533939 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.533954 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:50Z","lastTransitionTime":"2026-02-02T21:20:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.548659 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:4
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66a4db3799101ccca8a89d6bfd2c9d36940b8710ee3d256e47cd61cfe6ac7c07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for 
RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:50Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.568727 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff0380f801c6909a82bfc8a765cf7a393d7c2f7cc809611ded737a22f448a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:50Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.595786 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8afae9b540e38d08b31eca754dc56d77b5477c42e30bd8aa7ff2200857e9906b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8afae9b540e38d08b31eca754dc56d77b5477c42e30bd8aa7ff2200857e9906b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T21:20:38Z\\\",\\\"message\\\":\\\"er\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.93\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0202 21:20:38.597725 6510 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm\\\\nI0202 21:20:38.597738 6510 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm in node crc\\\\nF0202 21:20:38.597745 6510 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: In\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w8vkt_openshift-ovn-kubernetes(2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:50Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.613825 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fb5d89e-c901-4857-a5e8-b4d8e3701714\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d85613f300f8d008d919038b40efff3fb67e228e25959cbfd2424640c6bc7a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460e055777327e1734238bf51929751678ea5a757b00a344daa31c8c95e4c463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c292754c81dd4929f8e599e8ec352fd5e90431c60bb741c070a10ffa91fad72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3685c506f77d3807711a442d68da94932064737cfdc5cb74557da135906e24f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:50Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.633253 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:50Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.635919 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.635951 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.635960 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.635974 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.635983 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:50Z","lastTransitionTime":"2026-02-02T21:20:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.645427 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af0ef1e0-7fcc-4fa6-8463-a0f5a9145c2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3c3d527f77f26e052c4ef9c0577938dc23c802918e742b9fb9020cb6ba705f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eface13a61e8f9d1e8e9512c78a4c70973bfad708c3cdea7f7251f6fa408a59f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd7
89a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed17431bc8880523ea84349c68ea56e389033b550390d88e60373f663d1491f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d78edf2f7e647edfcc306a3feff71c983907077ad05a684af9fa2990deaf3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d78edf2f7e647edfcc306a3feff71c983907077ad05a684af9fa2990deaf3b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:50Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.659388 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:50Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.670151 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:50Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.683204 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:50Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.738410 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.738441 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.738452 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.738469 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.738483 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:50Z","lastTransitionTime":"2026-02-02T21:20:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.774962 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 14:02:14.342421364 +0000 UTC Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.841720 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.841776 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.841794 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.841817 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.841832 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:50Z","lastTransitionTime":"2026-02-02T21:20:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.944289 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.944328 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.944339 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.944353 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:50 crc kubenswrapper[4789]: I0202 21:20:50.944363 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:50Z","lastTransitionTime":"2026-02-02T21:20:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.047239 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.047303 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.047322 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.047344 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.047361 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:51Z","lastTransitionTime":"2026-02-02T21:20:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.149536 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.149591 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.149603 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.149619 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.149630 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:51Z","lastTransitionTime":"2026-02-02T21:20:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.251773 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.251830 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.251840 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.251852 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.251861 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:51Z","lastTransitionTime":"2026-02-02T21:20:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.354455 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.354507 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.354516 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.354529 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.354538 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:51Z","lastTransitionTime":"2026-02-02T21:20:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.419419 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.419455 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.419553 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:20:51 crc kubenswrapper[4789]: E0202 21:20:51.419543 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:20:51 crc kubenswrapper[4789]: E0202 21:20:51.419636 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:20:51 crc kubenswrapper[4789]: E0202 21:20:51.419746 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.457146 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.457181 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.457190 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.457203 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.457211 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:51Z","lastTransitionTime":"2026-02-02T21:20:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.559747 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.559794 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.559805 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.559822 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.559833 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:51Z","lastTransitionTime":"2026-02-02T21:20:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.662222 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.662283 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.662300 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.662323 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.662341 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:51Z","lastTransitionTime":"2026-02-02T21:20:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.766229 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.766304 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.766322 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.766349 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.766367 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:51Z","lastTransitionTime":"2026-02-02T21:20:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.775487 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 15:51:58.654411485 +0000 UTC Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.868997 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.869365 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.869376 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.869391 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.869400 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:51Z","lastTransitionTime":"2026-02-02T21:20:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.974604 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.974668 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.974680 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.974698 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:51 crc kubenswrapper[4789]: I0202 21:20:51.974709 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:51Z","lastTransitionTime":"2026-02-02T21:20:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.077456 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.077530 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.077553 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.077615 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.077641 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:52Z","lastTransitionTime":"2026-02-02T21:20:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.180196 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.180239 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.180249 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.180262 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.180271 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:52Z","lastTransitionTime":"2026-02-02T21:20:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.233093 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2x5ws_70a32268-2a2d-47f3-9fc6-4281b8dc6a02/kube-multus/0.log" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.233130 4789 generic.go:334] "Generic (PLEG): container finished" podID="70a32268-2a2d-47f3-9fc6-4281b8dc6a02" containerID="9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949" exitCode=1 Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.233155 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2x5ws" event={"ID":"70a32268-2a2d-47f3-9fc6-4281b8dc6a02","Type":"ContainerDied","Data":"9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949"} Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.233486 4789 scope.go:117] "RemoveContainer" containerID="9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.249672 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af0ef1e0-7fcc-4fa6-8463-a0f5a9145c2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3c3d527f77f26e052c4ef9c0577938dc23c802918e742b9fb9020cb6ba705f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eface13a61e8f9d1e8e9512c78a4c70973bfad708c3cdea7f7251f6fa408a59f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-
pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed17431bc8880523ea84349c68ea56e389033b550390d88e60373f663d1491f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d78edf2f7e647edfcc306a3feff71c983907077ad05a684af9fa2990deaf3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d78edf2f7e647edfcc306a3feff71c983907077ad05a684af9fa2990deaf3b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:52Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.264255 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:52Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.277782 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:52Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.283241 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.283272 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.283306 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.283331 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.283344 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:52Z","lastTransitionTime":"2026-02-02T21:20:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.297432 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:52Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.311125 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vjbpg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dc26662-64d3-47f0-9e0d-d340760ca348\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vjbpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:52Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.328039 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:52Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.347934 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:52Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.368569 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:52Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.385490 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.385544 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.385562 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.385613 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.385633 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:52Z","lastTransitionTime":"2026-02-02T21:20:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.389422 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:52Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.402630 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1716ba11e1b21eb68642e6935312760c389a669a007822290fbe72573bfaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:52Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.419353 4789 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39528981-2c85-43f3-8fa0-bfae5c3334cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ae303f0f4381207f4dd4a443e366d6e3de2014e9bc69aa644e98a76b239868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277fe88585ee146931597a14fe049a3d69197c94e0d84f5dfb334b08cd685723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d49gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:52Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.419471 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:20:52 crc kubenswrapper[4789]: E0202 21:20:52.419719 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.431078 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf73e052-94a2-472e-88e9-63ab3a8d428b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c523dec61c09703463bce6b000fb79c832b3c190a960fe0097b654fd672477c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1470060c44a356a82b453ed22ef5c3841993bce37eba8523a13c49331499224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1470060c44a356a82b453ed22ef5c3841993bce37eba8523a13c49331499224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"
volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:52Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.447297 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube
-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66a4db3799101ccca8a89d6bfd2c9d36940b8710ee3d256e47cd61cfe6ac7c07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:52Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.469826 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff0380f801c6909a82bfc8a765cf7a393d7c2f7cc809611ded737a22f448a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:52Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.488362 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.488393 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:52 crc 
kubenswrapper[4789]: I0202 21:20:52.488402 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.488415 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.488424 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:52Z","lastTransitionTime":"2026-02-02T21:20:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.499686 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8afae9b540e38d08b31eca754dc56d77b5477c42
e30bd8aa7ff2200857e9906b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8afae9b540e38d08b31eca754dc56d77b5477c42e30bd8aa7ff2200857e9906b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T21:20:38Z\\\",\\\"message\\\":\\\"er\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.93\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0202 21:20:38.597725 6510 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm\\\\nI0202 21:20:38.597738 6510 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm in node crc\\\\nF0202 21:20:38.597745 6510 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: In\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w8vkt_openshift-ovn-kubernetes(2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:52Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.513573 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b25d791-42e1-4e08-b7da-41803cc40f4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e2a17d3ade1aa229b2729d66fe953acc746673549e0d929076a3bd18839833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-snjrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:52Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.530433 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fb5d89e-c901-4857-a5e8-b4d8e3701714\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d85613f300f8d008d919038b40efff3fb67e228e25959cbfd2424640c6bc7a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460e055777327e1734238bf51929751678ea5a757b00a344daa31c8c95e4c463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c292754c81dd4929f8e599e8ec352fd5e90431c60bb741c070a10ffa91fad72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-
manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3685c506f77d3807711a442d68da94932064737cfdc5cb74557da135906e24f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:52Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.548824 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T21:20:51Z\\\",\\\"message\\\":\\\"2026-02-02T21:20:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3308657d-555b-440d-96c9-7656e8ca7e3e\\\\n2026-02-02T21:20:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3308657d-555b-440d-96c9-7656e8ca7e3e to /host/opt/cni/bin/\\\\n2026-02-02T21:20:06Z [verbose] multus-daemon started\\\\n2026-02-02T21:20:06Z [verbose] Readiness Indicator file check\\\\n2026-02-02T21:20:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:52Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.593953 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.594005 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.594018 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.594037 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.594057 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:52Z","lastTransitionTime":"2026-02-02T21:20:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.697053 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.697121 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.697139 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.697733 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.697794 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:52Z","lastTransitionTime":"2026-02-02T21:20:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.776600 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 04:05:07.956553829 +0000 UTC Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.800342 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.800406 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.800430 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.800459 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.800480 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:52Z","lastTransitionTime":"2026-02-02T21:20:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.903877 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.903937 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.903955 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.903981 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:52 crc kubenswrapper[4789]: I0202 21:20:52.903999 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:52Z","lastTransitionTime":"2026-02-02T21:20:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.007312 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.007420 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.007440 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.007467 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.007491 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:53Z","lastTransitionTime":"2026-02-02T21:20:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.110241 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.110307 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.110328 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.110359 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.110381 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:53Z","lastTransitionTime":"2026-02-02T21:20:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.213825 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.213864 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.213873 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.213887 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.213897 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:53Z","lastTransitionTime":"2026-02-02T21:20:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.238144 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2x5ws_70a32268-2a2d-47f3-9fc6-4281b8dc6a02/kube-multus/0.log" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.238192 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2x5ws" event={"ID":"70a32268-2a2d-47f3-9fc6-4281b8dc6a02","Type":"ContainerStarted","Data":"75cf318c3d63c5316cbeba8abb93919973f88b415ed7116b55333813b8a889fa"} Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.252230 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf73e052-94a2-472e-88e9-63ab3a8d428b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c523dec61c09703463bce6b000fb79c832b3c190a960fe0097b654fd672477c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1470060c44a356a82b453ed22ef5c3841993bce37eba8523a13c49331499224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1470060c44a356a82b453ed22ef5c3841993bce37eba8523a13c49331499224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:53Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.270810 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7
814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66a4db3799101ccca8a89d6bfd2c9d36940b8710ee3d256e47cd61cfe6ac7c07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:53Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.287184 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff0380f801c6909a82bfc8a765cf7a393d7c2f7cc809611ded737a22f448a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:53Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.315861 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8afae9b540e38d08b31eca754dc56d77b5477c42e30bd8aa7ff2200857e9906b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8afae9b540e38d08b31eca754dc56d77b5477c42e30bd8aa7ff2200857e9906b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T21:20:38Z\\\",\\\"message\\\":\\\"er\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.93\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0202 21:20:38.597725 6510 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm\\\\nI0202 21:20:38.597738 6510 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm in node crc\\\\nF0202 21:20:38.597745 6510 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: In\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w8vkt_openshift-ovn-kubernetes(2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:53Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.316450 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.316487 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.316497 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.316510 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.316521 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:53Z","lastTransitionTime":"2026-02-02T21:20:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.329246 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b25d791-42e1-4e08-b7da-41803cc40f4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e2a17d3ade1aa229b2729d66fe953acc746673549e0d929076a3bd18839833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-snjrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:53Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.346962 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fb5d89e-c901-4857-a5e8-b4d8e3701714\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d85613f300f8d008d919038b40efff3fb67e228e25959cbfd2424640c6bc7a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460e055777327e1734238bf51929751678ea5a757b00a344daa31c8c95e4c463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c292754c81dd4929f8e599e8ec352fd5e90431c60bb741c070a10ffa91fad72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3685c506f77d3807711a442d68da94932064737cfdc5cb74557da135906e24f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:53Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.368923 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75cf318c3d63c5316cbeba8abb93919973f88b415ed7116b55333813b8a889fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T21:20:51Z\\\",\\\"message\\\":\\\"2026-02-02T21:20:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3308657d-555b-440d-96c9-7656e8ca7e3e\\\\n2026-02-02T21:20:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3308657d-555b-440d-96c9-7656e8ca7e3e to /host/opt/cni/bin/\\\\n2026-02-02T21:20:06Z [verbose] multus-daemon started\\\\n2026-02-02T21:20:06Z [verbose] Readiness 
Indicator file check\\\\n2026-02-02T21:20:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:53Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.380954 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af0ef1e0-7fcc-4fa6-8463-a0f5a9145c2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3c3d527f77f26e052c4ef9c0577938dc23c802918e742b9fb9020cb6ba705f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eface13a61e8f9d1e8e9512c78a4c70973bfad708c3cdea7f7251f6fa408a59f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed17431bc8880523ea84349c68ea56e389033b550390d88e60373f663d1491f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d78edf2f7e647edfcc306a3feff71c983907077ad05a684af9fa2990deaf3b3\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d78edf2f7e647edfcc306a3feff71c983907077ad05a684af9fa2990deaf3b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:53Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.394294 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:53Z is after 
2025-08-24T17:21:41Z" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.405362 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:53Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.417035 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:53Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.418156 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.418181 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.418190 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.418202 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.418235 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:53Z","lastTransitionTime":"2026-02-02T21:20:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.418488 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.418519 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.418556 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:20:53 crc kubenswrapper[4789]: E0202 21:20:53.418821 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:20:53 crc kubenswrapper[4789]: E0202 21:20:53.418863 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:20:53 crc kubenswrapper[4789]: E0202 21:20:53.418933 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.419015 4789 scope.go:117] "RemoveContainer" containerID="8afae9b540e38d08b31eca754dc56d77b5477c42e30bd8aa7ff2200857e9906b" Feb 02 21:20:53 crc kubenswrapper[4789]: E0202 21:20:53.419142 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-w8vkt_openshift-ovn-kubernetes(2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.427620 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:53Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.438822 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:53Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.451029 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:53Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.462949 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:53Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.473133 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1716ba11e1b21eb68642e6935312760c389a669a007822290fbe72573bfaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:53Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.484675 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39528981-2c85-43f3-8fa0-bfae5c3334cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ae303f0f4381207f4dd4a443e366d6e3de2014e9bc69aa644e98a76b239868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277fe88585ee146931597a14fe049a3d69197c94e0d84f5dfb334b08cd685723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d49gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:53Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.495774 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vjbpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dc26662-64d3-47f0-9e0d-d340760ca348\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vjbpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:53Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.520360 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.520389 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.520422 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.520443 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.520453 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:53Z","lastTransitionTime":"2026-02-02T21:20:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.623543 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.623604 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.623617 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.623634 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.623646 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:53Z","lastTransitionTime":"2026-02-02T21:20:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.727381 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.727427 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.727437 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.727450 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.727459 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:53Z","lastTransitionTime":"2026-02-02T21:20:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.777434 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 00:30:47.478341739 +0000 UTC Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.829434 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.829465 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.829477 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.829493 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.829505 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:53Z","lastTransitionTime":"2026-02-02T21:20:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.932759 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.932815 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.932831 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.932858 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:53 crc kubenswrapper[4789]: I0202 21:20:53.932875 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:53Z","lastTransitionTime":"2026-02-02T21:20:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.035515 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.035605 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.035623 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.035648 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.035666 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:54Z","lastTransitionTime":"2026-02-02T21:20:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.137809 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.137848 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.137862 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.137877 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.137889 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:54Z","lastTransitionTime":"2026-02-02T21:20:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.245433 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.245489 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.245508 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.245544 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.245560 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:54Z","lastTransitionTime":"2026-02-02T21:20:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.348420 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.348493 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.348514 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.348540 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.348561 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:54Z","lastTransitionTime":"2026-02-02T21:20:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.419626 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:20:54 crc kubenswrapper[4789]: E0202 21:20:54.419892 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.451867 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.451931 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.451951 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.451977 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.452001 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:54Z","lastTransitionTime":"2026-02-02T21:20:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.554621 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.554687 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.554706 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.554731 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.554749 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:54Z","lastTransitionTime":"2026-02-02T21:20:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.657158 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.657214 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.657233 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.657258 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.657277 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:54Z","lastTransitionTime":"2026-02-02T21:20:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.760637 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.760706 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.760721 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.760740 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.760777 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:54Z","lastTransitionTime":"2026-02-02T21:20:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.778619 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 12:56:26.949578727 +0000 UTC Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.863602 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.863656 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.863673 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.863698 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.863735 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:54Z","lastTransitionTime":"2026-02-02T21:20:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.966094 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.966138 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.966147 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.966183 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:54 crc kubenswrapper[4789]: I0202 21:20:54.966195 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:54Z","lastTransitionTime":"2026-02-02T21:20:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.068610 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.068669 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.068684 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.068702 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.068713 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:55Z","lastTransitionTime":"2026-02-02T21:20:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.171641 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.171679 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.171690 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.171706 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.171716 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:55Z","lastTransitionTime":"2026-02-02T21:20:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.273916 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.273958 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.273989 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.274004 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.274014 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:55Z","lastTransitionTime":"2026-02-02T21:20:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.376478 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.376520 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.376531 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.376548 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.376559 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:55Z","lastTransitionTime":"2026-02-02T21:20:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.419320 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 21:20:55 crc kubenswrapper[4789]: E0202 21:20:55.419473 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.419617 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.419628 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 21:20:55 crc kubenswrapper[4789]: E0202 21:20:55.419825 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 21:20:55 crc kubenswrapper[4789]: E0202 21:20:55.419946 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
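The three pods failing above (network-check-target, network-check-source, networking-console-plugin) are all blocked on the same condition: the kubelet will not create a pod sandbox while the runtime reports NetworkReady=false, and the runtime keeps reporting that until a CNI config appears under /etc/kubernetes/cni/net.d/. The Ready condition being written by setters.go:603 can be read back with client-go; a minimal sketch, assuming a placeholder kubeconfig path and the node name "crc" used throughout this log:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Placeholder kubeconfig path; on a CRC node this would be the admin kubeconfig.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// "crc" is the node name recorded in every entry of this log.
    	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "crc", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range node.Status.Conditions {
    		// Ready stays False with reason KubeletNotReady until a CNI config exists.
    		fmt.Printf("%-16s %-6s %s: %s\n", c.Type, c.Status, c.Reason, c.Message)
    	}
    }

The same loop also prints MemoryPressure, DiskPressure and PIDPressure, the other three conditions whose events kubelet_node_status.go:724 keeps recording above.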
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.479386 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.479438 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.479450 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.479469 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.479482 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:55Z","lastTransitionTime":"2026-02-02T21:20:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.582137 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.582233 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.582253 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.582276 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.582293 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:55Z","lastTransitionTime":"2026-02-02T21:20:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.685264 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.685316 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.685336 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.685357 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.685373 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:55Z","lastTransitionTime":"2026-02-02T21:20:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.779573 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 11:14:40.287843357 +0000 UTC Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.787968 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.788018 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.788027 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.788043 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.788057 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:55Z","lastTransitionTime":"2026-02-02T21:20:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.891123 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.891179 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.891243 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.891265 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:55 crc kubenswrapper[4789]: I0202 21:20:55.891279 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:55Z","lastTransitionTime":"2026-02-02T21:20:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.003009 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.003077 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.003100 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.003131 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.003151 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:56Z","lastTransitionTime":"2026-02-02T21:20:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.106901 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.106969 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.106986 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.107010 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.107027 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:56Z","lastTransitionTime":"2026-02-02T21:20:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.209744 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.209815 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.209832 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.209856 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.209874 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:56Z","lastTransitionTime":"2026-02-02T21:20:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.312983 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.313084 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.313111 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.313142 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.313166 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:56Z","lastTransitionTime":"2026-02-02T21:20:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.416337 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.416404 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.416422 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.416444 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.416462 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:56Z","lastTransitionTime":"2026-02-02T21:20:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.418869 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg"
Feb 02 21:20:56 crc kubenswrapper[4789]: E0202 21:20:56.419028 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348"
Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.519636 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.519703 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.519727 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.519758 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.519782 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:56Z","lastTransitionTime":"2026-02-02T21:20:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.622711 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.622781 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.622804 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.622834 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.622890 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:56Z","lastTransitionTime":"2026-02-02T21:20:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
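An aside on reading these lines: after the journald prefix (timestamp, host, unit[pid]) each record carries a klog header, namely a severity letter (I info, E error), the date as MMDD, a wall-clock time with microseconds, the logging process id (4789, matching kubenswrapper's pid), and the source file:line that emitted it. A small parser, sketched with a regexp that approximates rather than fully implements the klog grammar:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // Approximate klog header: severity, MMDD, time, pid, file:line, message.
    var klogRe = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w./]+:\d+)\] (.*)$`)

    func main() {
    	line := `E0202 21:20:56.419028 4789 pod_workers.go:1301] "Error syncing pod, skipping"`
    	m := klogRe.FindStringSubmatch(line)
    	if m == nil {
    		fmt.Println("no match")
    		return
    	}
    	fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%s\n",
    		m[1], m[2], m[3], m[4], m[5], m[6])
    }

Grouping on the file:line field (setters.go:603, pod_workers.go:1301, certificate_manager.go:356) is often the quickest way to separate the distinct loops interleaved through this section.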
Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.725542 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.725625 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.725646 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.725671 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.725686 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:56Z","lastTransitionTime":"2026-02-02T21:20:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.790875 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 08:42:35.016795045 +0000 UTC
Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.829795 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.829864 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.829885 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.829907 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.829927 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:56Z","lastTransitionTime":"2026-02-02T21:20:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
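Note the certificate_manager.go:356 entry above: the kubelet-serving certificate is valid until 2026-02-24, yet each pass logs a different rotation deadline (2026-01-08, then 2025-12-02, now 2025-12-04). The deadline is re-drawn at random on every evaluation so that a fleet of kubelets does not renew in lockstep. A rough sketch of that behaviour, assuming a uniform draw over the tail of the validity window (the exact fraction the kubelet uses is not visible in this log, and the issue date is assumed):

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // rotationDeadline picks a random point in roughly the 70-90% span of the
    // certificate lifetime; the window bounds here are an illustrative assumption.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
    	lifetime := notAfter.Sub(notBefore)
    	frac := 0.7 + 0.2*rand.Float64()
    	return notBefore.Add(time.Duration(float64(lifetime) * frac))
    }

    func main() {
    	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z") // expiry from the log
    	notBefore := notAfter.AddDate(0, 0, -365)                       // assumed issue date
    	for i := 0; i < 3; i++ {
    		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
    	}
    }

Because the draw repeats on each cycle, the printed deadline moving around between attempts is expected and is not itself an error.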
Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.933064 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.933134 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.933157 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.933188 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:56 crc kubenswrapper[4789]: I0202 21:20:56.933213 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:56Z","lastTransitionTime":"2026-02-02T21:20:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.035957 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.035993 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.036003 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.036014 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.036023 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:57Z","lastTransitionTime":"2026-02-02T21:20:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.139251 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.139286 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.139296 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.139313 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.139324 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:57Z","lastTransitionTime":"2026-02-02T21:20:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.242820 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.242900 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.242923 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.242948 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.242965 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:57Z","lastTransitionTime":"2026-02-02T21:20:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.346166 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.346214 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.346228 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.346246 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.346260 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:57Z","lastTransitionTime":"2026-02-02T21:20:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.419013 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.419095 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:20:57 crc kubenswrapper[4789]: E0202 21:20:57.419162 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.419371 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:20:57 crc kubenswrapper[4789]: E0202 21:20:57.419439 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:20:57 crc kubenswrapper[4789]: E0202 21:20:57.419302 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.449545 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.449618 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.449633 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.449652 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.449664 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:57Z","lastTransitionTime":"2026-02-02T21:20:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.552870 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.552923 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.552940 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.552962 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.552981 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:57Z","lastTransitionTime":"2026-02-02T21:20:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.655774 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.655830 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.655846 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.655868 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.655889 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:57Z","lastTransitionTime":"2026-02-02T21:20:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.758434 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.758474 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.758484 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.758499 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.758510 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:57Z","lastTransitionTime":"2026-02-02T21:20:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.791938 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 09:04:08.144076916 +0000 UTC Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.861291 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.861339 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.861355 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.861375 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.861389 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:57Z","lastTransitionTime":"2026-02-02T21:20:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.965043 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.965101 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.965112 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.965135 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:57 crc kubenswrapper[4789]: I0202 21:20:57.965148 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:57Z","lastTransitionTime":"2026-02-02T21:20:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.068571 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.068674 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.068692 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.068716 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.068735 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:58Z","lastTransitionTime":"2026-02-02T21:20:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.171918 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.171987 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.172004 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.172028 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.172050 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:58Z","lastTransitionTime":"2026-02-02T21:20:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.274607 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.274684 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.274698 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.274720 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.274733 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:58Z","lastTransitionTime":"2026-02-02T21:20:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.378155 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.378230 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.378252 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.378279 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.378298 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:58Z","lastTransitionTime":"2026-02-02T21:20:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.419477 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:20:58 crc kubenswrapper[4789]: E0202 21:20:58.419745 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.481245 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.481314 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.481333 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.481360 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.481380 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:58Z","lastTransitionTime":"2026-02-02T21:20:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.587096 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.587174 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.587194 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.587221 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.587241 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:58Z","lastTransitionTime":"2026-02-02T21:20:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.610257 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.610324 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.610336 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.610359 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.610373 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:58Z","lastTransitionTime":"2026-02-02T21:20:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
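The failure that follows shows the exact payload behind all of these status updates: a strategic merge patch against Node.status, in which the $setElementOrder/conditions directive pins the ordering of the conditions list while individual entries merge on their "type" key, and allocatable/capacity/images ride along in the same patch. A sketch of producing such a patch with apimachinery, using stand-in node objects since only the shape of the output matters here:

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/util/strategicpatch"
    )

    func main() {
    	oldNode := corev1.Node{}
    	newNode := oldNode
    	newNode.Status.Conditions = []corev1.NodeCondition{
    		{Type: corev1.NodeReady, Status: corev1.ConditionFalse, Reason: "KubeletNotReady"},
    	}
    	o, _ := json.Marshal(oldNode)
    	n, _ := json.Marshal(newNode)
    	// Conditions carry a patch merge key of "type", so the generated patch
    	// typically contains a "$setElementOrder/conditions" directive like the
    	// one visible in the error below.
    	patch, err := strategicpatch.CreateTwoWayMergePatch(o, n, corev1.Node{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(string(patch))
    }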
Feb 02 21:20:58 crc kubenswrapper[4789]: E0202 21:20:58.629898 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"007d0037-9447-42ea-b3a4-6e1f0d669307\\\",\\\"systemUUID\\\":\\\"53ecbfdd-0b43-4d74-98ca-c7bcbc951d86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:58Z is after 2025-08-24T17:21:41Z"
Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.635320 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
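The tail of that patch error is the real finding in this section: the PATCH is rejected not by the API server itself but by the node.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743, whose serving certificate expired on 2025-08-24T17:21:41Z, long before the node's current clock of 2026-02-02. Every retry will fail identically until that certificate is renewed. A quick way to confirm the presented certificate's validity window, sketched with the Go standard library (InsecureSkipVerify is deliberate: the point is to inspect the certificate, not to trust it):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"time"
    )

    func main() {
    	// 127.0.0.1:9743 is the webhook endpoint named in the error above.
    	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()
    	cert := conn.ConnectionState().PeerCertificates[0]
    	fmt.Printf("subject=%s\nnotBefore=%s\nnotAfter=%s\nexpired=%v\n",
    		cert.Subject, cert.NotBefore, cert.NotAfter, time.Now().After(cert.NotAfter))
    }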
event="NodeHasNoDiskPressure" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.635387 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.635408 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.635423 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:58Z","lastTransitionTime":"2026-02-02T21:20:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:58 crc kubenswrapper[4789]: E0202 21:20:58.654572 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"007d0037-9447-42ea-b3a4-6e1f0d669307\\\",\\\"systemUUID\\\":\\\"53ecbfdd-0b43-4d74-98ca-c7bcbc951d86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:58Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.659367 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.659427 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.659445 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.659474 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.659493 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:58Z","lastTransitionTime":"2026-02-02T21:20:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:58 crc kubenswrapper[4789]: E0202 21:20:58.678959 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"007d0037-9447-42ea-b3a4-6e1f0d669307\\\",\\\"systemUUID\\\":\\\"53ecbfdd-0b43-4d74-98ca-c7bcbc951d86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:58Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.683472 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.683512 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.683532 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.683554 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.683571 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:58Z","lastTransitionTime":"2026-02-02T21:20:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:58 crc kubenswrapper[4789]: E0202 21:20:58.697112 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"007d0037-9447-42ea-b3a4-6e1f0d669307\\\",\\\"systemUUID\\\":\\\"53ecbfdd-0b43-4d74-98ca-c7bcbc951d86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:58Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.701848 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.701893 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.701911 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.701931 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.701948 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:58Z","lastTransitionTime":"2026-02-02T21:20:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:58 crc kubenswrapper[4789]: E0202 21:20:58.722908 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:20:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"007d0037-9447-42ea-b3a4-6e1f0d669307\\\",\\\"systemUUID\\\":\\\"53ecbfdd-0b43-4d74-98ca-c7bcbc951d86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:20:58Z is after 2025-08-24T17:21:41Z" Feb 02 21:20:58 crc kubenswrapper[4789]: E0202 21:20:58.723129 4789 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.725793 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
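All five retries above fail identically: the kubelet's node-status PATCH is rejected because the API server cannot call the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743, whose serving certificate expired at 2025-08-24T17:21:41Z. The following Go sketch is not part of the log; it is a minimal diagnostic, assuming the webhook port is reachable from the host, that reads the serving certificate's validity window, i.e. the same NotBefore/NotAfter comparison that produces the x509 error above (the file name certcheck.go is illustrative):

// certcheck.go: inspect the webhook serving certificate's validity window.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Endpoint taken from the failing webhook call in the log above.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		// Skip chain verification so the handshake succeeds even with an
		// expired certificate; we only want to read its dates.
		InsecureSkipVerify: true,
	})
	if err != nil {
		log.Fatalf("dial webhook endpoint: %v", err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	now := time.Now().UTC()
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.UTC().Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
	// The x509 verifier rejects a peer exactly when now falls outside
	// [NotBefore, NotAfter] -- the "certificate has expired or is not
	// yet valid" message seen in every failed patch above.
	if now.After(cert.NotAfter) || now.Before(cert.NotBefore) {
		fmt.Println("certificate is outside its validity window")
	}
}

Once the expiry is confirmed, rotating the node-identity serving certificate is what unblocks the status patches; nothing in the payload itself is malformed.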
event="NodeHasSufficientMemory" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.725849 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.725868 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.725893 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.725913 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:58Z","lastTransitionTime":"2026-02-02T21:20:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.792524 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 13:18:15.149909058 +0000 UTC Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.830807 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.830909 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.830979 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.831010 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.831038 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:58Z","lastTransitionTime":"2026-02-02T21:20:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.934732 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.934840 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.934863 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.934891 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:58 crc kubenswrapper[4789]: I0202 21:20:58.934915 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:58Z","lastTransitionTime":"2026-02-02T21:20:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.038623 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.038675 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.038728 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.038758 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.038781 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:59Z","lastTransitionTime":"2026-02-02T21:20:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.141839 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.141904 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.141922 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.141947 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.141965 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:59Z","lastTransitionTime":"2026-02-02T21:20:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.245015 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.245087 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.245110 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.245137 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.245155 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:59Z","lastTransitionTime":"2026-02-02T21:20:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.348150 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.348205 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.348222 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.348247 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.348265 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:59Z","lastTransitionTime":"2026-02-02T21:20:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.418953 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:20:59 crc kubenswrapper[4789]: E0202 21:20:59.419132 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.419170 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.419224 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:20:59 crc kubenswrapper[4789]: E0202 21:20:59.419344 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:20:59 crc kubenswrapper[4789]: E0202 21:20:59.419467 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.451203 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.451272 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.451307 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.451336 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.451357 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:59Z","lastTransitionTime":"2026-02-02T21:20:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.553787 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.553846 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.553874 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.553897 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.553915 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:59Z","lastTransitionTime":"2026-02-02T21:20:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.656854 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.656908 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.656925 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.656951 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.656969 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:59Z","lastTransitionTime":"2026-02-02T21:20:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.760602 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.760654 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.760667 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.760684 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.760697 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:59Z","lastTransitionTime":"2026-02-02T21:20:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.792885 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 02:08:38.98732313 +0000 UTC Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.865849 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.866064 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.866094 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.866124 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.866147 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:59Z","lastTransitionTime":"2026-02-02T21:20:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.969521 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.969611 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.969637 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.969664 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:20:59 crc kubenswrapper[4789]: I0202 21:20:59.969681 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:20:59Z","lastTransitionTime":"2026-02-02T21:20:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.073039 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.073113 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.073132 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.073156 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.073178 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:00Z","lastTransitionTime":"2026-02-02T21:21:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.176477 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.176529 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.176546 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.176569 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.176621 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:00Z","lastTransitionTime":"2026-02-02T21:21:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.279285 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.279339 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.279356 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.279378 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.279395 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:00Z","lastTransitionTime":"2026-02-02T21:21:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.382139 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.382240 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.382269 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.382302 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.382325 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:00Z","lastTransitionTime":"2026-02-02T21:21:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.419236 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:21:00 crc kubenswrapper[4789]: E0202 21:21:00.419408 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.436650 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1716ba11e1b21eb68642e6935312760c389a669a007822290fbe72573bfaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:00Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.451052 4789 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39528981-2c85-43f3-8fa0-bfae5c3334cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ae303f0f4381207f4dd4a443e366d6e3de2014e9bc69aa644e98a76b239868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277fe88585ee146931597a14fe049a3d69197c94e0d84f5dfb334b08cd685723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d49gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:00Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.461626 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vjbpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dc26662-64d3-47f0-9e0d-d340760ca348\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vjbpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:00Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.480138 4789 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:00Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.484844 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.484890 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.484902 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.484918 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.484928 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:00Z","lastTransitionTime":"2026-02-02T21:21:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.498729 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:00Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.518310 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:00Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.539547 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:00Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.555621 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b25d791-42e1-4e08-b7da-41803cc40f4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e2a17d3ade1aa229b2729d66fe953acc746673549e0d929076a3bd18839833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-snjrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:05Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-wlsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:00Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.570147 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf73e052-94a2-472e-88e9-63ab3a8d428b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c523dec61c09703463bce6b000fb79c832b3c190a960fe0097b654fd672477c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1470060c44a356a82b453ed22ef5c3841993bce37eba8523a13c49331499224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1470060c44a356a82b453ed22ef5c3841993bce37eba8523a13c49331499224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:00Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.587514 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.587569 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.587599 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.587617 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.587629 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:00Z","lastTransitionTime":"2026-02-02T21:21:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.592512 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66a4db3799101ccca8a89d6bfd2c9d36940b8710ee3d256e47cd61cfe6ac7c07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" 
(2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:00Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:00 crc 
kubenswrapper[4789]: I0202 21:21:00.614636 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff0380f801c6909a82bfc8a765cf7a393d7c2f7cc809611ded737a22f448a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"
cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:00Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.644214 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8afae9b540e38d08b31eca754dc56d77b5477c42e30bd8aa7ff2200857e9906b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8afae9b540e38d08b31eca754dc56d77b5477c42e30bd8aa7ff2200857e9906b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T21:20:38Z\\\",\\\"message\\\":\\\"er\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.93\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0202 21:20:38.597725 6510 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm\\\\nI0202 21:20:38.597738 6510 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm in node crc\\\\nF0202 21:20:38.597745 6510 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: In\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w8vkt_openshift-ovn-kubernetes(2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:00Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.664893 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fb5d89e-c901-4857-a5e8-b4d8e3701714\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d85613f300f8d008d919038b40efff3fb67e228e25959cbfd2424640c6bc7a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460e055777327e1734238bf51929751678ea5a757b00a344daa31c8c95e4c463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c292754c81dd4929f8e599e8ec352fd5e90431c60bb741c070a10ffa91fad72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3685c506f77d3807711a442d68da94932064737cfdc5cb74557da135906e24f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:00Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.688453 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75cf318c3d63c5316cbeba8abb93919973f88b415ed7116b55333813b8a889fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T21:20:51Z\\\",\\\"message\\\":\\\"2026-02-02T21:20:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3308657d-555b-440d-96c9-7656e8ca7e3e\\\\n2026-02-02T21:20:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3308657d-555b-440d-96c9-7656e8ca7e3e to /host/opt/cni/bin/\\\\n2026-02-02T21:20:06Z [verbose] multus-daemon started\\\\n2026-02-02T21:20:06Z [verbose] Readiness Indicator file check\\\\n2026-02-02T21:20:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:00Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.691552 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.691634 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.691654 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.691679 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.691697 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:00Z","lastTransitionTime":"2026-02-02T21:21:00Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.707699 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af0ef1e0-7fcc-4fa6-8463-a0f5a9145c2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3c3d527f77f26e052c4ef9c0577938dc23c802918e742b9fb9020cb6ba705f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eface13a61e8f9d1e8e9512c78a4c70973bfad708c3cdea7f7251f6fa408a59f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed17431bc8880523ea84349c68ea56e389033b550390d88e60373f663d1491f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d78edf2f7e647edfcc306a3feff71c983907077ad05a684af9fa2990deaf3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d78edf2f7e647edfcc306a3feff71c983907077ad05a684af9fa2990deaf3b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:00Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.728199 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:00Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.743248 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:00Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.763511 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:00Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.793033 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 09:32:21.767241978 +0000 UTC Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.794183 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.794238 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.794255 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.794280 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.794298 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:00Z","lastTransitionTime":"2026-02-02T21:21:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.899827 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.899874 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.899887 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.899914 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:00 crc kubenswrapper[4789]: I0202 21:21:00.899928 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:00Z","lastTransitionTime":"2026-02-02T21:21:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.003572 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.003661 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.003679 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.003703 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.003721 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:01Z","lastTransitionTime":"2026-02-02T21:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.107041 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.107126 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.107146 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.107173 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.107192 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:01Z","lastTransitionTime":"2026-02-02T21:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.209680 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.209769 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.209792 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.209823 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.209845 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:01Z","lastTransitionTime":"2026-02-02T21:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.313206 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.313259 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.313275 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.313298 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.313312 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:01Z","lastTransitionTime":"2026-02-02T21:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.416856 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.416916 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.416937 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.416963 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.416982 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:01Z","lastTransitionTime":"2026-02-02T21:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.419146 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.419197 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:21:01 crc kubenswrapper[4789]: E0202 21:21:01.419365 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.419442 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:21:01 crc kubenswrapper[4789]: E0202 21:21:01.419625 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:21:01 crc kubenswrapper[4789]: E0202 21:21:01.419715 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.520121 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.520189 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.520208 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.520238 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.520255 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:01Z","lastTransitionTime":"2026-02-02T21:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.622987 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.623055 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.623080 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.623110 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.623131 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:01Z","lastTransitionTime":"2026-02-02T21:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.726305 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.726368 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.726387 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.726412 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.726430 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:01Z","lastTransitionTime":"2026-02-02T21:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.793894 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 21:03:52.75537905 +0000 UTC
Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.829607 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.829650 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.829660 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.829675 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.829685 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:01Z","lastTransitionTime":"2026-02-02T21:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.933141 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.933215 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.933235 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.933259 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:21:01 crc kubenswrapper[4789]: I0202 21:21:01.933280 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:01Z","lastTransitionTime":"2026-02-02T21:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.035817 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.035876 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.035895 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.035917 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.035935 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:02Z","lastTransitionTime":"2026-02-02T21:21:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.138760 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.138834 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.138856 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.138887 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.138910 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:02Z","lastTransitionTime":"2026-02-02T21:21:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.241907 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.241971 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.241989 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.242013 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.242031 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:02Z","lastTransitionTime":"2026-02-02T21:21:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.345097 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.345177 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.345190 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.345205 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.345217 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:02Z","lastTransitionTime":"2026-02-02T21:21:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.419115 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg"
Feb 02 21:21:02 crc kubenswrapper[4789]: E0202 21:21:02.419302 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348"
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.447464 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.447509 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.447519 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.447535 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.447547 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:02Z","lastTransitionTime":"2026-02-02T21:21:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.549328 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.549369 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.549377 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.549390 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.549400 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:02Z","lastTransitionTime":"2026-02-02T21:21:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.652523 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.652613 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.652639 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.652669 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.652691 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:02Z","lastTransitionTime":"2026-02-02T21:21:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.756202 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.756270 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.756294 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.756324 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.756346 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:02Z","lastTransitionTime":"2026-02-02T21:21:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.795035 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 15:44:29.727380348 +0000 UTC
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.859183 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.859226 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.859238 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.859253 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.859268 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:02Z","lastTransitionTime":"2026-02-02T21:21:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.962159 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.962221 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.962240 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.962265 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:21:02 crc kubenswrapper[4789]: I0202 21:21:02.962284 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:02Z","lastTransitionTime":"2026-02-02T21:21:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.065839 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.065888 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.065902 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.065919 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.065932 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:03Z","lastTransitionTime":"2026-02-02T21:21:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.169486 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.169575 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.169644 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.169667 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.169688 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:03Z","lastTransitionTime":"2026-02-02T21:21:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.282666 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.282714 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.282724 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.282742 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.282756 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:03Z","lastTransitionTime":"2026-02-02T21:21:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.386087 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.386167 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.386185 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.386306 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.386343 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:03Z","lastTransitionTime":"2026-02-02T21:21:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.418890 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.418934 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.419045 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 21:21:03 crc kubenswrapper[4789]: E0202 21:21:03.419053 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 21:21:03 crc kubenswrapper[4789]: E0202 21:21:03.419176 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 21:21:03 crc kubenswrapper[4789]: E0202 21:21:03.419277 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.489754 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.489795 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.489807 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.489823 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.489834 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:03Z","lastTransitionTime":"2026-02-02T21:21:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.592324 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.592371 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.592382 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.592395 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.592404 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:03Z","lastTransitionTime":"2026-02-02T21:21:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.695198 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.695248 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.695262 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.695281 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.695293 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:03Z","lastTransitionTime":"2026-02-02T21:21:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.796038 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 21:42:23.48288443 +0000 UTC
Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.798550 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.798655 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.798674 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.798697 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.798716 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:03Z","lastTransitionTime":"2026-02-02T21:21:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.901335 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.901382 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.901392 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.901413 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:21:03 crc kubenswrapper[4789]: I0202 21:21:03.901424 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:03Z","lastTransitionTime":"2026-02-02T21:21:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.005204 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.005265 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.005279 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.005305 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.005323 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:04Z","lastTransitionTime":"2026-02-02T21:21:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.108101 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.108210 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.108222 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.108237 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.108247 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:04Z","lastTransitionTime":"2026-02-02T21:21:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.210796 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.210887 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.210910 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.210935 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.210955 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:04Z","lastTransitionTime":"2026-02-02T21:21:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.314454 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.314508 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.314831 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.314858 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.314918 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:04Z","lastTransitionTime":"2026-02-02T21:21:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.418442 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.418488 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.418505 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.418547 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg"
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.418554 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:21:04 crc kubenswrapper[4789]: E0202 21:21:04.418696 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348"
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.418760 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:04Z","lastTransitionTime":"2026-02-02T21:21:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.521638 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.521689 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.521702 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.521722 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.521734 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:04Z","lastTransitionTime":"2026-02-02T21:21:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.624699 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.624758 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.624776 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.624800 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.624818 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:04Z","lastTransitionTime":"2026-02-02T21:21:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.727911 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.727969 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.727981 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.728000 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.728012 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:04Z","lastTransitionTime":"2026-02-02T21:21:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.796337 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 03:34:26.97978454 +0000 UTC
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.830816 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.830888 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.830909 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.830930 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.830948 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:04Z","lastTransitionTime":"2026-02-02T21:21:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.933168 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.933239 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.933259 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.933287 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:21:04 crc kubenswrapper[4789]: I0202 21:21:04.933304 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:04Z","lastTransitionTime":"2026-02-02T21:21:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.036375 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.036460 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.036479 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.036504 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.036522 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:05Z","lastTransitionTime":"2026-02-02T21:21:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.139567 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.139621 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.139632 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.139650 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.139664 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:05Z","lastTransitionTime":"2026-02-02T21:21:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.224173 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 21:21:05 crc kubenswrapper[4789]: E0202 21:21:05.224327 4789 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 02 21:21:05 crc kubenswrapper[4789]: E0202 21:21:05.224428 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 21:22:09.224403048 +0000 UTC m=+149.519428127 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.242886 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.242994 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.243007 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.243024 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.243036 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:05Z","lastTransitionTime":"2026-02-02T21:21:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.324809 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.325024 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 21:21:05 crc kubenswrapper[4789]: E0202 21:21:05.325048 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:09.325009765 +0000 UTC m=+149.620034844 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:21:05 crc kubenswrapper[4789]: E0202 21:21:05.325183 4789 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.325212 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 21:21:05 crc kubenswrapper[4789]: E0202 21:21:05.325241 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 21:22:09.325229731 +0000 UTC m=+149.620254850 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.325289 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:21:05 crc kubenswrapper[4789]: E0202 21:21:05.325378 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 21:21:05 crc kubenswrapper[4789]: E0202 21:21:05.325414 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 21:21:05 crc kubenswrapper[4789]: E0202 21:21:05.325433 4789 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 21:21:05 crc kubenswrapper[4789]: E0202 21:21:05.325497 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 21:21:05 crc kubenswrapper[4789]: E0202 21:21:05.325524 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 21:21:05 crc kubenswrapper[4789]: E0202 21:21:05.325542 4789 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 21:21:05 crc kubenswrapper[4789]: E0202 21:21:05.325505 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 21:22:09.325483228 +0000 UTC m=+149.620508277 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 21:21:05 crc kubenswrapper[4789]: E0202 21:21:05.325638 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 21:22:09.325619072 +0000 UTC m=+149.620644131 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.351039 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.351128 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.351149 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.351184 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.351203 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:05Z","lastTransitionTime":"2026-02-02T21:21:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.418842 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.418887 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.418979 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:21:05 crc kubenswrapper[4789]: E0202 21:21:05.419211 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:21:05 crc kubenswrapper[4789]: E0202 21:21:05.419668 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:21:05 crc kubenswrapper[4789]: E0202 21:21:05.419708 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.420123 4789 scope.go:117] "RemoveContainer" containerID="8afae9b540e38d08b31eca754dc56d77b5477c42e30bd8aa7ff2200857e9906b" Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.454554 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.454920 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.454931 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.454949 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.454986 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:05Z","lastTransitionTime":"2026-02-02T21:21:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.558231 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.558287 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.558300 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.558316 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.558326 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:05Z","lastTransitionTime":"2026-02-02T21:21:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.660891 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.660930 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.660942 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.660958 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.660970 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:05Z","lastTransitionTime":"2026-02-02T21:21:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.768349 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.768395 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.768406 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.768424 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.768436 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:05Z","lastTransitionTime":"2026-02-02T21:21:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.797144 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 06:37:45.551383643 +0000 UTC Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.881110 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.881154 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.881164 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.881179 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.881189 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:05Z","lastTransitionTime":"2026-02-02T21:21:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.984206 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.984254 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.984266 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.984285 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:05 crc kubenswrapper[4789]: I0202 21:21:05.984295 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:05Z","lastTransitionTime":"2026-02-02T21:21:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.086715 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.086770 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.086787 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.086813 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.086863 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:06Z","lastTransitionTime":"2026-02-02T21:21:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.189417 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.189461 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.189473 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.189493 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.189507 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:06Z","lastTransitionTime":"2026-02-02T21:21:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.292622 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.292665 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.292678 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.292697 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.292711 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:06Z","lastTransitionTime":"2026-02-02T21:21:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.301646 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w8vkt_2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6/ovnkube-controller/2.log" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.305551 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" event={"ID":"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6","Type":"ContainerStarted","Data":"877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc"} Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.306336 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.328940 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af0ef1e0-7fcc-4fa6-8463-a0f5a9145c2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3c3d527f77f26e052c4ef9c0577938dc23c802918e742b9fb9020cb6ba705f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eface13a61e8f9d1e8e9512c78a4c70973bfad708c3cdea7f7251f6fa408a59f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed17431bc8880523ea84349c68ea56e3890
33b550390d88e60373f663d1491f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d78edf2f7e647edfcc306a3feff71c983907077ad05a684af9fa2990deaf3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d78edf2f7e647edfcc306a3feff71c983907077ad05a684af9fa2990deaf3b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:06Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.351932 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:06Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.365536 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:06Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.383192 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:06Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.396124 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.396161 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.396175 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.396196 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.396208 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:06Z","lastTransitionTime":"2026-02-02T21:21:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.400852 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1716ba11e1b21eb68642e6935312760c389a669a007822290fbe72573bfaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:06Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.416545 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39528981-2c85-43f3-8fa0-bfae5c3334cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ae303f0f4381207f4dd4a443e366d6e3de2014e9bc69aa644e98a76b239868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277fe88585ee146931597a14fe049a3d69197c94e0d84f5dfb334b08cd685723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:
15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d49gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:06Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.418800 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:21:06 crc kubenswrapper[4789]: E0202 21:21:06.418925 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.430546 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vjbpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dc26662-64d3-47f0-9e0d-d340760ca348\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vjbpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:06Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.444743 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:06Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.463716 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:06Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.476784 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:06Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.492095 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:06Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.499272 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.499329 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.499348 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.499371 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.499393 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:06Z","lastTransitionTime":"2026-02-02T21:21:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.505924 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b25d791-42e1-4e08-b7da-41803cc40f4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e2a17d3ade1aa229b2729d66fe953acc746673549e0d929076a3bd18839833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-snjrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:06Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.520625 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf73e052-94a2-472e-88e9-63ab3a8d428b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c523dec61c09703463bce6b000fb79c832b3c190a960fe0097b654fd672477c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1470060c44a356a82b453ed22ef5c3841993bce37eba8523a13c49331499224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1470060c44a356a82b453ed22ef5c3841993bce37eba8523a13c49331499224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:06Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.536722 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66a4db3799101ccca8a89d6bfd2c9d36940b8710ee3d256e47cd61cfe6ac7c07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:06Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.560644 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff0380f801c6909a82bfc8a765cf7a393d7c2f7cc809611ded737a22f448a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:06Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.584315 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8afae9b540e38d08b31eca754dc56d77b5477c42e30bd8aa7ff2200857e9906b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T21:20:38Z\\\",\\\"message\\\":\\\"er\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.93\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0202 21:20:38.597725 6510 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm\\\\nI0202 21:20:38.597738 6510 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm in node crc\\\\nF0202 21:20:38.597745 6510 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: 
In\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:21:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:06Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.608561 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.608692 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.609293 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.609365 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.609409 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fb5d89e-c901-4857-a5e8-b4d8e3701714\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d85613f300f8d008d919038b40efff3fb67e228e25959cbfd2424640c6bc7a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460e055777327e1734238bf51929751678ea5a757b00a344daa31c8c95e4c463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c292754c81dd4929f8e599e8ec352fd5e90431c60bb741c070a10ffa91fad72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3685c506f77d3807711a442d68da94932064737cfdc5cb74557da135906e24f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:06Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.609486 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:06Z","lastTransitionTime":"2026-02-02T21:21:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.628818 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75cf318c3d63c5316cbeba8abb93919973f88b415ed7116b55333813b8a889fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T21:20:51Z\\\",\\\"message\\\":\\\"2026-02-02T21:20:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3308657d-555b-440d-96c9-7656e8ca7e3e\\\\n2026-02-02T21:20:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3308657d-555b-440d-96c9-7656e8ca7e3e to /host/opt/cni/bin/\\\\n2026-02-02T21:20:06Z [verbose] multus-daemon started\\\\n2026-02-02T21:20:06Z [verbose] Readiness Indicator file check\\\\n2026-02-02T21:20:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:06Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.714058 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.714124 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.714143 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.714167 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.714183 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:06Z","lastTransitionTime":"2026-02-02T21:21:06Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.798180 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 22:29:29.888907403 +0000 UTC Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.816878 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.816938 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.816958 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.816979 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.816992 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:06Z","lastTransitionTime":"2026-02-02T21:21:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.920732 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.920806 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.920824 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.920848 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:06 crc kubenswrapper[4789]: I0202 21:21:06.920865 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:06Z","lastTransitionTime":"2026-02-02T21:21:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.023136 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.023243 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.023260 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.023290 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.023302 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:07Z","lastTransitionTime":"2026-02-02T21:21:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.126538 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.126723 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.126760 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.126792 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.126889 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:07Z","lastTransitionTime":"2026-02-02T21:21:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.230549 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.230627 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.230645 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.230667 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.230688 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:07Z","lastTransitionTime":"2026-02-02T21:21:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.312346 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w8vkt_2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6/ovnkube-controller/3.log" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.313377 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w8vkt_2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6/ovnkube-controller/2.log" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.317140 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" event={"ID":"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6","Type":"ContainerDied","Data":"877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc"} Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.317146 4789 generic.go:334] "Generic (PLEG): container finished" podID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerID="877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc" exitCode=1 Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.317240 4789 scope.go:117] "RemoveContainer" containerID="8afae9b540e38d08b31eca754dc56d77b5477c42e30bd8aa7ff2200857e9906b" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.318304 4789 scope.go:117] "RemoveContainer" containerID="877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc" Feb 02 21:21:07 crc kubenswrapper[4789]: E0202 21:21:07.318702 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-w8vkt_openshift-ovn-kubernetes(2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.333042 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.333097 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.333115 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.333139 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.333157 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:07Z","lastTransitionTime":"2026-02-02T21:21:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.336715 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39528981-2c85-43f3-8fa0-bfae5c3334cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ae303f0f4381207f4dd4a443e366d6e3de2014e9bc69aa644e98a76b239868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277fe88585ee146931597a14fe049a3d69197c94e0d84f5dfb334b08cd685723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d49gm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.351970 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vjbpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dc26662-64d3-47f0-9e0d-d340760ca348\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vjbpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:07 crc 
kubenswrapper[4789]: I0202 21:21:07.370025 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.387190 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.402171 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.416809 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.419017 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.419063 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.419092 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:21:07 crc kubenswrapper[4789]: E0202 21:21:07.419183 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:21:07 crc kubenswrapper[4789]: E0202 21:21:07.419256 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:21:07 crc kubenswrapper[4789]: E0202 21:21:07.419355 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.434614 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1716ba11e1b21eb68642e6935312760c389a669a007822290fbe72573bfaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.435999 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.436189 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.436321 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.436367 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.436619 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:07Z","lastTransitionTime":"2026-02-02T21:21:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.450683 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf73e052-94a2-472e-88e9-63ab3a8d428b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c523dec61c09703463bce6b000fb79c832b3c190a960fe0097b654fd672477c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1470060c44a356a82b453ed22ef5c3841993bce37eba8523a13c49331499224\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1470060c44a356a82b453ed22ef5c3841993bce37eba8523a13c49331499224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.475642 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e63
55e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66a4db3799101ccca8a89d6bfd2c9d36940b8710ee3d256e47cd61cfe6ac7c07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 
UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.494539 4789 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff0380f801c6909a82bfc8a765cf7a393d7c2f7cc809611ded737a22f448a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.529515 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8afae9b540e38d08b31eca754dc56d77b5477c42e30bd8aa7ff2200857e9906b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T21:20:38Z\\\",\\\"message\\\":\\\"er\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.93\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0202 21:20:38.597725 6510 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm\\\\nI0202 21:20:38.597738 6510 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm in node crc\\\\nF0202 21:20:38.597745 6510 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: In\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T21:21:06Z\\\",\\\"message\\\":\\\"Removed *v1.EgressIP event handler 8\\\\nI0202 
21:21:06.324486 6920 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 21:21:06.324713 6920 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 21:21:06.325983 6920 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 21:21:06.326123 6920 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 21:21:06.327135 6920 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 21:21:06.327379 6920 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0202 21:21:06.328141 6920 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:21:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.539271 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.539305 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.539317 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.539335 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.539347 4789 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:07Z","lastTransitionTime":"2026-02-02T21:21:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.549776 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b25d791-42e1-4e08-b7da-41803cc40f4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e2a17d3ade1aa229b2729d66fe953acc746673549e0d929076a3bd18839833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-snjrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.571409 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fb5d89e-c901-4857-a5e8-b4d8e3701714\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d85613f300f8d008d919038b40efff3fb67e228e25959cbfd2424640c6bc7a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460e055777327e1734238bf51929751678ea5a757b00a344daa31c8c95e4c463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c292754c81dd4929f8e599e8ec352fd5e90431c60bb741c070a10ffa91fad72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3685c506f77d3807711a442d68da94932064737cfdc5cb74557da135906e24f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.588683 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75cf318c3d63c5316cbeba8abb93919973f88b415ed7116b55333813b8a889fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T21:20:51Z\\\",\\\"message\\\":\\\"2026-02-02T21:20:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3308657d-555b-440d-96c9-7656e8ca7e3e\\\\n2026-02-02T21:20:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3308657d-555b-440d-96c9-7656e8ca7e3e to /host/opt/cni/bin/\\\\n2026-02-02T21:20:06Z [verbose] multus-daemon started\\\\n2026-02-02T21:20:06Z [verbose] Readiness 
Indicator file check\\\\n2026-02-02T21:20:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.602987 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af0ef1e0-7fcc-4fa6-8463-a0f5a9145c2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3c3d527f77f26e052c4ef9c0577938dc23c802918e742b9fb9020cb6ba705f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eface13a61e8f9d1e8e9512c78a4c70973bfad708c3cdea7f7251f6fa408a59f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed17431bc8880523ea84349c68ea56e389033b550390d88e60373f663d1491f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d78edf2f7e647edfcc306a3feff71c983907077ad05a684af9fa2990deaf3b3\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d78edf2f7e647edfcc306a3feff71c983907077ad05a684af9fa2990deaf3b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.628414 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:07Z is after 
2025-08-24T17:21:41Z" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.642324 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.642425 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.642449 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.642485 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.642511 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:07Z","lastTransitionTime":"2026-02-02T21:21:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.644058 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.667532 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:07Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.746261 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.746797 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.746835 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.746869 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:07 crc 
kubenswrapper[4789]: I0202 21:21:07.746893 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:07Z","lastTransitionTime":"2026-02-02T21:21:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.799067 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 23:24:42.627071346 +0000 UTC Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.850280 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.850324 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.850336 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.850353 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.850366 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:07Z","lastTransitionTime":"2026-02-02T21:21:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.953069 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.953138 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.953157 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.953182 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:07 crc kubenswrapper[4789]: I0202 21:21:07.953200 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:07Z","lastTransitionTime":"2026-02-02T21:21:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.056240 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.056294 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.056311 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.056335 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.056354 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:08Z","lastTransitionTime":"2026-02-02T21:21:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.159668 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.159736 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.159755 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.159781 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.159799 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:08Z","lastTransitionTime":"2026-02-02T21:21:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.262629 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.262928 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.263016 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.263101 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.263181 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:08Z","lastTransitionTime":"2026-02-02T21:21:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.323415 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w8vkt_2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6/ovnkube-controller/3.log" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.327431 4789 scope.go:117] "RemoveContainer" containerID="877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc" Feb 02 21:21:08 crc kubenswrapper[4789]: E0202 21:21:08.327574 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-w8vkt_openshift-ovn-kubernetes(2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.343340 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fb5d89e-c901-4857-a5e8-b4d8e3701714\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d85613f300f8d008d919038b40efff3fb67e228e25959cbfd2424640c6bc7a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460e055777327e1734238bf51929751678ea5a757b00a344daa31c8c95e4c463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":
\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c292754c81dd4929f8e599e8ec352fd5e90431c60bb741c070a10ffa91fad72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3685c506f77d3807711a442d68da94932064737cfdc5cb74557da135906e24f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.358495 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75cf318c3d63c5316cbeba8abb93919973f88b415ed7116b55333813b8a889fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T21:20:51Z\\\",\\\"message\\\":\\\"2026-02-02T21:20:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3308657d-555b-440d-96c9-7656e8ca7e3e\\\\n2026-02-02T21:20:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3308657d-555b-440d-96c9-7656e8ca7e3e to /host/opt/cni/bin/\\\\n2026-02-02T21:20:06Z [verbose] multus-daemon started\\\\n2026-02-02T21:20:06Z [verbose] Readiness Indicator file check\\\\n2026-02-02T21:20:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.370528 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.370572 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.370625 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.370642 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.370655 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:08Z","lastTransitionTime":"2026-02-02T21:21:08Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.374533 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af0ef1e0-7fcc-4fa6-8463-a0f5a9145c2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3c3d527f77f26e052c4ef9c0577938dc23c802918e742b9fb9020cb6ba705f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eface13a61e8f9d1e8e9512c78a4c70973bfad708c3cdea7f7251f6fa408a59f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed17431bc8880523ea84349c68ea56e389033b550390d88e60373f663d1491f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d78edf2f7e647edfcc306a3feff71c983907077ad05a684af9fa2990deaf3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d78edf2f7e647edfcc306a3feff71c983907077ad05a684af9fa2990deaf3b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.388900 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.399496 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.414115 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.419412 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:21:08 crc kubenswrapper[4789]: E0202 21:21:08.419988 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.432391 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.452264 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.471568 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.474069 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.474125 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.474143 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.474167 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.474185 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:08Z","lastTransitionTime":"2026-02-02T21:21:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.489843 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.506358 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1716ba11e1b21eb68642e6935312760c389a669a007822290fbe72573bfaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.519034 4789 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39528981-2c85-43f3-8fa0-bfae5c3334cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ae303f0f4381207f4dd4a443e366d6e3de2014e9bc69aa644e98a76b239868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277fe88585ee146931597a14fe049a3d69197c94e0d84f5dfb334b08cd685723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d49gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.535228 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vjbpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dc26662-64d3-47f0-9e0d-d340760ca348\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vjbpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.552172 4789 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf73e052-94a2-472e-88e9-63ab3a8d428b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c523dec61c09703463bce6b000fb79c832b3c190a960fe0097b654fd672477c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1470060c44a356a82b453ed22ef5c3841993bce37eba8523a13c49331499224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1470060c44a356a82b453ed22ef5c3841993bce37eba8523a13c49331499224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.564109 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66a4db3799101ccca8a89d6bfd2c9d36940b8710ee3d256e47cd61cfe6ac7c07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.577529 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.577758 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.577845 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.577913 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.577974 4789 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:08Z","lastTransitionTime":"2026-02-02T21:21:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.580258 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff0380f801c6909a82bfc8a765cf7a393d7c2f7cc809611ded737a22f448a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"co
ntainerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.601216 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d2411
0e583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T21:21:06Z\\\",\\\"message\\\":\\\"Removed *v1.EgressIP event handler 8\\\\nI0202 21:21:06.324486 6920 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 21:21:06.324713 6920 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 21:21:06.325983 6920 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 21:21:06.326123 6920 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 21:21:06.327135 6920 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 21:21:06.327379 6920 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0202 21:21:06.328141 6920 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:21:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-w8vkt_openshift-ovn-kubernetes(2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.613834 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlsw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b25d791-42e1-4e08-b7da-41803cc40f4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e2a17d3ade1aa229b2729d66fe953acc746673549e0d929076a3bd18839833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-snjrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:08Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.680263 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.680315 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.680329 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.680345 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.680357 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:08Z","lastTransitionTime":"2026-02-02T21:21:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.783051 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.783098 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.783117 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.783142 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.783159 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:08Z","lastTransitionTime":"2026-02-02T21:21:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.799769 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 19:01:44.312093948 +0000 UTC
Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.891880 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.891938 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.891957 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.891980 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.891998 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:08Z","lastTransitionTime":"2026-02-02T21:21:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.995866 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.996008 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.996029 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.996056 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:21:08 crc kubenswrapper[4789]: I0202 21:21:08.996074 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:08Z","lastTransitionTime":"2026-02-02T21:21:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.092120 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.092172 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.092183 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.092199 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.092213 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:09Z","lastTransitionTime":"2026-02-02T21:21:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:09 crc kubenswrapper[4789]: E0202 21:21:09.111502 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"007d0037-9447-42ea-b3a4-6e1f0d669307\\\",\\\"systemUUID\\\":\\\"53ecbfdd-0b43-4d74-98ca-c7bcbc951d86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:09Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.115984 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.116042 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.116063 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.116087 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.116104 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:09Z","lastTransitionTime":"2026-02-02T21:21:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:09 crc kubenswrapper[4789]: E0202 21:21:09.136620 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"007d0037-9447-42ea-b3a4-6e1f0d669307\\\",\\\"systemUUID\\\":\\\"53ecbfdd-0b43-4d74-98ca-c7bcbc951d86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:09Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.141078 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.141129 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.141147 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.141170 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.141188 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:09Z","lastTransitionTime":"2026-02-02T21:21:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:09 crc kubenswrapper[4789]: E0202 21:21:09.159465 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"007d0037-9447-42ea-b3a4-6e1f0d669307\\\",\\\"systemUUID\\\":\\\"53ecbfdd-0b43-4d74-98ca-c7bcbc951d86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:09Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.164141 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.164197 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.164209 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.164225 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.164239 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:09Z","lastTransitionTime":"2026-02-02T21:21:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:09 crc kubenswrapper[4789]: E0202 21:21:09.183094 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"007d0037-9447-42ea-b3a4-6e1f0d669307\\\",\\\"systemUUID\\\":\\\"53ecbfdd-0b43-4d74-98ca-c7bcbc951d86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:09Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.187517 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.187554 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.187566 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.187611 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.187627 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:09Z","lastTransitionTime":"2026-02-02T21:21:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:09 crc kubenswrapper[4789]: E0202 21:21:09.207688 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"007d0037-9447-42ea-b3a4-6e1f0d669307\\\",\\\"systemUUID\\\":\\\"53ecbfdd-0b43-4d74-98ca-c7bcbc951d86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:09Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:09 crc kubenswrapper[4789]: E0202 21:21:09.207838 4789 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.209766 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.209816 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.209834 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.209857 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.209876 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:09Z","lastTransitionTime":"2026-02-02T21:21:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.313214 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.313268 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.313279 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.313297 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.313314 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:09Z","lastTransitionTime":"2026-02-02T21:21:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.416193 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.416271 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.416297 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.416327 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.416351 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:09Z","lastTransitionTime":"2026-02-02T21:21:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.418770 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.418838 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.418886 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:21:09 crc kubenswrapper[4789]: E0202 21:21:09.419001 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:21:09 crc kubenswrapper[4789]: E0202 21:21:09.419121 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:21:09 crc kubenswrapper[4789]: E0202 21:21:09.419275 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.519007 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.519063 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.519083 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.519108 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.519124 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:09Z","lastTransitionTime":"2026-02-02T21:21:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.622282 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.622342 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.622362 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.622387 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.622404 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:09Z","lastTransitionTime":"2026-02-02T21:21:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.725621 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.725682 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.725700 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.725724 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.725748 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:09Z","lastTransitionTime":"2026-02-02T21:21:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.800363 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 17:22:01.619568108 +0000 UTC Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.828673 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.828729 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.828748 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.828773 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.828793 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:09Z","lastTransitionTime":"2026-02-02T21:21:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.932456 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.932513 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.932530 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.932554 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:09 crc kubenswrapper[4789]: I0202 21:21:09.932571 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:09Z","lastTransitionTime":"2026-02-02T21:21:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.035857 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.035919 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.035934 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.035955 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.035972 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:10Z","lastTransitionTime":"2026-02-02T21:21:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.139351 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.139412 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.139432 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.139455 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.139472 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:10Z","lastTransitionTime":"2026-02-02T21:21:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.242557 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.242638 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.242656 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.242682 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.242699 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:10Z","lastTransitionTime":"2026-02-02T21:21:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.346119 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.346177 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.346195 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.346217 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.346233 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:10Z","lastTransitionTime":"2026-02-02T21:21:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.421132 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:21:10 crc kubenswrapper[4789]: E0202 21:21:10.421287 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.437791 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1716ba11e1b21eb68642e6935312760c389a669a007822290fbe72573bfaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.449371 4789 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.449454 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.449479 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.449510 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.449536 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:10Z","lastTransitionTime":"2026-02-02T21:21:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.458096 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39528981-2c85-43f3-8fa0-bfae5c3334cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ae303f0f4381207f4dd4a443e366d6e3de2014e9bc69aa644e98a76b239868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277fe88585ee146931597a14fe049a3d69197c94e0d84f5dfb334b08cd685723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d49gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.472299 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vjbpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dc26662-64d3-47f0-9e0d-d340760ca348\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vjbpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.490931 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.508863 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.528169 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.546286 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.552080 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.552135 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.552155 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.552180 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.552202 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:10Z","lastTransitionTime":"2026-02-02T21:21:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.562510 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b25d791-42e1-4e08-b7da-41803cc40f4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e2a17d3ade1aa229b2729d66fe953acc746673549e0d929076a3bd18839833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-snjrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.578611 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf73e052-94a2-472e-88e9-63ab3a8d428b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c523dec61c09703463bce6b000fb79c832b3c190a960fe0097b654fd672477c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1470060c44a356a82b453ed22ef5c3841993bce37eba8523a13c49331499224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1470060c44a356a82b453ed22ef5c3841993bce37eba8523a13c49331499224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.602520 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66a4db3799101ccca8a89d6bfd2c9d36940b8710ee3d256e47cd61cfe6ac7c07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.628822 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff0380f801c6909a82bfc8a765cf7a393d7c2f7cc809611ded737a22f448a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.657393 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.657433 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:10 crc 
kubenswrapper[4789]: I0202 21:21:10.657449 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.657467 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.657481 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:10Z","lastTransitionTime":"2026-02-02T21:21:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.661569 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://877f8c63dfc6ee1e69f7f0bc64e5d2896384f737
678efb01ad9d27b9030a5fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T21:21:06Z\\\",\\\"message\\\":\\\"Removed *v1.EgressIP event handler 8\\\\nI0202 21:21:06.324486 6920 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 21:21:06.324713 6920 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 21:21:06.325983 6920 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 21:21:06.326123 6920 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 21:21:06.327135 6920 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 21:21:06.327379 6920 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0202 21:21:06.328141 6920 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:21:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w8vkt_openshift-ovn-kubernetes(2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.683889 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fb5d89e-c901-4857-a5e8-b4d8e3701714\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d85613f300f8d008d919038b40efff3fb67e228e25959cbfd2424640c6bc7a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460e055777327e1734238bf51929751678ea5a757b00a344daa31c8c95e4c463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c292754c81dd4929f8e599e8ec352fd5e90431c60bb741c070a10ffa91fad72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3685c506f77d3807711a442d68da94932064737cfdc5cb74557da135906e24f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.702209 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75cf318c3d63c5316cbeba8abb93919973f88b415ed7116b55333813b8a889fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T21:20:51Z\\\",\\\"message\\\":\\\"2026-02-02T21:20:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3308657d-555b-440d-96c9-7656e8ca7e3e\\\\n2026-02-02T21:20:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3308657d-555b-440d-96c9-7656e8ca7e3e to /host/opt/cni/bin/\\\\n2026-02-02T21:20:06Z [verbose] multus-daemon started\\\\n2026-02-02T21:20:06Z [verbose] Readiness Indicator file check\\\\n2026-02-02T21:20:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.721545 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af0ef1e0-7fcc-4fa6-8463-a0f5a9145c2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3c3d527f77f26e052c4ef9c0577938dc23c802918e742b9fb9020cb6ba705f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eface13a61e8f9d1e8e9512c78a4c70973bfad708c3cdea7f7251f6fa408a59f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed17431bc8880523ea84349c68ea56e389033b550390d88e60373f663d1491f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d78edf2f7e647edfcc306a3feff71c983907077ad05a684af9fa2990deaf3b3\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d78edf2f7e647edfcc306a3feff71c983907077ad05a684af9fa2990deaf3b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.744483 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:10Z is after 
2025-08-24T17:21:41Z" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.760916 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.760986 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.761010 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.761042 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.761066 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:10Z","lastTransitionTime":"2026-02-02T21:21:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.762379 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.780329 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:10Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.800570 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 04:38:39.046610391 +0000 UTC Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.864243 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.864289 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.864298 4789 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.864314 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.864324 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:10Z","lastTransitionTime":"2026-02-02T21:21:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.969090 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.969125 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.969134 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.969149 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:10 crc kubenswrapper[4789]: I0202 21:21:10.969158 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:10Z","lastTransitionTime":"2026-02-02T21:21:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.073129 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.073197 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.073220 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.073248 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.073271 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:11Z","lastTransitionTime":"2026-02-02T21:21:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.176359 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.176431 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.176452 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.176476 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.176494 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:11Z","lastTransitionTime":"2026-02-02T21:21:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.279433 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.279505 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.279525 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.279548 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.279567 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:11Z","lastTransitionTime":"2026-02-02T21:21:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.383422 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.383984 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.384246 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.384490 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.384816 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:11Z","lastTransitionTime":"2026-02-02T21:21:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.418694 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:21:11 crc kubenswrapper[4789]: E0202 21:21:11.418816 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.418881 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:21:11 crc kubenswrapper[4789]: E0202 21:21:11.419201 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.418706 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:21:11 crc kubenswrapper[4789]: E0202 21:21:11.419662 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.487961 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.487998 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.488009 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.488027 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.488041 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:11Z","lastTransitionTime":"2026-02-02T21:21:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.591014 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.591077 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.591094 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.591117 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.591134 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:11Z","lastTransitionTime":"2026-02-02T21:21:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.698755 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.698819 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.698849 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.698875 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.698896 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:11Z","lastTransitionTime":"2026-02-02T21:21:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.801123 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 10:22:49.853040096 +0000 UTC Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.802921 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.802980 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.802998 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.803022 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.803041 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:11Z","lastTransitionTime":"2026-02-02T21:21:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.906088 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.906153 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.906170 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.906204 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:11 crc kubenswrapper[4789]: I0202 21:21:11.906221 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:11Z","lastTransitionTime":"2026-02-02T21:21:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.010490 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.010539 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.010559 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.010601 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.010614 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:12Z","lastTransitionTime":"2026-02-02T21:21:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.113559 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.113684 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.113724 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.113757 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.113781 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:12Z","lastTransitionTime":"2026-02-02T21:21:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.217340 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.217475 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.217506 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.217653 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.217686 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:12Z","lastTransitionTime":"2026-02-02T21:21:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.321407 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.321465 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.321484 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.321506 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.321523 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:12Z","lastTransitionTime":"2026-02-02T21:21:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.418882 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:21:12 crc kubenswrapper[4789]: E0202 21:21:12.419135 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.425198 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.425258 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.425280 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.425307 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.425325 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:12Z","lastTransitionTime":"2026-02-02T21:21:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.529136 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.529202 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.529222 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.529247 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.529264 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:12Z","lastTransitionTime":"2026-02-02T21:21:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.633169 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.633248 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.633266 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.633300 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.633325 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:12Z","lastTransitionTime":"2026-02-02T21:21:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.736940 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.737004 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.737020 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.737044 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.737062 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:12Z","lastTransitionTime":"2026-02-02T21:21:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.801654 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 06:26:26.898160807 +0000 UTC Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.839831 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.839881 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.839898 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.839921 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.839938 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:12Z","lastTransitionTime":"2026-02-02T21:21:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.942791 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.942864 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.942890 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.942922 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:12 crc kubenswrapper[4789]: I0202 21:21:12.942946 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:12Z","lastTransitionTime":"2026-02-02T21:21:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.047068 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.047141 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.047159 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.047187 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.047207 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:13Z","lastTransitionTime":"2026-02-02T21:21:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.150859 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.150935 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.150953 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.150980 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.150998 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:13Z","lastTransitionTime":"2026-02-02T21:21:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.254622 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.254676 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.254688 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.254709 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.254721 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:13Z","lastTransitionTime":"2026-02-02T21:21:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.357708 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.357795 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.357828 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.357858 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.357880 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:13Z","lastTransitionTime":"2026-02-02T21:21:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.418605 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.418714 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:21:13 crc kubenswrapper[4789]: E0202 21:21:13.418762 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.418863 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:21:13 crc kubenswrapper[4789]: E0202 21:21:13.419109 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:21:13 crc kubenswrapper[4789]: E0202 21:21:13.419284 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.461685 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.461751 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.461775 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.461807 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.461832 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:13Z","lastTransitionTime":"2026-02-02T21:21:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.565416 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.565483 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.565509 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.565542 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.565561 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:13Z","lastTransitionTime":"2026-02-02T21:21:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.669717 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.669799 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.669820 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.669845 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.669862 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:13Z","lastTransitionTime":"2026-02-02T21:21:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.774945 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.775025 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.775049 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.775081 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.775103 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:13Z","lastTransitionTime":"2026-02-02T21:21:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.801990 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 14:46:03.103634114 +0000 UTC Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.877662 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.877743 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.877769 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.877805 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.877825 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:13Z","lastTransitionTime":"2026-02-02T21:21:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.980921 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.981016 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.981039 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.981071 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:13 crc kubenswrapper[4789]: I0202 21:21:13.981100 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:13Z","lastTransitionTime":"2026-02-02T21:21:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.084313 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.084381 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.084405 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.084436 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.084459 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:14Z","lastTransitionTime":"2026-02-02T21:21:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.187666 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.187738 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.187764 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.187791 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.187809 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:14Z","lastTransitionTime":"2026-02-02T21:21:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.290698 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.290765 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.290784 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.290810 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.290829 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:14Z","lastTransitionTime":"2026-02-02T21:21:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.401391 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.401435 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.401445 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.401460 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.401470 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:14Z","lastTransitionTime":"2026-02-02T21:21:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.418764 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:21:14 crc kubenswrapper[4789]: E0202 21:21:14.419140 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.504749 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.504810 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.504831 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.504855 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.504873 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:14Z","lastTransitionTime":"2026-02-02T21:21:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.607610 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.607655 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.607668 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.607684 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.607696 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:14Z","lastTransitionTime":"2026-02-02T21:21:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.710873 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.711004 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.711033 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.711064 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.711088 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:14Z","lastTransitionTime":"2026-02-02T21:21:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.802694 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 12:58:26.141389021 +0000 UTC Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.814048 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.814101 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.814120 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.814144 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.814162 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:14Z","lastTransitionTime":"2026-02-02T21:21:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.917128 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.917184 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.917205 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.917233 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:14 crc kubenswrapper[4789]: I0202 21:21:14.917259 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:14Z","lastTransitionTime":"2026-02-02T21:21:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.020364 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.020419 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.020430 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.020449 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.020462 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:15Z","lastTransitionTime":"2026-02-02T21:21:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.123042 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.123112 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.123134 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.123159 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.123175 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:15Z","lastTransitionTime":"2026-02-02T21:21:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.228124 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.228479 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.228664 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.228859 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.229010 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:15Z","lastTransitionTime":"2026-02-02T21:21:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.331726 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.331804 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.331829 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.331861 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.331885 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:15Z","lastTransitionTime":"2026-02-02T21:21:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.418830 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.418870 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:21:15 crc kubenswrapper[4789]: E0202 21:21:15.418996 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.419071 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:21:15 crc kubenswrapper[4789]: E0202 21:21:15.419264 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:21:15 crc kubenswrapper[4789]: E0202 21:21:15.419365 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.434999 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.435285 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.435432 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.435569 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.435753 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:15Z","lastTransitionTime":"2026-02-02T21:21:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.538163 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.538226 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.538249 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.538279 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.538304 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:15Z","lastTransitionTime":"2026-02-02T21:21:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.641836 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.641911 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.641928 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.641956 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.641973 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:15Z","lastTransitionTime":"2026-02-02T21:21:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.744773 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.744876 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.744895 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.744957 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.744977 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:15Z","lastTransitionTime":"2026-02-02T21:21:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.803365 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 15:04:49.508118484 +0000 UTC Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.847531 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.847619 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.847638 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.847664 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.847692 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:15Z","lastTransitionTime":"2026-02-02T21:21:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.950329 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.950369 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.950385 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.950407 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:15 crc kubenswrapper[4789]: I0202 21:21:15.950424 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:15Z","lastTransitionTime":"2026-02-02T21:21:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.054101 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.054156 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.054174 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.054230 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.054250 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:16Z","lastTransitionTime":"2026-02-02T21:21:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.157353 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.157435 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.157460 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.157490 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.157513 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:16Z","lastTransitionTime":"2026-02-02T21:21:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.260218 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.260277 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.260294 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.260318 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.260335 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:16Z","lastTransitionTime":"2026-02-02T21:21:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.363380 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.363437 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.363459 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.363486 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.363506 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:16Z","lastTransitionTime":"2026-02-02T21:21:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.419268 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:21:16 crc kubenswrapper[4789]: E0202 21:21:16.419498 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.466486 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.466555 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.466612 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.466639 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.466659 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:16Z","lastTransitionTime":"2026-02-02T21:21:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.570083 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.570148 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.570165 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.570190 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.570207 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:16Z","lastTransitionTime":"2026-02-02T21:21:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.673505 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.673558 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.673765 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.673797 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.673856 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:16Z","lastTransitionTime":"2026-02-02T21:21:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.777687 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.777792 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.777811 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.777882 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.777909 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:16Z","lastTransitionTime":"2026-02-02T21:21:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.804243 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 22:23:23.065202079 +0000 UTC Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.880679 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.880739 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.880756 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.880783 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.880808 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:16Z","lastTransitionTime":"2026-02-02T21:21:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.983690 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.983783 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.983810 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.983840 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:16 crc kubenswrapper[4789]: I0202 21:21:16.983859 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:16Z","lastTransitionTime":"2026-02-02T21:21:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.086888 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.086959 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.086977 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.087001 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.087018 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:17Z","lastTransitionTime":"2026-02-02T21:21:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.189673 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.189849 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.189869 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.189894 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.189912 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:17Z","lastTransitionTime":"2026-02-02T21:21:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.293482 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.293548 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.293566 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.293617 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.293635 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:17Z","lastTransitionTime":"2026-02-02T21:21:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.396727 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.396790 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.396811 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.396839 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.396858 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:17Z","lastTransitionTime":"2026-02-02T21:21:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.419296 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.419312 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.419397 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:21:17 crc kubenswrapper[4789]: E0202 21:21:17.419554 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:21:17 crc kubenswrapper[4789]: E0202 21:21:17.419738 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:21:17 crc kubenswrapper[4789]: E0202 21:21:17.420137 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.499998 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.500046 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.500063 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.500084 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.500102 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:17Z","lastTransitionTime":"2026-02-02T21:21:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.603027 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.603124 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.603143 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.603200 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.603219 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:17Z","lastTransitionTime":"2026-02-02T21:21:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.706160 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.706229 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.706246 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.706271 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.706291 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:17Z","lastTransitionTime":"2026-02-02T21:21:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.804871 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 16:54:19.862776354 +0000 UTC Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.809468 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.809521 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.809539 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.809564 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.809610 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:17Z","lastTransitionTime":"2026-02-02T21:21:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.913150 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.913218 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.913236 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.913260 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:17 crc kubenswrapper[4789]: I0202 21:21:17.913277 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:17Z","lastTransitionTime":"2026-02-02T21:21:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.016341 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.016412 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.016435 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.016459 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.016479 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:18Z","lastTransitionTime":"2026-02-02T21:21:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.119819 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.119870 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.119878 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.119893 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.119902 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:18Z","lastTransitionTime":"2026-02-02T21:21:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.222066 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.222134 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.222153 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.222183 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.222204 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:18Z","lastTransitionTime":"2026-02-02T21:21:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.324484 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.324551 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.324574 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.324643 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.324662 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:18Z","lastTransitionTime":"2026-02-02T21:21:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.419649 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:21:18 crc kubenswrapper[4789]: E0202 21:21:18.420015 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.427150 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.427224 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.427235 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.427254 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.427266 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:18Z","lastTransitionTime":"2026-02-02T21:21:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.530868 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.530925 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.530942 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.530964 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.530981 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:18Z","lastTransitionTime":"2026-02-02T21:21:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.633968 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.634021 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.634043 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.634071 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.634092 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:18Z","lastTransitionTime":"2026-02-02T21:21:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.736506 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.736668 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.736699 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.736732 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.736759 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:18Z","lastTransitionTime":"2026-02-02T21:21:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.805916 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 20:53:53.980236381 +0000 UTC Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.839759 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.839821 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.839838 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.839864 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.839881 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:18Z","lastTransitionTime":"2026-02-02T21:21:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.943017 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.943081 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.943105 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.943135 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:18 crc kubenswrapper[4789]: I0202 21:21:18.943157 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:18Z","lastTransitionTime":"2026-02-02T21:21:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.045751 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.045790 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.045800 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.045815 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.045826 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:19Z","lastTransitionTime":"2026-02-02T21:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.149024 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.149095 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.149105 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.149126 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.149137 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:19Z","lastTransitionTime":"2026-02-02T21:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.252394 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.252459 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.252477 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.252505 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.252523 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:19Z","lastTransitionTime":"2026-02-02T21:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.355388 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.355440 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.355455 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.355477 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.355492 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:19Z","lastTransitionTime":"2026-02-02T21:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.419363 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.419393 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.419424 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:21:19 crc kubenswrapper[4789]: E0202 21:21:19.419481 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:21:19 crc kubenswrapper[4789]: E0202 21:21:19.419549 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:21:19 crc kubenswrapper[4789]: E0202 21:21:19.419655 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.457920 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.457955 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.457964 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.457975 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.457983 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:19Z","lastTransitionTime":"2026-02-02T21:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.483924 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.483973 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.483983 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.483998 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.484005 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:19Z","lastTransitionTime":"2026-02-02T21:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:19 crc kubenswrapper[4789]: E0202 21:21:19.498460 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"007d0037-9447-42ea-b3a4-6e1f0d669307\\\",\\\"systemUUID\\\":\\\"53ecbfdd-0b43-4d74-98ca-c7bcbc951d86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:19Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.501984 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.502015 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.502025 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.502040 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.502051 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:19Z","lastTransitionTime":"2026-02-02T21:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:19 crc kubenswrapper[4789]: E0202 21:21:19.514849 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"007d0037-9447-42ea-b3a4-6e1f0d669307\\\",\\\"systemUUID\\\":\\\"53ecbfdd-0b43-4d74-98ca-c7bcbc951d86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:19Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.519568 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.519626 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.519635 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.519647 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.519658 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:19Z","lastTransitionTime":"2026-02-02T21:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:19 crc kubenswrapper[4789]: E0202 21:21:19.537738 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"007d0037-9447-42ea-b3a4-6e1f0d669307\\\",\\\"systemUUID\\\":\\\"53ecbfdd-0b43-4d74-98ca-c7bcbc951d86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:19Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.541720 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.541764 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.541776 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.541818 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.541832 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:19Z","lastTransitionTime":"2026-02-02T21:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:19 crc kubenswrapper[4789]: E0202 21:21:19.554844 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"007d0037-9447-42ea-b3a4-6e1f0d669307\\\",\\\"systemUUID\\\":\\\"53ecbfdd-0b43-4d74-98ca-c7bcbc951d86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:19Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.557675 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.557709 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.557720 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.557734 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.557745 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:19Z","lastTransitionTime":"2026-02-02T21:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:19 crc kubenswrapper[4789]: E0202 21:21:19.569384 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"007d0037-9447-42ea-b3a4-6e1f0d669307\\\",\\\"systemUUID\\\":\\\"53ecbfdd-0b43-4d74-98ca-c7bcbc951d86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:19Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:19 crc kubenswrapper[4789]: E0202 21:21:19.569492 4789 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.570665 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.570691 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.570698 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.570708 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.570716 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:19Z","lastTransitionTime":"2026-02-02T21:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.673372 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.673827 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.673902 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.673936 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.673960 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:19Z","lastTransitionTime":"2026-02-02T21:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.777519 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.777835 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.777870 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.777900 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.777921 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:19Z","lastTransitionTime":"2026-02-02T21:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.819007 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 10:45:10.200580836 +0000 UTC Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.880457 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.880509 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.880527 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.880548 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.880566 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:19Z","lastTransitionTime":"2026-02-02T21:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.983496 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.983615 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.983627 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.983646 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:19 crc kubenswrapper[4789]: I0202 21:21:19.983660 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:19Z","lastTransitionTime":"2026-02-02T21:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.086553 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.086658 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.086680 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.086706 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.086728 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:20Z","lastTransitionTime":"2026-02-02T21:21:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.190344 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.190424 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.190444 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.190470 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.190489 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:20Z","lastTransitionTime":"2026-02-02T21:21:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.293417 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.293479 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.293498 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.293522 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.293541 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:20Z","lastTransitionTime":"2026-02-02T21:21:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.396656 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.396731 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.396755 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.396788 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.396811 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:20Z","lastTransitionTime":"2026-02-02T21:21:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.419206 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:21:20 crc kubenswrapper[4789]: E0202 21:21:20.419476 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.441026 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fb5d89e-c901-4857-a5e8-b4d8e3701714\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d85613f300f8d008d919038b40efff3fb67e228e25959cbfd2424640c6bc7a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460e055777327e1734238bf51929751678ea5a757b00a344daa31c8c95e4c463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c292754c81dd4929f8e599e8ec352fd5e90431c60bb741c070a10ffa91fad72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3685c506f77d3807711a442d68da94932064737cfdc5cb74557da135906e24f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:20Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.465011 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75cf318c3d63c5316cbeba8abb93919973f88b415ed7116b55333813b8a889fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T21:20:51Z\\\",\\\"message\\\":\\\"2026-02-02T21:20:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3308657d-555b-440d-96c9-7656e8ca7e3e\\\\n2026-02-02T21:20:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3308657d-555b-440d-96c9-7656e8ca7e3e to /host/opt/cni/bin/\\\\n2026-02-02T21:20:06Z [verbose] multus-daemon started\\\\n2026-02-02T21:20:06Z [verbose] Readiness 
Indicator file check\\\\n2026-02-02T21:20:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:20Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.484419 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:20Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.499235 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.499289 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.499333 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.499351 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.499363 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:20Z","lastTransitionTime":"2026-02-02T21:21:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.500294 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af0ef1e0-7fcc-4fa6-8463-a0f5a9145c2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3c3d527f77f26e052c4ef9c0577938dc23c802918e742b9fb9020cb6ba705f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eface13a61e8f9d1e8e9512c78a4c70973bfad708c3cdea7f7251f6fa408a59f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed17431bc8880523ea84349c68ea56e389033b550390d88e60373f663d1491f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d78edf2f7e647edfcc306a3feff71c983907077ad05a684af9fa2990deaf3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d78edf2f7e647edfcc306a3feff71c983907077ad05a684af9fa2990deaf3b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:20Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.516931 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:20Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.531177 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:20Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.551115 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:20Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.567709 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1716ba11e1b21eb68642e6935312760c389a669a007822290fbe72573bfaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:20Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.581673 4789 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39528981-2c85-43f3-8fa0-bfae5c3334cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ae303f0f4381207f4dd4a443e366d6e3de2014e9bc69aa644e98a76b239868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277fe88585ee146931597a14fe049a3d69197c94e0d84f5dfb334b08cd685723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d49gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:20Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.597937 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vjbpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dc26662-64d3-47f0-9e0d-d340760ca348\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vjbpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:20Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.602950 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.603387 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.603404 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.603428 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.603444 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:20Z","lastTransitionTime":"2026-02-02T21:21:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.613693 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:20Z is after 2025-08-24T17:21:41Z"
Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.630145 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:20Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.644668 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:20Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.672417 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://877f8c63dfc6ee1e69f7f0bc64e5d2896384f737
678efb01ad9d27b9030a5fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T21:21:06Z\\\",\\\"message\\\":\\\"Removed *v1.EgressIP event handler 8\\\\nI0202 21:21:06.324486 6920 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 21:21:06.324713 6920 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 21:21:06.325983 6920 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 21:21:06.326123 6920 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 21:21:06.327135 6920 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 21:21:06.327379 6920 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0202 21:21:06.328141 6920 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:21:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w8vkt_openshift-ovn-kubernetes(2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:20Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.687365 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b25d791-42e1-4e08-b7da-41803cc40f4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e2a17d3ade1aa229b2729d66fe953acc746673549e0d929076a3bd18839833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-snjrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:20Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.700330 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf73e052-94a2-472e-88e9-63ab3a8d428b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c523dec61c09703463bce6b000fb79c832b3c190a960fe0097b654fd672477c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1470060c44a356a82b453ed22ef5c3841993bce37eba8523a13c49331499224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1470060c44a356a82b453ed22ef5c3841993bce37eba8523a13c49331499224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Runni
ng\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:20Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.705991 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.706038 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.706055 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.706077 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.706093 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:20Z","lastTransitionTime":"2026-02-02T21:21:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.719480 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66a4db3799101ccca8a89d6bfd2c9d36940b8710ee3d256e47cd61cfe6ac7c07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:20Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.741636 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff0380f801c6909a82bfc8a765cf7a393d7c2f7cc809611ded737a22f448a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:20Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.808986 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.809043 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:20 crc 
kubenswrapper[4789]: I0202 21:21:20.809058 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.809079 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.809093 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:20Z","lastTransitionTime":"2026-02-02T21:21:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.819423 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 20:11:20.260641302 +0000 UTC Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.912313 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.912378 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.912398 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.912423 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.912441 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:20Z","lastTransitionTime":"2026-02-02T21:21:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:20 crc kubenswrapper[4789]: I0202 21:21:20.999798 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2dc26662-64d3-47f0-9e0d-d340760ca348-metrics-certs\") pod \"network-metrics-daemon-vjbpg\" (UID: \"2dc26662-64d3-47f0-9e0d-d340760ca348\") " pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:21:20 crc kubenswrapper[4789]: E0202 21:21:20.999946 4789 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 21:21:21 crc kubenswrapper[4789]: E0202 21:21:21.000006 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2dc26662-64d3-47f0-9e0d-d340760ca348-metrics-certs podName:2dc26662-64d3-47f0-9e0d-d340760ca348 nodeName:}" failed. No retries permitted until 2026-02-02 21:22:24.999989681 +0000 UTC m=+165.295014710 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2dc26662-64d3-47f0-9e0d-d340760ca348-metrics-certs") pod "network-metrics-daemon-vjbpg" (UID: "2dc26662-64d3-47f0-9e0d-d340760ca348") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.014466 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.014509 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.014522 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.014537 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.014548 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:21Z","lastTransitionTime":"2026-02-02T21:21:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.118139 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.118299 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.118314 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.118334 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.118345 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:21Z","lastTransitionTime":"2026-02-02T21:21:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.221204 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.221282 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.221297 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.221317 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.221329 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:21Z","lastTransitionTime":"2026-02-02T21:21:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.324653 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.324710 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.324726 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.324749 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.324770 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:21Z","lastTransitionTime":"2026-02-02T21:21:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.419231 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.419299 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.419231 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:21:21 crc kubenswrapper[4789]: E0202 21:21:21.419466 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:21:21 crc kubenswrapper[4789]: E0202 21:21:21.419657 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:21:21 crc kubenswrapper[4789]: E0202 21:21:21.419796 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.420983 4789 scope.go:117] "RemoveContainer" containerID="877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc" Feb 02 21:21:21 crc kubenswrapper[4789]: E0202 21:21:21.421366 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-w8vkt_openshift-ovn-kubernetes(2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.427498 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.427558 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.427603 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.427633 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.427656 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:21Z","lastTransitionTime":"2026-02-02T21:21:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.530834 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.530929 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.530953 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.530986 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.531012 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:21Z","lastTransitionTime":"2026-02-02T21:21:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.633699 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.633749 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.633805 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.633828 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.633842 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:21Z","lastTransitionTime":"2026-02-02T21:21:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.736744 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.736833 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.736848 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.736867 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.736885 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:21Z","lastTransitionTime":"2026-02-02T21:21:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.819610 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 06:49:20.030148048 +0000 UTC Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.839772 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.839812 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.839824 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.839839 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.839850 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:21Z","lastTransitionTime":"2026-02-02T21:21:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.942535 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.942613 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.942633 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.942656 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:21 crc kubenswrapper[4789]: I0202 21:21:21.942672 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:21Z","lastTransitionTime":"2026-02-02T21:21:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.046351 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.046411 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.046429 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.046451 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.046468 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:22Z","lastTransitionTime":"2026-02-02T21:21:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.149748 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.149777 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.149784 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.149797 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.149805 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:22Z","lastTransitionTime":"2026-02-02T21:21:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.252711 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.252804 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.252823 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.252849 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.252871 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:22Z","lastTransitionTime":"2026-02-02T21:21:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.356318 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.356716 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.356843 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.356959 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.357053 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:22Z","lastTransitionTime":"2026-02-02T21:21:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.421599 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:21:22 crc kubenswrapper[4789]: E0202 21:21:22.421973 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.460902 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.460942 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.460976 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.460992 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.461009 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:22Z","lastTransitionTime":"2026-02-02T21:21:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.564945 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.565007 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.565025 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.565053 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.565076 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:22Z","lastTransitionTime":"2026-02-02T21:21:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.667852 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.668636 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.668832 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.669014 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.669139 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:22Z","lastTransitionTime":"2026-02-02T21:21:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.773682 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.773749 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.773772 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.773801 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.773821 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:22Z","lastTransitionTime":"2026-02-02T21:21:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.819904 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 20:55:39.953311054 +0000 UTC Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.877612 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.877783 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.877802 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.877826 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.877843 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:22Z","lastTransitionTime":"2026-02-02T21:21:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.980662 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.980717 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.980733 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.980757 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:22 crc kubenswrapper[4789]: I0202 21:21:22.980775 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:22Z","lastTransitionTime":"2026-02-02T21:21:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.083990 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.084051 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.084068 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.084091 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.084108 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:23Z","lastTransitionTime":"2026-02-02T21:21:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.187748 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.187814 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.187832 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.187856 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.187876 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:23Z","lastTransitionTime":"2026-02-02T21:21:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.290965 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.291030 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.291051 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.291076 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.291095 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:23Z","lastTransitionTime":"2026-02-02T21:21:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.393325 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.393382 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.393398 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.393419 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.393436 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:23Z","lastTransitionTime":"2026-02-02T21:21:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.418987 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:21:23 crc kubenswrapper[4789]: E0202 21:21:23.419196 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.419236 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.419281 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:21:23 crc kubenswrapper[4789]: E0202 21:21:23.419940 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:21:23 crc kubenswrapper[4789]: E0202 21:21:23.420364 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.439062 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.496298 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.496355 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.496373 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.496395 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.496412 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:23Z","lastTransitionTime":"2026-02-02T21:21:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.599400 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.599463 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.599481 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.599504 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.599521 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:23Z","lastTransitionTime":"2026-02-02T21:21:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.702652 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.702722 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.702743 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.702765 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.702782 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:23Z","lastTransitionTime":"2026-02-02T21:21:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.806315 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.806390 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.806410 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.806435 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.806458 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:23Z","lastTransitionTime":"2026-02-02T21:21:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.821831 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 02:26:03.592163858 +0000 UTC Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.911363 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.911431 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.911453 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.911489 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:23 crc kubenswrapper[4789]: I0202 21:21:23.911510 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:23Z","lastTransitionTime":"2026-02-02T21:21:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.015026 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.015185 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.015212 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.015236 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.015253 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:24Z","lastTransitionTime":"2026-02-02T21:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.120313 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.120388 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.120424 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.120453 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.120472 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:24Z","lastTransitionTime":"2026-02-02T21:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.261076 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.261157 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.261183 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.261214 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.261237 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:24Z","lastTransitionTime":"2026-02-02T21:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.364698 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.364763 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.364785 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.364810 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.364827 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:24Z","lastTransitionTime":"2026-02-02T21:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.419203 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:21:24 crc kubenswrapper[4789]: E0202 21:21:24.419541 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.467788 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.467838 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.467856 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.467877 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.467894 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:24Z","lastTransitionTime":"2026-02-02T21:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.570668 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.570731 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.570748 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.570772 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.570789 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:24Z","lastTransitionTime":"2026-02-02T21:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.673760 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.673827 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.673847 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.673875 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.673892 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:24Z","lastTransitionTime":"2026-02-02T21:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.776651 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.776718 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.776737 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.776764 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.776821 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:24Z","lastTransitionTime":"2026-02-02T21:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.822717 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 20:12:30.078516276 +0000 UTC
Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.879950 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.879999 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.880015 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.880037 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 21:21:24 crc kubenswrapper[4789]: I0202 21:21:24.880054 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:24Z","lastTransitionTime":"2026-02-02T21:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... the preceding five-entry sequence (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, "Node became not ready") repeats at ~100 ms intervals from 21:21:24.982910 through 21:21:29.776106, identical except for the advancing timestamps (including lastHeartbeatTime/lastTransitionTime in the condition JSON); the distinct entries interleaved in that window follow in order ...]
Feb 02 21:21:25 crc kubenswrapper[4789]: I0202 21:21:25.418946 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 21:21:25 crc kubenswrapper[4789]: I0202 21:21:25.418994 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 21:21:25 crc kubenswrapper[4789]: I0202 21:21:25.418915 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 21:21:25 crc kubenswrapper[4789]: E0202 21:21:25.419132 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 21:21:25 crc kubenswrapper[4789]: E0202 21:21:25.419351 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 21:21:25 crc kubenswrapper[4789]: E0202 21:21:25.419503 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 21:21:25 crc kubenswrapper[4789]: I0202 21:21:25.823466 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 07:26:28.978317086 +0000 UTC
Feb 02 21:21:26 crc kubenswrapper[4789]: I0202 21:21:26.420969 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg"
Feb 02 21:21:26 crc kubenswrapper[4789]: E0202 21:21:26.421218 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348"
Feb 02 21:21:26 crc kubenswrapper[4789]: I0202 21:21:26.823973 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 10:55:17.361779613 +0000 UTC
Feb 02 21:21:27 crc kubenswrapper[4789]: I0202 21:21:27.419027 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 21:21:27 crc kubenswrapper[4789]: I0202 21:21:27.419062 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 21:21:27 crc kubenswrapper[4789]: I0202 21:21:27.419027 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 21:21:27 crc kubenswrapper[4789]: E0202 21:21:27.419185 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 21:21:27 crc kubenswrapper[4789]: E0202 21:21:27.419294 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 21:21:27 crc kubenswrapper[4789]: E0202 21:21:27.419450 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 21:21:27 crc kubenswrapper[4789]: I0202 21:21:27.825062 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 09:29:08.618164227 +0000 UTC
Feb 02 21:21:28 crc kubenswrapper[4789]: I0202 21:21:28.419467 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg"
Feb 02 21:21:28 crc kubenswrapper[4789]: E0202 21:21:28.419686 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348"
Feb 02 21:21:28 crc kubenswrapper[4789]: I0202 21:21:28.825850 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 21:06:09.024292137 +0000 UTC
Feb 02 21:21:29 crc kubenswrapper[4789]: I0202 21:21:29.418865 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 21:21:29 crc kubenswrapper[4789]: I0202 21:21:29.418943 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 21:21:29 crc kubenswrapper[4789]: E0202 21:21:29.419020 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 21:21:29 crc kubenswrapper[4789]: E0202 21:21:29.419163 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 21:21:29 crc kubenswrapper[4789]: I0202 21:21:29.419241 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 21:21:29 crc kubenswrapper[4789]: E0202 21:21:29.419347 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 21:21:29 crc kubenswrapper[4789]: E0202 21:21:29.792737 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"007d0037-9447-42ea-b3a4-6e1f0d669307\\\",\\\"systemUUID\\\":\\\"53ecbfdd-0b43-4d74-98ca-c7bcbc951d86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:29Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:29 crc kubenswrapper[4789]: I0202 21:21:29.798117 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:29 crc kubenswrapper[4789]: I0202 21:21:29.798167 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 21:21:29 crc kubenswrapper[4789]: I0202 21:21:29.798179 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:29 crc kubenswrapper[4789]: I0202 21:21:29.798199 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:29 crc kubenswrapper[4789]: I0202 21:21:29.798212 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:29Z","lastTransitionTime":"2026-02-02T21:21:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:29 crc kubenswrapper[4789]: E0202 21:21:29.820553 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"007d0037-9447-42ea-b3a4-6e1f0d669307\\\",\\\"systemUUID\\\":\\\"53ecbfdd-0b43-4d74-98ca-c7bcbc951d86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:29Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:29 crc kubenswrapper[4789]: I0202 21:21:29.825121 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:29 crc kubenswrapper[4789]: I0202 21:21:29.825335 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 21:21:29 crc kubenswrapper[4789]: I0202 21:21:29.825509 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:29 crc kubenswrapper[4789]: I0202 21:21:29.825810 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:29 crc kubenswrapper[4789]: I0202 21:21:29.825951 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 04:01:43.131027953 +0000 UTC Feb 02 21:21:29 crc kubenswrapper[4789]: I0202 21:21:29.826661 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:29Z","lastTransitionTime":"2026-02-02T21:21:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:29 crc kubenswrapper[4789]: E0202 21:21:29.842507 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"007d0037-9447-42ea-b3a4-6e1f0d669307\\\",\\\"systemUUID\\\":\\\"53ecbfdd-0b43-4d74-98ca-c7bcbc951d86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:29Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:29 crc kubenswrapper[4789]: I0202 21:21:29.847267 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:29 crc kubenswrapper[4789]: I0202 21:21:29.847318 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 21:21:29 crc kubenswrapper[4789]: I0202 21:21:29.847334 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:29 crc kubenswrapper[4789]: I0202 21:21:29.847359 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:29 crc kubenswrapper[4789]: I0202 21:21:29.847376 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:29Z","lastTransitionTime":"2026-02-02T21:21:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:29 crc kubenswrapper[4789]: E0202 21:21:29.867074 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"007d0037-9447-42ea-b3a4-6e1f0d669307\\\",\\\"systemUUID\\\":\\\"53ecbfdd-0b43-4d74-98ca-c7bcbc951d86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:29Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:29 crc kubenswrapper[4789]: I0202 21:21:29.871799 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:29 crc kubenswrapper[4789]: I0202 21:21:29.872029 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 21:21:29 crc kubenswrapper[4789]: I0202 21:21:29.872187 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:29 crc kubenswrapper[4789]: I0202 21:21:29.872347 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:29 crc kubenswrapper[4789]: I0202 21:21:29.872534 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:29Z","lastTransitionTime":"2026-02-02T21:21:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:29 crc kubenswrapper[4789]: E0202 21:21:29.892767 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T21:21:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T21:21:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"007d0037-9447-42ea-b3a4-6e1f0d669307\\\",\\\"systemUUID\\\":\\\"53ecbfdd-0b43-4d74-98ca-c7bcbc951d86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:29Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:29 crc kubenswrapper[4789]: E0202 21:21:29.893428 4789 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 21:21:29 crc kubenswrapper[4789]: I0202 21:21:29.895461 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 21:21:29 crc kubenswrapper[4789]: I0202 21:21:29.895516 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:29 crc kubenswrapper[4789]: I0202 21:21:29.895529 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:29 crc kubenswrapper[4789]: I0202 21:21:29.895546 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:29 crc kubenswrapper[4789]: I0202 21:21:29.895559 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:29Z","lastTransitionTime":"2026-02-02T21:21:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:29 crc kubenswrapper[4789]: I0202 21:21:29.998385 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:29 crc kubenswrapper[4789]: I0202 21:21:29.998437 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:29 crc kubenswrapper[4789]: I0202 21:21:29.998454 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:29 crc kubenswrapper[4789]: I0202 21:21:29.998476 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:29 crc kubenswrapper[4789]: I0202 21:21:29.998494 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:29Z","lastTransitionTime":"2026-02-02T21:21:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.101131 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.101182 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.101199 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.101220 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.101235 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:30Z","lastTransitionTime":"2026-02-02T21:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.204717 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.205137 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.205345 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.205562 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.205867 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:30Z","lastTransitionTime":"2026-02-02T21:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.308536 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.308868 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.309129 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.309357 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.309609 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:30Z","lastTransitionTime":"2026-02-02T21:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.411748 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.411814 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.411838 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.411867 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.411887 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:30Z","lastTransitionTime":"2026-02-02T21:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.419227 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:21:30 crc kubenswrapper[4789]: E0202 21:21:30.420623 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.436156 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf73e052-94a2-472e-88e9-63ab3a8d428b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c523dec61c09703463bce6b000fb79c832b3c190a960fe0097b654fd672477c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1470060c44a356a82b453ed22ef5c3841993bce37eba8523a13c49331499224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1470060c44a356a82b453ed22ef5c3841993bce37eba8523a13c49331499224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kube
let\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:30Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.457230 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9563eded-ca82-4eb6-90d4-e62b8acbe296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8
e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66a4db3799101ccca8a89d6bfd2c9d36940b8710ee3d256e47cd61cfe6ac7c07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\" 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 21:20:01.773773 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 21:20:01.773776 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 21:20:01.773779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 21:20:01.773999 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0202 21:20:01.777333 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770067196\\\\\\\\\\\\\\\" (2026-02-02 21:19:55 +0000 UTC to 2026-03-04 21:19:56 +0000 UTC (now=2026-02-02 21:20:01.777292377 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777438 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770067201\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770067201\\\\\\\\\\\\\\\" (2026-02-02 20:20:01 +0000 UTC to 2027-02-02 20:20:01 +0000 UTC (now=2026-02-02 21:20:01.777417111 +0000 UTC))\\\\\\\"\\\\nI0202 21:20:01.777455 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 21:20:01.777484 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 21:20:01.777505 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777526 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0202 21:20:01.777548 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3307553149/tls.crt::/tmp/serving-cert-3307553149/tls.key\\\\\\\"\\\\nF0202 21:20:01.777545 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:30Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.478908 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcbed546-a1c3-4ba4-96b4-61471010b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff0380f801c6909a82bfc8a765cf7a393d7c2f7cc809611ded737a22f448a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e48b8508849b24e18223b62bd73348759ba4c04a525afb7d4055175bceb0183c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b13f24aa00608821909c688ace0441fd4e432d3c5fc9a1cc06b4239bf86955\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75d7fd5bba471f9371960e5c6cfd5d7cb49402d47459acad01a9c438ea33de0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb07d53076d60f3363a1232e742bb4bd801d49104a2be1095af34a61a5b61cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5339fbb27f553f96497abbf4b0584cbffe399e44fafd90cd0e568e2f4efad6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef304631b36ef7a2edc5ce89f5b4d1efecd4947d1d71f3bb4068395d76aea346\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5726m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dsv6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:30Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.511081 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T21:21:06Z\\\",\\\"message\\\":\\\"Removed *v1.EgressIP event handler 8\\\\nI0202 21:21:06.324486 6920 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 21:21:06.324713 6920 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 21:21:06.325983 6920 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 21:21:06.326123 6920 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 21:21:06.327135 6920 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 21:21:06.327379 6920 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0202 21:21:06.328141 6920 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:21:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w8vkt_openshift-ovn-kubernetes(2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnmmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w8vkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:30Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.515787 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.515842 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.515862 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.515893 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.515916 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:30Z","lastTransitionTime":"2026-02-02T21:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.527939 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b25d791-42e1-4e08-b7da-41803cc40f4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e2a17d3ade1aa229b2729d66fe953acc746673549e0d929076a3bd18839833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-snjrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:30Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.566548 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"586c8e58-c2b7-4206-9408-54313854f46b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0b49204c1d59d2d4b80e160161b16cba7e45b50c5e3b2c2f0c8249140c7fd08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e943cb2ca6cd627faca126484144191f1e66200e7532b03ae34d8ba2ce55b55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74fffc681c9c96016d5d3898000dc8910d872e08a31d2fb520a6ad0a9ae3307a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6775862ab8c12cb8e8a61fab828f307d6dd9428
9ad0f86d897edee089db17f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f62245b06747aa7501fa552aff9d13f26b867b8ac6e26543b42e258351ba1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071e779f17817dde231ae50cb1fbe6f00143c4352a32fff682846c5e79283057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071e779f17817dde231ae50cb1fbe6f00143c4352a32fff682846c5e79283057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6e1b9f1634a6356429edaacc8ca24aa1c6f76d08b4996af50735c22fccf6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af6e1b9f1634a6356429edaacc8ca24aa1c6f76d08b4996af50735c22fccf6b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://07979540ce64d2a7ae83b290fd86f7146e81a911a880c37ab73c3db14f6d00df\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07979540ce64d2a7ae83b290fd86f7146e81a911a880c37ab73c3db14f6d00df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:30Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.587801 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fb5d89e-c901-4857-a5e8-b4d8e3701714\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d85613f300f8d008d919038b40efff3fb67e228e25959cbfd2424640c6bc7a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460e055777327e1734238bf51929751678ea5a757b00a344daa31c8c95e4c463\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c292754c81dd4929f8e599e8ec352fd5e90431c60bb741c070a10ffa91fad72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3685c506f77d3807711a442d68da94932064737cfdc5cb74557da135906e24f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:30Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.609410 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2x5ws" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a32268-2a2d-47f3-9fc6-4281b8dc6a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75cf318c3d63c5316cbeba8abb93919973f88b415ed7116b55333813b8a889fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T21:20:51Z\\\",\\\"message\\\":\\\"2026-02-02T21:20:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3308657d-555b-440d-96c9-7656e8ca7e3e\\\\n2026-02-02T21:20:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3308657d-555b-440d-96c9-7656e8ca7e3e to /host/opt/cni/bin/\\\\n2026-02-02T21:20:06Z [verbose] multus-daemon started\\\\n2026-02-02T21:20:06Z [verbose] Readiness Indicator file check\\\\n2026-02-02T21:20:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf446\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2x5ws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:30Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.618725 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.618788 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.618808 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.618838 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.618860 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:30Z","lastTransitionTime":"2026-02-02T21:21:30Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.628791 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af0ef1e0-7fcc-4fa6-8463-a0f5a9145c2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:19:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3c3d527f77f26e052c4ef9c0577938dc23c802918e742b9fb9020cb6ba705f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eface13a61e8f9d1e8e9512c78a4c70973bfad708c3cdea7f7251f6fa408a59f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed17431bc8880523ea84349c68ea56e389033b550390d88e60373f663d1491f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d78edf2f7e647edfcc306a3feff71c983907077ad05a684af9fa2990deaf3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d78edf2f7e647edfcc306a3feff71c983907077ad05a684af9fa2990deaf3b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T21:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T21:19:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:19:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:30Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.643202 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa57a708f883719dd81200ccf975947c0af244b92ead81daee23171c9be412f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:30Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.658346 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l576" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd970d28-4009-48b2-a0f4-2b8b1d54a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b641bb7caefade4d07fe41965573b059bd8b23a83bb9f60a9637d0152dffb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwbqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l576\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:30Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.672841 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:30Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.688451 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vjbpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dc26662-64d3-47f0-9e0d-d340760ca348\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vjbpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:30Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.707236 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40b3784b5ba59f0147d2889d897d17140759155e24587e9a323338e0a3125ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:30Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.722394 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.722460 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.722478 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.722502 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.722519 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:30Z","lastTransitionTime":"2026-02-02T21:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.722784 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:30Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.738440 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:30Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.758478 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19928db343f460eed7b046f14b45c6756081f57ea4b3ad77acc86f29eb856a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106225a63f3d4ecb7ba8c5266f738990ec7cc5603f9fabde6d234ae265dcc310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:30Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.772749 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdf018b4-1451-4d37-be6e-05802b67c73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1716ba11e1b21eb68642e6935312760c389a669a007822290fbe72573bfaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea17
7225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwk2n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c8vcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:30Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.789205 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39528981-2c85-43f3-8fa0-bfae5c3334cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T21:20:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ae303f0f4381207f4dd4a443e366d6e3de2014e9bc69aa644e98a76b239868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277fe88585ee146931597a14fe049a3d69197c94e0d84f5dfb334b08cd685723\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T21:20:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnfbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T21:20:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d49gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T21:21:30Z is after 2025-08-24T17:21:41Z" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.825368 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.825628 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.825782 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.825934 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.826064 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:30Z","lastTransitionTime":"2026-02-02T21:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.826057 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 05:06:16.390457358 +0000 UTC Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.929143 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.929174 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.929183 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.929200 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:30 crc kubenswrapper[4789]: I0202 21:21:30.929211 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:30Z","lastTransitionTime":"2026-02-02T21:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.031064 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.031095 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.031103 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.031116 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.031125 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:31Z","lastTransitionTime":"2026-02-02T21:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.133893 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.133956 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.133973 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.133997 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.134015 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:31Z","lastTransitionTime":"2026-02-02T21:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.237961 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.238073 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.238100 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.238138 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.238163 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:31Z","lastTransitionTime":"2026-02-02T21:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.341472 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.341525 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.341535 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.341552 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.341564 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:31Z","lastTransitionTime":"2026-02-02T21:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.419617 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.419673 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.419771 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:21:31 crc kubenswrapper[4789]: E0202 21:21:31.419811 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:21:31 crc kubenswrapper[4789]: E0202 21:21:31.420030 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:21:31 crc kubenswrapper[4789]: E0202 21:21:31.420306 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.445482 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.445562 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.445605 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.445636 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.445657 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:31Z","lastTransitionTime":"2026-02-02T21:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.547867 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.548230 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.548387 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.548529 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.548776 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:31Z","lastTransitionTime":"2026-02-02T21:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.651915 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.652342 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.652560 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.652855 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.653025 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:31Z","lastTransitionTime":"2026-02-02T21:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.757213 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.757294 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.757315 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.757346 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.757365 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:31Z","lastTransitionTime":"2026-02-02T21:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.827405 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 14:49:43.781340487 +0000 UTC Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.860416 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.860477 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.860495 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.860518 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.860535 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:31Z","lastTransitionTime":"2026-02-02T21:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.963348 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.963406 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.963426 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.963449 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:31 crc kubenswrapper[4789]: I0202 21:21:31.963466 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:31Z","lastTransitionTime":"2026-02-02T21:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.065788 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.065826 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.065835 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.065854 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.065870 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:32Z","lastTransitionTime":"2026-02-02T21:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.168315 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.168354 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.168364 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.168379 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.168389 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:32Z","lastTransitionTime":"2026-02-02T21:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.271398 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.271450 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.271466 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.271487 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.271503 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:32Z","lastTransitionTime":"2026-02-02T21:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.373741 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.373772 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.373782 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.373809 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.373821 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:32Z","lastTransitionTime":"2026-02-02T21:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.418939 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:21:32 crc kubenswrapper[4789]: E0202 21:21:32.419137 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.476044 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.476102 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.476115 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.476135 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.476149 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:32Z","lastTransitionTime":"2026-02-02T21:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.579528 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.580029 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.580270 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.580491 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.580775 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:32Z","lastTransitionTime":"2026-02-02T21:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.684477 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.684872 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.684936 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.684975 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.684994 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:32Z","lastTransitionTime":"2026-02-02T21:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.787360 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.787448 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.787466 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.787490 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.787510 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:32Z","lastTransitionTime":"2026-02-02T21:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.828453 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 14:12:56.173738011 +0000 UTC Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.890455 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.890518 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.890535 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.890561 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.890609 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:32Z","lastTransitionTime":"2026-02-02T21:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.993945 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.994035 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.994056 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.994081 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:32 crc kubenswrapper[4789]: I0202 21:21:32.994099 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:32Z","lastTransitionTime":"2026-02-02T21:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.097332 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.097402 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.097421 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.097449 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.097467 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:33Z","lastTransitionTime":"2026-02-02T21:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.201431 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.201500 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.201521 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.201546 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.201563 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:33Z","lastTransitionTime":"2026-02-02T21:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.304888 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.304954 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.304973 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.304997 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.305015 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:33Z","lastTransitionTime":"2026-02-02T21:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.408398 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.408473 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.408495 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.408550 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.408572 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:33Z","lastTransitionTime":"2026-02-02T21:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.419475 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.419520 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.419483 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:21:33 crc kubenswrapper[4789]: E0202 21:21:33.419982 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:21:33 crc kubenswrapper[4789]: E0202 21:21:33.420166 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:21:33 crc kubenswrapper[4789]: E0202 21:21:33.420554 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.511713 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.511785 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.511803 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.511827 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.511845 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:33Z","lastTransitionTime":"2026-02-02T21:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.614594 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.614647 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.614660 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.614677 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.614688 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:33Z","lastTransitionTime":"2026-02-02T21:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.718417 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.718474 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.718492 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.718515 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.718533 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:33Z","lastTransitionTime":"2026-02-02T21:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.821037 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.821079 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.821089 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.821104 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.821114 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:33Z","lastTransitionTime":"2026-02-02T21:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.828677 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 20:38:39.047317101 +0000 UTC Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.924445 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.924512 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.924529 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.924554 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:33 crc kubenswrapper[4789]: I0202 21:21:33.924570 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:33Z","lastTransitionTime":"2026-02-02T21:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.027637 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.027691 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.027710 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.027736 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.027754 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:34Z","lastTransitionTime":"2026-02-02T21:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.131097 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.131154 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.131174 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.131203 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.131224 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:34Z","lastTransitionTime":"2026-02-02T21:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.233897 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.234270 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.234283 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.234302 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.234318 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:34Z","lastTransitionTime":"2026-02-02T21:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.337797 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.337859 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.337877 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.337905 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.337925 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:34Z","lastTransitionTime":"2026-02-02T21:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.418685 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:21:34 crc kubenswrapper[4789]: E0202 21:21:34.418923 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.440994 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.441058 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.441078 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.441106 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.441127 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:34Z","lastTransitionTime":"2026-02-02T21:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.544285 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.544364 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.544382 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.544408 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.544452 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:34Z","lastTransitionTime":"2026-02-02T21:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.647636 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.647714 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.647740 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.647770 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.647793 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:34Z","lastTransitionTime":"2026-02-02T21:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.749924 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.749976 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.749993 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.750014 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.750028 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:34Z","lastTransitionTime":"2026-02-02T21:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.829269 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 17:29:40.944249261 +0000 UTC Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.852263 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.852303 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.852316 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.852331 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.852341 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:34Z","lastTransitionTime":"2026-02-02T21:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.954745 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.954801 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.954818 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.954840 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:34 crc kubenswrapper[4789]: I0202 21:21:34.954867 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:34Z","lastTransitionTime":"2026-02-02T21:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.057181 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.057254 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.057275 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.057300 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.057319 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:35Z","lastTransitionTime":"2026-02-02T21:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.160500 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.160553 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.160569 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.160619 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.160636 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:35Z","lastTransitionTime":"2026-02-02T21:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.264152 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.264197 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.264208 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.264224 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.264239 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:35Z","lastTransitionTime":"2026-02-02T21:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.366881 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.366940 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.366957 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.366982 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.366999 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:35Z","lastTransitionTime":"2026-02-02T21:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.419828 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.419893 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:21:35 crc kubenswrapper[4789]: E0202 21:21:35.420152 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.420509 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:21:35 crc kubenswrapper[4789]: E0202 21:21:35.420671 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:21:35 crc kubenswrapper[4789]: E0202 21:21:35.421658 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.422258 4789 scope.go:117] "RemoveContainer" containerID="877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc" Feb 02 21:21:35 crc kubenswrapper[4789]: E0202 21:21:35.422546 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-w8vkt_openshift-ovn-kubernetes(2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.470276 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.470346 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.470373 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.470405 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.470428 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:35Z","lastTransitionTime":"2026-02-02T21:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.573410 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.573759 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.573901 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.574041 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.574173 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:35Z","lastTransitionTime":"2026-02-02T21:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.678176 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.678230 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.678244 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.678265 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.678279 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:35Z","lastTransitionTime":"2026-02-02T21:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.780927 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.780994 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.781012 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.781040 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.781059 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:35Z","lastTransitionTime":"2026-02-02T21:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.829367 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 07:49:24.575132796 +0000 UTC Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.883686 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.883732 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.883744 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.883761 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.883775 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:35Z","lastTransitionTime":"2026-02-02T21:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.986732 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.986778 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.986792 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.986811 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:35 crc kubenswrapper[4789]: I0202 21:21:35.986825 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:35Z","lastTransitionTime":"2026-02-02T21:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.089801 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.089910 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.089937 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.089964 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.089985 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:36Z","lastTransitionTime":"2026-02-02T21:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.192989 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.193037 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.193049 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.193065 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.193077 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:36Z","lastTransitionTime":"2026-02-02T21:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.296121 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.296185 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.296199 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.296215 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.296247 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:36Z","lastTransitionTime":"2026-02-02T21:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.398711 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.398795 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.398820 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.398854 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.398876 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:36Z","lastTransitionTime":"2026-02-02T21:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.418732 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:21:36 crc kubenswrapper[4789]: E0202 21:21:36.418911 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.501252 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.501315 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.501337 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.501366 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.501387 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:36Z","lastTransitionTime":"2026-02-02T21:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.604145 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.604204 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.604213 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.604229 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.604239 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:36Z","lastTransitionTime":"2026-02-02T21:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.708110 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.708173 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.708197 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.708228 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.708253 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:36Z","lastTransitionTime":"2026-02-02T21:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.811710 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.811759 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.811784 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.811804 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.811820 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:36Z","lastTransitionTime":"2026-02-02T21:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.830186 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 03:48:33.12328149 +0000 UTC Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.914361 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.914410 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.914427 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.914450 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:36 crc kubenswrapper[4789]: I0202 21:21:36.914468 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:36Z","lastTransitionTime":"2026-02-02T21:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.017307 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.017363 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.017373 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.017391 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.017402 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:37Z","lastTransitionTime":"2026-02-02T21:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.120997 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.121263 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.121355 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.121449 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.121544 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:37Z","lastTransitionTime":"2026-02-02T21:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.223616 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.223669 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.223684 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.223705 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.223721 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:37Z","lastTransitionTime":"2026-02-02T21:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.326199 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.326227 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.326235 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.326262 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.326272 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:37Z","lastTransitionTime":"2026-02-02T21:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.419388 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:21:37 crc kubenswrapper[4789]: E0202 21:21:37.419549 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.419388 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:21:37 crc kubenswrapper[4789]: E0202 21:21:37.419671 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.419388 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:21:37 crc kubenswrapper[4789]: E0202 21:21:37.419755 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.429051 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.429109 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.429129 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.429152 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.429169 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:37Z","lastTransitionTime":"2026-02-02T21:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.531204 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.531236 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.531244 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.531257 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.531265 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:37Z","lastTransitionTime":"2026-02-02T21:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.633480 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.633539 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.633561 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.633616 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.633638 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:37Z","lastTransitionTime":"2026-02-02T21:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.735914 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.736198 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.736440 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.736686 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.736889 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:37Z","lastTransitionTime":"2026-02-02T21:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.831189 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 05:10:50.848212359 +0000 UTC Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.839266 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.839332 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.839350 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.839377 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.839398 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:37Z","lastTransitionTime":"2026-02-02T21:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.942148 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.942201 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.942218 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.942238 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:37 crc kubenswrapper[4789]: I0202 21:21:37.942252 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:37Z","lastTransitionTime":"2026-02-02T21:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.044615 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.044651 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.044664 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.044685 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.044700 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:38Z","lastTransitionTime":"2026-02-02T21:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.147270 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.147311 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.147321 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.147341 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.147355 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:38Z","lastTransitionTime":"2026-02-02T21:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.250306 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.250351 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.250364 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.250382 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.250396 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:38Z","lastTransitionTime":"2026-02-02T21:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.354288 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.354332 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.354345 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.354362 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.354374 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:38Z","lastTransitionTime":"2026-02-02T21:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.419309 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:21:38 crc kubenswrapper[4789]: E0202 21:21:38.419500 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.430744 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2x5ws_70a32268-2a2d-47f3-9fc6-4281b8dc6a02/kube-multus/1.log" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.431458 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2x5ws_70a32268-2a2d-47f3-9fc6-4281b8dc6a02/kube-multus/0.log" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.431526 4789 generic.go:334] "Generic (PLEG): container finished" podID="70a32268-2a2d-47f3-9fc6-4281b8dc6a02" containerID="75cf318c3d63c5316cbeba8abb93919973f88b415ed7116b55333813b8a889fa" exitCode=1 Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.431567 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2x5ws" event={"ID":"70a32268-2a2d-47f3-9fc6-4281b8dc6a02","Type":"ContainerDied","Data":"75cf318c3d63c5316cbeba8abb93919973f88b415ed7116b55333813b8a889fa"} Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.431678 4789 scope.go:117] "RemoveContainer" containerID="9eaf3ca59ce89187ee20f3915b7fd4a8867e156c0be18b435511d2af95ffd949" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.431985 4789 scope.go:117] "RemoveContainer" containerID="75cf318c3d63c5316cbeba8abb93919973f88b415ed7116b55333813b8a889fa" Feb 02 21:21:38 crc kubenswrapper[4789]: E0202 21:21:38.432141 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-2x5ws_openshift-multus(70a32268-2a2d-47f3-9fc6-4281b8dc6a02)\"" 
pod="openshift-multus/multus-2x5ws" podUID="70a32268-2a2d-47f3-9fc6-4281b8dc6a02" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.458527 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.458613 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.458635 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.458662 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.458681 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:38Z","lastTransitionTime":"2026-02-02T21:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.562694 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.562758 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.562774 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.562795 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.562811 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:38Z","lastTransitionTime":"2026-02-02T21:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.576283 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podStartSLOduration=96.576266369 podStartE2EDuration="1m36.576266369s" podCreationTimestamp="2026-02-02 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:21:38.564510084 +0000 UTC m=+118.859535103" watchObservedRunningTime="2026-02-02 21:21:38.576266369 +0000 UTC m=+118.871291388" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.594187 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d49gm" podStartSLOduration=95.594170179 podStartE2EDuration="1m35.594170179s" podCreationTimestamp="2026-02-02 21:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:21:38.57875392 +0000 UTC m=+118.873778939" watchObservedRunningTime="2026-02-02 21:21:38.594170179 +0000 UTC m=+118.889195198" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.638717 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-dsv6b" podStartSLOduration=96.638700408 podStartE2EDuration="1m36.638700408s" podCreationTimestamp="2026-02-02 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:21:38.638447291 +0000 UTC m=+118.933472320" watchObservedRunningTime="2026-02-02 21:21:38.638700408 +0000 UTC m=+118.933725427" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.639012 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=96.639007717 podStartE2EDuration="1m36.639007717s" podCreationTimestamp="2026-02-02 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:21:38.619141891 +0000 UTC m=+118.914166910" watchObservedRunningTime="2026-02-02 21:21:38.639007717 +0000 UTC m=+118.934032736" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.666815 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.666860 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.666873 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.666893 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.666904 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:38Z","lastTransitionTime":"2026-02-02T21:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.687529 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-wlsw6" podStartSLOduration=96.687501449 podStartE2EDuration="1m36.687501449s" podCreationTimestamp="2026-02-02 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:21:38.678237785 +0000 UTC m=+118.973262804" watchObservedRunningTime="2026-02-02 21:21:38.687501449 +0000 UTC m=+118.982526478" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.688395 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=51.688384154 podStartE2EDuration="51.688384154s" podCreationTimestamp="2026-02-02 21:20:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:21:38.688006713 +0000 UTC m=+118.983031752" watchObservedRunningTime="2026-02-02 21:21:38.688384154 +0000 UTC m=+118.983409193" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.725644 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=91.725618015 podStartE2EDuration="1m31.725618015s" podCreationTimestamp="2026-02-02 21:20:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:21:38.710964247 +0000 UTC m=+119.005989286" watchObservedRunningTime="2026-02-02 21:21:38.725618015 +0000 UTC m=+119.020643064" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.768802 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=15.768778335 podStartE2EDuration="15.768778335s" podCreationTimestamp="2026-02-02 21:21:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:21:38.753498159 +0000 UTC m=+119.048523218" watchObservedRunningTime="2026-02-02 21:21:38.768778335 +0000 UTC m=+119.063803394" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.769800 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.769829 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.769838 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.769850 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.769860 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:38Z","lastTransitionTime":"2026-02-02T21:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.802447 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-6l576" podStartSLOduration=98.802423593 podStartE2EDuration="1m38.802423593s" podCreationTimestamp="2026-02-02 21:20:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:21:38.785300975 +0000 UTC m=+119.080326024" watchObservedRunningTime="2026-02-02 21:21:38.802423593 +0000 UTC m=+119.097448642" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.817447 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=69.817428511 podStartE2EDuration="1m9.817428511s" podCreationTimestamp="2026-02-02 21:20:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:21:38.817301577 +0000 UTC m=+119.112326636" watchObservedRunningTime="2026-02-02 21:21:38.817428511 +0000 UTC m=+119.112453530" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.831491 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 20:17:38.545948323 +0000 UTC Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.872383 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.872439 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.872458 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.872481 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.872503 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:38Z","lastTransitionTime":"2026-02-02T21:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.975256 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.975352 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.975371 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.975396 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:38 crc kubenswrapper[4789]: I0202 21:21:38.975414 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:38Z","lastTransitionTime":"2026-02-02T21:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.078477 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.078534 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.078555 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.078616 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.078636 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:39Z","lastTransitionTime":"2026-02-02T21:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.181420 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.181479 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.181498 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.181521 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.181539 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:39Z","lastTransitionTime":"2026-02-02T21:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.284036 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.284072 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.284086 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.284101 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.284113 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:39Z","lastTransitionTime":"2026-02-02T21:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.386896 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.386952 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.386970 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.386993 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.387010 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:39Z","lastTransitionTime":"2026-02-02T21:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.419232 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.419284 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.419341 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:21:39 crc kubenswrapper[4789]: E0202 21:21:39.420012 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:21:39 crc kubenswrapper[4789]: E0202 21:21:39.420085 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:21:39 crc kubenswrapper[4789]: E0202 21:21:39.420135 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.436161 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2x5ws_70a32268-2a2d-47f3-9fc6-4281b8dc6a02/kube-multus/1.log" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.496036 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.496379 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.496489 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.496616 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.496719 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:39Z","lastTransitionTime":"2026-02-02T21:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.599994 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.600279 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.600347 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.600434 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.600498 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:39Z","lastTransitionTime":"2026-02-02T21:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.703206 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.703662 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.703910 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.704322 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.704757 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:39Z","lastTransitionTime":"2026-02-02T21:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.807439 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.807860 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.808041 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.808246 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.808436 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:39Z","lastTransitionTime":"2026-02-02T21:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.832240 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 23:22:53.878042458 +0000 UTC Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.899354 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.899395 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.899411 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.899434 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.899451 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:39Z","lastTransitionTime":"2026-02-02T21:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.924222 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.924298 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.924325 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.924358 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.924381 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T21:21:39Z","lastTransitionTime":"2026-02-02T21:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.965608 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-cl42l"] Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.966156 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cl42l" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.969974 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.969993 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.970398 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 02 21:21:39 crc kubenswrapper[4789]: I0202 21:21:39.971264 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 02 21:21:40 crc kubenswrapper[4789]: I0202 21:21:40.146474 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1448173-93ff-4e02-bb4d-9bdd675f1ae7-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-cl42l\" (UID: \"c1448173-93ff-4e02-bb4d-9bdd675f1ae7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cl42l" Feb 02 21:21:40 crc kubenswrapper[4789]: I0202 21:21:40.146540 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c1448173-93ff-4e02-bb4d-9bdd675f1ae7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-cl42l\" (UID: \"c1448173-93ff-4e02-bb4d-9bdd675f1ae7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cl42l" Feb 02 21:21:40 crc kubenswrapper[4789]: I0202 21:21:40.146685 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c1448173-93ff-4e02-bb4d-9bdd675f1ae7-service-ca\") pod \"cluster-version-operator-5c965bbfc6-cl42l\" (UID: \"c1448173-93ff-4e02-bb4d-9bdd675f1ae7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cl42l" Feb 02 21:21:40 crc kubenswrapper[4789]: I0202 21:21:40.146706 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1448173-93ff-4e02-bb4d-9bdd675f1ae7-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-cl42l\" (UID: \"c1448173-93ff-4e02-bb4d-9bdd675f1ae7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cl42l" Feb 02 21:21:40 crc kubenswrapper[4789]: I0202 21:21:40.146754 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c1448173-93ff-4e02-bb4d-9bdd675f1ae7-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-cl42l\" (UID: \"c1448173-93ff-4e02-bb4d-9bdd675f1ae7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cl42l" Feb 02 21:21:40 crc kubenswrapper[4789]: I0202 21:21:40.248015 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1448173-93ff-4e02-bb4d-9bdd675f1ae7-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-cl42l\" (UID: \"c1448173-93ff-4e02-bb4d-9bdd675f1ae7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cl42l" Feb 02 21:21:40 crc 
kubenswrapper[4789]: I0202 21:21:40.248127 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c1448173-93ff-4e02-bb4d-9bdd675f1ae7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-cl42l\" (UID: \"c1448173-93ff-4e02-bb4d-9bdd675f1ae7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cl42l" Feb 02 21:21:40 crc kubenswrapper[4789]: I0202 21:21:40.248234 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c1448173-93ff-4e02-bb4d-9bdd675f1ae7-service-ca\") pod \"cluster-version-operator-5c965bbfc6-cl42l\" (UID: \"c1448173-93ff-4e02-bb4d-9bdd675f1ae7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cl42l" Feb 02 21:21:40 crc kubenswrapper[4789]: I0202 21:21:40.248278 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1448173-93ff-4e02-bb4d-9bdd675f1ae7-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-cl42l\" (UID: \"c1448173-93ff-4e02-bb4d-9bdd675f1ae7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cl42l" Feb 02 21:21:40 crc kubenswrapper[4789]: I0202 21:21:40.248355 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c1448173-93ff-4e02-bb4d-9bdd675f1ae7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-cl42l\" (UID: \"c1448173-93ff-4e02-bb4d-9bdd675f1ae7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cl42l" Feb 02 21:21:40 crc kubenswrapper[4789]: I0202 21:21:40.248444 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c1448173-93ff-4e02-bb4d-9bdd675f1ae7-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-cl42l\" (UID: \"c1448173-93ff-4e02-bb4d-9bdd675f1ae7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cl42l" Feb 02 21:21:40 crc kubenswrapper[4789]: I0202 21:21:40.248379 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c1448173-93ff-4e02-bb4d-9bdd675f1ae7-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-cl42l\" (UID: \"c1448173-93ff-4e02-bb4d-9bdd675f1ae7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cl42l" Feb 02 21:21:40 crc kubenswrapper[4789]: I0202 21:21:40.249802 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c1448173-93ff-4e02-bb4d-9bdd675f1ae7-service-ca\") pod \"cluster-version-operator-5c965bbfc6-cl42l\" (UID: \"c1448173-93ff-4e02-bb4d-9bdd675f1ae7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cl42l" Feb 02 21:21:40 crc kubenswrapper[4789]: I0202 21:21:40.257512 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1448173-93ff-4e02-bb4d-9bdd675f1ae7-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-cl42l\" (UID: \"c1448173-93ff-4e02-bb4d-9bdd675f1ae7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cl42l" Feb 02 21:21:40 crc kubenswrapper[4789]: I0202 21:21:40.278536 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1448173-93ff-4e02-bb4d-9bdd675f1ae7-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-cl42l\" (UID: \"c1448173-93ff-4e02-bb4d-9bdd675f1ae7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cl42l" Feb 02 21:21:40 crc kubenswrapper[4789]: I0202 21:21:40.290763 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cl42l" Feb 02 21:21:40 crc kubenswrapper[4789]: I0202 21:21:40.419448 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:21:40 crc kubenswrapper[4789]: E0202 21:21:40.421234 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:21:40 crc kubenswrapper[4789]: E0202 21:21:40.436678 4789 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 02 21:21:40 crc kubenswrapper[4789]: I0202 21:21:40.441432 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cl42l" event={"ID":"c1448173-93ff-4e02-bb4d-9bdd675f1ae7","Type":"ContainerStarted","Data":"11c2cfbe607941930a65263b289412d0b1dafd643bfa490321aa778c2a213f9a"} Feb 02 21:21:40 crc kubenswrapper[4789]: I0202 21:21:40.441482 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cl42l" event={"ID":"c1448173-93ff-4e02-bb4d-9bdd675f1ae7","Type":"ContainerStarted","Data":"219b59d11d8747d4fe00114192152096042fcbe009af74186827a999aaaaa089"} Feb 02 21:21:40 crc kubenswrapper[4789]: E0202 21:21:40.533218 4789 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 21:21:40 crc kubenswrapper[4789]: I0202 21:21:40.832695 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 21:35:17.883785022 +0000 UTC Feb 02 21:21:40 crc kubenswrapper[4789]: I0202 21:21:40.832745 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 02 21:21:40 crc kubenswrapper[4789]: I0202 21:21:40.842847 4789 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 02 21:21:41 crc kubenswrapper[4789]: I0202 21:21:41.419303 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:21:41 crc kubenswrapper[4789]: E0202 21:21:41.419795 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:21:41 crc kubenswrapper[4789]: I0202 21:21:41.419366 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:21:41 crc kubenswrapper[4789]: I0202 21:21:41.419360 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:21:41 crc kubenswrapper[4789]: E0202 21:21:41.419912 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:21:41 crc kubenswrapper[4789]: E0202 21:21:41.420044 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:21:42 crc kubenswrapper[4789]: I0202 21:21:42.419598 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:21:42 crc kubenswrapper[4789]: E0202 21:21:42.419816 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:21:43 crc kubenswrapper[4789]: I0202 21:21:43.419476 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:21:43 crc kubenswrapper[4789]: I0202 21:21:43.419563 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:21:43 crc kubenswrapper[4789]: I0202 21:21:43.419626 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:21:43 crc kubenswrapper[4789]: E0202 21:21:43.419683 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:21:43 crc kubenswrapper[4789]: E0202 21:21:43.419775 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:21:43 crc kubenswrapper[4789]: E0202 21:21:43.419908 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:21:44 crc kubenswrapper[4789]: I0202 21:21:44.419616 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:21:44 crc kubenswrapper[4789]: E0202 21:21:44.419820 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:21:45 crc kubenswrapper[4789]: I0202 21:21:45.419384 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:21:45 crc kubenswrapper[4789]: I0202 21:21:45.419504 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:21:45 crc kubenswrapper[4789]: I0202 21:21:45.419410 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:21:45 crc kubenswrapper[4789]: E0202 21:21:45.419645 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:21:45 crc kubenswrapper[4789]: E0202 21:21:45.419804 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:21:45 crc kubenswrapper[4789]: E0202 21:21:45.420015 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:21:45 crc kubenswrapper[4789]: E0202 21:21:45.534063 4789 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 21:21:46 crc kubenswrapper[4789]: I0202 21:21:46.418842 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:21:46 crc kubenswrapper[4789]: E0202 21:21:46.419108 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:21:47 crc kubenswrapper[4789]: I0202 21:21:47.418801 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:21:47 crc kubenswrapper[4789]: E0202 21:21:47.419164 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:21:47 crc kubenswrapper[4789]: I0202 21:21:47.419552 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:21:47 crc kubenswrapper[4789]: E0202 21:21:47.419708 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:21:47 crc kubenswrapper[4789]: I0202 21:21:47.419990 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:21:47 crc kubenswrapper[4789]: E0202 21:21:47.420133 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:21:48 crc kubenswrapper[4789]: I0202 21:21:48.419075 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:21:48 crc kubenswrapper[4789]: E0202 21:21:48.419279 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:21:48 crc kubenswrapper[4789]: I0202 21:21:48.419870 4789 scope.go:117] "RemoveContainer" containerID="877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc" Feb 02 21:21:49 crc kubenswrapper[4789]: I0202 21:21:49.354914 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cl42l" podStartSLOduration=107.354895011 podStartE2EDuration="1m47.354895011s" podCreationTimestamp="2026-02-02 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:21:40.455758882 +0000 UTC m=+120.750783941" watchObservedRunningTime="2026-02-02 21:21:49.354895011 +0000 UTC m=+129.649920040" Feb 02 21:21:49 crc kubenswrapper[4789]: I0202 21:21:49.355596 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-vjbpg"] Feb 02 21:21:49 crc kubenswrapper[4789]: I0202 21:21:49.355687 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:21:49 crc kubenswrapper[4789]: E0202 21:21:49.355791 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:21:49 crc kubenswrapper[4789]: I0202 21:21:49.419475 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:21:49 crc kubenswrapper[4789]: I0202 21:21:49.419537 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:21:49 crc kubenswrapper[4789]: E0202 21:21:49.419595 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:21:49 crc kubenswrapper[4789]: I0202 21:21:49.419607 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:21:49 crc kubenswrapper[4789]: E0202 21:21:49.419679 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:21:49 crc kubenswrapper[4789]: E0202 21:21:49.419837 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:21:49 crc kubenswrapper[4789]: I0202 21:21:49.533062 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w8vkt_2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6/ovnkube-controller/3.log" Feb 02 21:21:49 crc kubenswrapper[4789]: I0202 21:21:49.535920 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" event={"ID":"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6","Type":"ContainerStarted","Data":"7129ba83a6eb1565ad3687f3eceafd29c68b5e8d14512ae91de8f36b552900a7"} Feb 02 21:21:49 crc kubenswrapper[4789]: I0202 21:21:49.536293 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:21:49 crc kubenswrapper[4789]: I0202 21:21:49.573142 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" podStartSLOduration=106.573114489 podStartE2EDuration="1m46.573114489s" podCreationTimestamp="2026-02-02 21:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:21:49.567453167 +0000 UTC m=+129.862478236" watchObservedRunningTime="2026-02-02 21:21:49.573114489 +0000 UTC m=+129.868139538" Feb 02 21:21:50 crc kubenswrapper[4789]: E0202 21:21:50.535166 4789 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 21:21:51 crc kubenswrapper[4789]: I0202 21:21:51.419291 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:21:51 crc kubenswrapper[4789]: I0202 21:21:51.419388 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:21:51 crc kubenswrapper[4789]: I0202 21:21:51.419469 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:21:51 crc kubenswrapper[4789]: E0202 21:21:51.419639 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:21:51 crc kubenswrapper[4789]: I0202 21:21:51.419834 4789 scope.go:117] "RemoveContainer" containerID="75cf318c3d63c5316cbeba8abb93919973f88b415ed7116b55333813b8a889fa" Feb 02 21:21:51 crc kubenswrapper[4789]: I0202 21:21:51.419944 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:21:51 crc kubenswrapper[4789]: E0202 21:21:51.420065 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:21:51 crc kubenswrapper[4789]: E0202 21:21:51.420231 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:21:51 crc kubenswrapper[4789]: E0202 21:21:51.420318 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:21:52 crc kubenswrapper[4789]: I0202 21:21:52.553370 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2x5ws_70a32268-2a2d-47f3-9fc6-4281b8dc6a02/kube-multus/1.log" Feb 02 21:21:52 crc kubenswrapper[4789]: I0202 21:21:52.553458 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2x5ws" event={"ID":"70a32268-2a2d-47f3-9fc6-4281b8dc6a02","Type":"ContainerStarted","Data":"9d3648a8bdabecf0fe7e95880b046a5f5b8a91912f23059a00680ec150976c5f"} Feb 02 21:21:52 crc kubenswrapper[4789]: I0202 21:21:52.579888 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-2x5ws" podStartSLOduration=110.579858691 podStartE2EDuration="1m50.579858691s" podCreationTimestamp="2026-02-02 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:21:52.578461591 +0000 UTC m=+132.873486700" watchObservedRunningTime="2026-02-02 21:21:52.579858691 +0000 UTC m=+132.874883750" Feb 02 21:21:53 crc kubenswrapper[4789]: I0202 21:21:53.418700 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:21:53 crc kubenswrapper[4789]: I0202 21:21:53.418658 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:21:53 crc kubenswrapper[4789]: I0202 21:21:53.418804 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:21:53 crc kubenswrapper[4789]: E0202 21:21:53.419392 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:21:53 crc kubenswrapper[4789]: I0202 21:21:53.419425 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:21:53 crc kubenswrapper[4789]: E0202 21:21:53.419714 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:21:53 crc kubenswrapper[4789]: E0202 21:21:53.419740 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:21:53 crc kubenswrapper[4789]: E0202 21:21:53.419865 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:21:55 crc kubenswrapper[4789]: I0202 21:21:55.418842 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:21:55 crc kubenswrapper[4789]: I0202 21:21:55.418906 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:21:55 crc kubenswrapper[4789]: I0202 21:21:55.419021 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:21:55 crc kubenswrapper[4789]: E0202 21:21:55.419029 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 21:21:55 crc kubenswrapper[4789]: I0202 21:21:55.419074 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:21:55 crc kubenswrapper[4789]: E0202 21:21:55.419201 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 21:21:55 crc kubenswrapper[4789]: E0202 21:21:55.419301 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 21:21:55 crc kubenswrapper[4789]: E0202 21:21:55.419392 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vjbpg" podUID="2dc26662-64d3-47f0-9e0d-d340760ca348" Feb 02 21:21:57 crc kubenswrapper[4789]: I0202 21:21:57.419370 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:21:57 crc kubenswrapper[4789]: I0202 21:21:57.419429 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:21:57 crc kubenswrapper[4789]: I0202 21:21:57.419468 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:21:57 crc kubenswrapper[4789]: I0202 21:21:57.419614 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:21:57 crc kubenswrapper[4789]: I0202 21:21:57.422892 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 02 21:21:57 crc kubenswrapper[4789]: I0202 21:21:57.423140 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 02 21:21:57 crc kubenswrapper[4789]: I0202 21:21:57.423183 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 02 21:21:57 crc kubenswrapper[4789]: I0202 21:21:57.423294 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 02 21:21:57 crc kubenswrapper[4789]: I0202 21:21:57.423410 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 02 21:21:57 crc kubenswrapper[4789]: I0202 21:21:57.423546 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.185895 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.231835 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4qf8w"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.232797 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-4qf8w" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.235317 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4z5px"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.239183 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hqmrz"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.239742 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hqmrz" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.235545 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.240526 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4z5px" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.235797 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.238411 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.238518 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.238543 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.238662 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.238673 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.246096 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.246886 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.251903 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.252205 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.254350 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.254699 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.254734 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.255316 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.255615 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.255781 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.263222 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.263505 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.263981 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 
21:22:00.265066 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.267148 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gv5lf"]
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.268077 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bdqgm"]
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.268423 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gv5lf"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.268754 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.268775 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bdqgm"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.269074 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-fzd6q"]
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.269701 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fzd6q"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.272563 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.281091 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.281447 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.281461 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lnf5c"]
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.282336 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-pkpwd"]
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.282469 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6a6d8d36-0b11-496c-b07a-145358594fa2-audit\") pod \"apiserver-76f77b778f-4qf8w\" (UID: \"6a6d8d36-0b11-496c-b07a-145358594fa2\") " pod="openshift-apiserver/apiserver-76f77b778f-4qf8w"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.282527 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62b2eeb5-6380-43c4-9c2e-e7aa29c88057-serving-cert\") pod \"apiserver-7bbb656c7d-gv5lf\" (UID: \"62b2eeb5-6380-43c4-9c2e-e7aa29c88057\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gv5lf"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.282561 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6777175c-7525-4ae6-9b3e-391b3e21abf8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-4z5px\" (UID: \"6777175c-7525-4ae6-9b3e-391b3e21abf8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4z5px"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.282603 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2fc2384-c0fd-421b-b715-39300bec870d-config\") pod \"machine-approver-56656f9798-fzd6q\" (UID: \"b2fc2384-c0fd-421b-b715-39300bec870d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fzd6q"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.282625 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6a6d8d36-0b11-496c-b07a-145358594fa2-node-pullsecrets\") pod \"apiserver-76f77b778f-4qf8w\" (UID: \"6a6d8d36-0b11-496c-b07a-145358594fa2\") " pod="openshift-apiserver/apiserver-76f77b778f-4qf8w"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.282644 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a6d8d36-0b11-496c-b07a-145358594fa2-serving-cert\") pod \"apiserver-76f77b778f-4qf8w\" (UID: \"6a6d8d36-0b11-496c-b07a-145358594fa2\") " pod="openshift-apiserver/apiserver-76f77b778f-4qf8w"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.282666 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62b2eeb5-6380-43c4-9c2e-e7aa29c88057-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gv5lf\" (UID: \"62b2eeb5-6380-43c4-9c2e-e7aa29c88057\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gv5lf"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.282687 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6a6d8d36-0b11-496c-b07a-145358594fa2-etcd-serving-ca\") pod \"apiserver-76f77b778f-4qf8w\" (UID: \"6a6d8d36-0b11-496c-b07a-145358594fa2\") " pod="openshift-apiserver/apiserver-76f77b778f-4qf8w"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.282707 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169-config\") pod \"route-controller-manager-6576b87f9c-bdqgm\" (UID: \"50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bdqgm"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.282728 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/62b2eeb5-6380-43c4-9c2e-e7aa29c88057-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gv5lf\" (UID: \"62b2eeb5-6380-43c4-9c2e-e7aa29c88057\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gv5lf"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.282748 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6a6d8d36-0b11-496c-b07a-145358594fa2-audit-dir\") pod \"apiserver-76f77b778f-4qf8w\" (UID: \"6a6d8d36-0b11-496c-b07a-145358594fa2\") " pod="openshift-apiserver/apiserver-76f77b778f-4qf8w"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.282767 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6777175c-7525-4ae6-9b3e-391b3e21abf8-serving-cert\") pod \"controller-manager-879f6c89f-4z5px\" (UID: \"6777175c-7525-4ae6-9b3e-391b3e21abf8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4z5px"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.282847 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6777175c-7525-4ae6-9b3e-391b3e21abf8-client-ca\") pod \"controller-manager-879f6c89f-4z5px\" (UID: \"6777175c-7525-4ae6-9b3e-391b3e21abf8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4z5px"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.282868 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/62b2eeb5-6380-43c4-9c2e-e7aa29c88057-etcd-client\") pod \"apiserver-7bbb656c7d-gv5lf\" (UID: \"62b2eeb5-6380-43c4-9c2e-e7aa29c88057\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gv5lf"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.282888 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/62b2eeb5-6380-43c4-9c2e-e7aa29c88057-encryption-config\") pod \"apiserver-7bbb656c7d-gv5lf\" (UID: \"62b2eeb5-6380-43c4-9c2e-e7aa29c88057\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gv5lf"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.282926 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjxqv\" (UniqueName: \"kubernetes.io/projected/62b2eeb5-6380-43c4-9c2e-e7aa29c88057-kube-api-access-kjxqv\") pod \"apiserver-7bbb656c7d-gv5lf\" (UID: \"62b2eeb5-6380-43c4-9c2e-e7aa29c88057\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gv5lf"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.282971 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82a78e00-0795-4de5-8062-f92878ea6c72-config\") pod \"machine-api-operator-5694c8668f-hqmrz\" (UID: \"82a78e00-0795-4de5-8062-f92878ea6c72\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hqmrz"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.282999 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svjrw\" (UniqueName: \"kubernetes.io/projected/6a6d8d36-0b11-496c-b07a-145358594fa2-kube-api-access-svjrw\") pod \"apiserver-76f77b778f-4qf8w\" (UID: \"6a6d8d36-0b11-496c-b07a-145358594fa2\") " pod="openshift-apiserver/apiserver-76f77b778f-4qf8w"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.283041 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6a6d8d36-0b11-496c-b07a-145358594fa2-encryption-config\") pod \"apiserver-76f77b778f-4qf8w\" (UID: \"6a6d8d36-0b11-496c-b07a-145358594fa2\") " pod="openshift-apiserver/apiserver-76f77b778f-4qf8w"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.283071 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hcxw\" (UniqueName: \"kubernetes.io/projected/b2fc2384-c0fd-421b-b715-39300bec870d-kube-api-access-4hcxw\") pod \"machine-approver-56656f9798-fzd6q\" (UID: \"b2fc2384-c0fd-421b-b715-39300bec870d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fzd6q"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.281725 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.283098 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddt5h\" (UniqueName: \"kubernetes.io/projected/6777175c-7525-4ae6-9b3e-391b3e21abf8-kube-api-access-ddt5h\") pod \"controller-manager-879f6c89f-4z5px\" (UID: \"6777175c-7525-4ae6-9b3e-391b3e21abf8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4z5px"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.281760 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.283154 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.281908 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.283240 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.283473 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.283544 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.283201 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc5qg\" (UniqueName: \"kubernetes.io/projected/82a78e00-0795-4de5-8062-f92878ea6c72-kube-api-access-gc5qg\") pod \"machine-api-operator-5694c8668f-hqmrz\" (UID: \"82a78e00-0795-4de5-8062-f92878ea6c72\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hqmrz"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.283614 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.283620 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.283648 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b2fc2384-c0fd-421b-b715-39300bec870d-machine-approver-tls\") pod \"machine-approver-56656f9798-fzd6q\" (UID: \"b2fc2384-c0fd-421b-b715-39300bec870d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fzd6q"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.283667 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.283008 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.283689 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6777175c-7525-4ae6-9b3e-391b3e21abf8-config\") pod \"controller-manager-879f6c89f-4z5px\" (UID: \"6777175c-7525-4ae6-9b3e-391b3e21abf8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4z5px"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.283717 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.283069 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.283771 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.283786 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/82a78e00-0795-4de5-8062-f92878ea6c72-images\") pod \"machine-api-operator-5694c8668f-hqmrz\" (UID: \"82a78e00-0795-4de5-8062-f92878ea6c72\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hqmrz"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.283818 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.283855 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.283820 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/62b2eeb5-6380-43c4-9c2e-e7aa29c88057-audit-policies\") pod \"apiserver-7bbb656c7d-gv5lf\" (UID: \"62b2eeb5-6380-43c4-9c2e-e7aa29c88057\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gv5lf"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.283889 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.283897 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a6d8d36-0b11-496c-b07a-145358594fa2-config\") pod \"apiserver-76f77b778f-4qf8w\" (UID: \"6a6d8d36-0b11-496c-b07a-145358594fa2\") " pod="openshift-apiserver/apiserver-76f77b778f-4qf8w"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.283937 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169-serving-cert\") pod \"route-controller-manager-6576b87f9c-bdqgm\" (UID: \"50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bdqgm"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.283968 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.284016 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/82a78e00-0795-4de5-8062-f92878ea6c72-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hqmrz\" (UID: \"82a78e00-0795-4de5-8062-f92878ea6c72\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hqmrz"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.284047 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tfzp\" (UniqueName: \"kubernetes.io/projected/50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169-kube-api-access-6tfzp\") pod \"route-controller-manager-6576b87f9c-bdqgm\" (UID: \"50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bdqgm"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.284086 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a6d8d36-0b11-496c-b07a-145358594fa2-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4qf8w\" (UID: \"6a6d8d36-0b11-496c-b07a-145358594fa2\") " pod="openshift-apiserver/apiserver-76f77b778f-4qf8w"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.284118 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b2fc2384-c0fd-421b-b715-39300bec870d-auth-proxy-config\") pod \"machine-approver-56656f9798-fzd6q\" (UID: \"b2fc2384-c0fd-421b-b715-39300bec870d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fzd6q"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.284150 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169-client-ca\") pod \"route-controller-manager-6576b87f9c-bdqgm\" (UID: \"50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bdqgm"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.284176 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6a6d8d36-0b11-496c-b07a-145358594fa2-image-import-ca\") pod \"apiserver-76f77b778f-4qf8w\" (UID: \"6a6d8d36-0b11-496c-b07a-145358594fa2\") " pod="openshift-apiserver/apiserver-76f77b778f-4qf8w"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.284207 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/62b2eeb5-6380-43c4-9c2e-e7aa29c88057-audit-dir\") pod \"apiserver-7bbb656c7d-gv5lf\" (UID: \"62b2eeb5-6380-43c4-9c2e-e7aa29c88057\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gv5lf"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.284234 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6a6d8d36-0b11-496c-b07a-145358594fa2-etcd-client\") pod \"apiserver-76f77b778f-4qf8w\" (UID: \"6a6d8d36-0b11-496c-b07a-145358594fa2\") " pod="openshift-apiserver/apiserver-76f77b778f-4qf8w"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.285665 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-pkpwd"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.285906 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lnf5c"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.286781 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-x568j"]
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.287554 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-x568j"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.292982 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.293482 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.303303 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fv5wb"]
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.305684 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-4lbfr"]
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.307740 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.315592 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-4lbfr"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.317436 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-dd7g5"]
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.318224 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.318464 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.318638 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fv5wb"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.319470 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-dd7g5"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.320341 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-zfv5p"]
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.320742 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.321453 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.321636 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.321769 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.321826 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.321867 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.321760 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.321995 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.322036 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.322344 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.322535 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.324649 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.324896 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-25rjx"]
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.325760 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nsqw4"]
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.326327 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nsqw4"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.326923 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8787r"]
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.327193 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-25rjx"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.327476 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-8787r"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.327990 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.330215 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-x2wtg"]
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.330962 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.331331 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-jwp46"]
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.331773 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.331917 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-jwp46"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.333038 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.334230 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.335296 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.335660 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.335715 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.335818 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.336018 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.336119 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.336649 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.336833 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.336962 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.337101 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.339034 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.339242 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.339721 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.339801 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.339830 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.339892 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.339949 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.340016 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.340043 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.340098 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.340167 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.340176 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.340272 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.340429 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.340566 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.340693 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.340658 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6t7gf"]
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.340951 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.341058 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.341170 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.341174 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.341204 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lbmnl"]
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.341296 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.341354 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6t7gf"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.341711 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84vfz"]
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.341724 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.341803 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.342304 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.342625 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.355379 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hcxsv"]
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.356351 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hc84b"]
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.357252 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hc84b"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.360310 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.361495 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.370367 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lbmnl"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.370450 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84vfz"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.384120 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.385506 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-wgzrz"]
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.386117 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169-serving-cert\") pod \"route-controller-manager-6576b87f9c-bdqgm\" (UID: \"50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bdqgm"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.386152 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/48272be3-d48f-45b1-99a9-28ed3ba310ed-profile-collector-cert\") pod \"catalog-operator-68c6474976-hcxsv\" (UID: \"48272be3-d48f-45b1-99a9-28ed3ba310ed\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hcxsv"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.386179 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/82a78e00-0795-4de5-8062-f92878ea6c72-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hqmrz\" (UID: \"82a78e00-0795-4de5-8062-f92878ea6c72\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hqmrz"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.386201 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tfzp\" (UniqueName: \"kubernetes.io/projected/50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169-kube-api-access-6tfzp\") pod \"route-controller-manager-6576b87f9c-bdqgm\" (UID: \"50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bdqgm"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.386222 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a6d8d36-0b11-496c-b07a-145358594fa2-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4qf8w\" (UID: \"6a6d8d36-0b11-496c-b07a-145358594fa2\") " pod="openshift-apiserver/apiserver-76f77b778f-4qf8w"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.386244 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dc42f967-8fe9-4a89-8e82-5272f070ed73-metrics-tls\") pod \"dns-operator-744455d44c-pkpwd\" (UID: \"dc42f967-8fe9-4a89-8e82-5272f070ed73\") " pod="openshift-dns-operator/dns-operator-744455d44c-pkpwd"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.386261 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-wgzrz"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.386267 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.386288 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vf9z\" (UniqueName: \"kubernetes.io/projected/a2edcffa-d93c-4125-863d-05812a4ff79a-kube-api-access-2vf9z\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.386312 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b2fc2384-c0fd-421b-b715-39300bec870d-auth-proxy-config\") pod \"machine-approver-56656f9798-fzd6q\" (UID: \"b2fc2384-c0fd-421b-b715-39300bec870d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fzd6q"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.386334 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gf5c\" (UniqueName: \"kubernetes.io/projected/9c0c6217-0e72-4682-8417-f6f6b2809bfa-kube-api-access-5gf5c\") pod \"olm-operator-6b444d44fb-lbmnl\" (UID: \"9c0c6217-0e72-4682-8417-f6f6b2809bfa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lbmnl"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.386355 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.386377 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a2edcffa-d93c-4125-863d-05812a4ff79a-audit-policies\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.386399 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pclb\" (UniqueName: \"kubernetes.io/projected/dc42f967-8fe9-4a89-8e82-5272f070ed73-kube-api-access-6pclb\") pod \"dns-operator-744455d44c-pkpwd\" (UID: \"dc42f967-8fe9-4a89-8e82-5272f070ed73\") " pod="openshift-dns-operator/dns-operator-744455d44c-pkpwd"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.386422 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9c0c6217-0e72-4682-8417-f6f6b2809bfa-srv-cert\") pod \"olm-operator-6b444d44fb-lbmnl\" (UID: \"9c0c6217-0e72-4682-8417-f6f6b2809bfa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lbmnl"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.386443 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.386452 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lssjz"]
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.386468 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/62b2eeb5-6380-43c4-9c2e-e7aa29c88057-audit-dir\") pod \"apiserver-7bbb656c7d-gv5lf\" (UID: \"62b2eeb5-6380-43c4-9c2e-e7aa29c88057\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gv5lf"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.386489 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169-client-ca\") pod \"route-controller-manager-6576b87f9c-bdqgm\" (UID: \"50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bdqgm"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.386509 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6a6d8d36-0b11-496c-b07a-145358594fa2-image-import-ca\") pod \"apiserver-76f77b778f-4qf8w\" (UID: \"6a6d8d36-0b11-496c-b07a-145358594fa2\") " pod="openshift-apiserver/apiserver-76f77b778f-4qf8w"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.386532 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6a6d8d36-0b11-496c-b07a-145358594fa2-etcd-client\") pod \"apiserver-76f77b778f-4qf8w\" (UID: \"6a6d8d36-0b11-496c-b07a-145358594fa2\") " pod="openshift-apiserver/apiserver-76f77b778f-4qf8w"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.386571 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6a6d8d36-0b11-496c-b07a-145358594fa2-audit\") pod \"apiserver-76f77b778f-4qf8w\" (UID: \"6a6d8d36-0b11-496c-b07a-145358594fa2\") " pod="openshift-apiserver/apiserver-76f77b778f-4qf8w"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.391724 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62b2eeb5-6380-43c4-9c2e-e7aa29c88057-serving-cert\") pod \"apiserver-7bbb656c7d-gv5lf\" (UID: \"62b2eeb5-6380-43c4-9c2e-e7aa29c88057\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gv5lf"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.391756 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169-client-ca\") pod \"route-controller-manager-6576b87f9c-bdqgm\" (UID: \"50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bdqgm"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.391768 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6777175c-7525-4ae6-9b3e-391b3e21abf8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-4z5px\" (UID: \"6777175c-7525-4ae6-9b3e-391b3e21abf8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4z5px"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.388513 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a6d8d36-0b11-496c-b07a-145358594fa2-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4qf8w\" (UID: \"6a6d8d36-0b11-496c-b07a-145358594fa2\") " pod="openshift-apiserver/apiserver-76f77b778f-4qf8w"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.391793 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/48272be3-d48f-45b1-99a9-28ed3ba310ed-srv-cert\") pod \"catalog-operator-68c6474976-hcxsv\" (UID: \"48272be3-d48f-45b1-99a9-28ed3ba310ed\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hcxsv"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.391815 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2fc2384-c0fd-421b-b715-39300bec870d-config\") pod \"machine-approver-56656f9798-fzd6q\" (UID: \"b2fc2384-c0fd-421b-b715-39300bec870d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fzd6q"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.391822 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/62b2eeb5-6380-43c4-9c2e-e7aa29c88057-audit-dir\") pod \"apiserver-7bbb656c7d-gv5lf\" (UID: \"62b2eeb5-6380-43c4-9c2e-e7aa29c88057\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gv5lf"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.391842 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6a6d8d36-0b11-496c-b07a-145358594fa2-node-pullsecrets\") pod \"apiserver-76f77b778f-4qf8w\" (UID: \"6a6d8d36-0b11-496c-b07a-145358594fa2\") " pod="openshift-apiserver/apiserver-76f77b778f-4qf8w"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.391861 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a6d8d36-0b11-496c-b07a-145358594fa2-serving-cert\") pod \"apiserver-76f77b778f-4qf8w\" (UID: \"6a6d8d36-0b11-496c-b07a-145358594fa2\") " pod="openshift-apiserver/apiserver-76f77b778f-4qf8w"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.391921 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.391939 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e9b63e7c-d6fb-4ead-8ec0-2b38f5de0609-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-lnf5c\" (UID: \"e9b63e7c-d6fb-4ead-8ec0-2b38f5de0609\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lnf5c"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.391960 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62b2eeb5-6380-43c4-9c2e-e7aa29c88057-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gv5lf\" (UID: \"62b2eeb5-6380-43c4-9c2e-e7aa29c88057\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gv5lf"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.391977 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6a6d8d36-0b11-496c-b07a-145358594fa2-etcd-serving-ca\") pod \"apiserver-76f77b778f-4qf8w\" (UID: \"6a6d8d36-0b11-496c-b07a-145358594fa2\") " pod="openshift-apiserver/apiserver-76f77b778f-4qf8w"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.391994 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/abf25fb1-39e2-4b26-9d3b-1cebdcbc7f98-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-84vfz\" (UID: \"abf25fb1-39e2-4b26-9d3b-1cebdcbc7f98\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84vfz"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.392015 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169-config\") pod \"route-controller-manager-6576b87f9c-bdqgm\" (UID: \"50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bdqgm"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.392032 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e9b63e7c-d6fb-4ead-8ec0-2b38f5de0609-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-lnf5c\" (UID: \"e9b63e7c-d6fb-4ead-8ec0-2b38f5de0609\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lnf5c"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.392051 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/62b2eeb5-6380-43c4-9c2e-e7aa29c88057-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gv5lf\" (UID: \"62b2eeb5-6380-43c4-9c2e-e7aa29c88057\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gv5lf"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.392066 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abf25fb1-39e2-4b26-9d3b-1cebdcbc7f98-config\") pod \"kube-controller-manager-operator-78b949d7b-84vfz\" (UID: \"abf25fb1-39e2-4b26-9d3b-1cebdcbc7f98\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84vfz"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.392084 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6a6d8d36-0b11-496c-b07a-145358594fa2-audit-dir\") pod \"apiserver-76f77b778f-4qf8w\" (UID: \"6a6d8d36-0b11-496c-b07a-145358594fa2\") " pod="openshift-apiserver/apiserver-76f77b778f-4qf8w"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.392104 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a2edcffa-d93c-4125-863d-05812a4ff79a-audit-dir\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.392121 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.392149 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6777175c-7525-4ae6-9b3e-391b3e21abf8-client-ca\") pod \"controller-manager-879f6c89f-4z5px\" (UID: \"6777175c-7525-4ae6-9b3e-391b3e21abf8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4z5px"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.392163 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6777175c-7525-4ae6-9b3e-391b3e21abf8-serving-cert\") pod \"controller-manager-879f6c89f-4z5px\" (UID: \"6777175c-7525-4ae6-9b3e-391b3e21abf8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4z5px"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.392180 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/62b2eeb5-6380-43c4-9c2e-e7aa29c88057-etcd-client\") pod \"apiserver-7bbb656c7d-gv5lf\" (UID: \"62b2eeb5-6380-43c4-9c2e-e7aa29c88057\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gv5lf"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.392197 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d8qg\" (UniqueName: \"kubernetes.io/projected/899bce18-bfcc-42b8-ab5e-149d16e8eddb-kube-api-access-2d8qg\") pod \"downloads-7954f5f757-dd7g5\" (UID: \"899bce18-bfcc-42b8-ab5e-149d16e8eddb\") " pod="openshift-console/downloads-7954f5f757-dd7g5"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.392213 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/62b2eeb5-6380-43c4-9c2e-e7aa29c88057-encryption-config\") pod \"apiserver-7bbb656c7d-gv5lf\" (UID: \"62b2eeb5-6380-43c4-9c2e-e7aa29c88057\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gv5lf"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.392227 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.392242 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.392256 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9b63e7c-d6fb-4ead-8ec0-2b38f5de0609-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-lnf5c\" (UID: \"e9b63e7c-d6fb-4ead-8ec0-2b38f5de0609\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lnf5c"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.392278 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjxqv\" (UniqueName: \"kubernetes.io/projected/62b2eeb5-6380-43c4-9c2e-e7aa29c88057-kube-api-access-kjxqv\") pod \"apiserver-7bbb656c7d-gv5lf\" (UID: \"62b2eeb5-6380-43c4-9c2e-e7aa29c88057\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gv5lf"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.392292 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.392313 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82a78e00-0795-4de5-8062-f92878ea6c72-config\") pod \"machine-api-operator-5694c8668f-hqmrz\" (UID: \"82a78e00-0795-4de5-8062-f92878ea6c72\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hqmrz"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.392329 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svjrw\" (UniqueName: \"kubernetes.io/projected/6a6d8d36-0b11-496c-b07a-145358594fa2-kube-api-access-svjrw\") pod \"apiserver-76f77b778f-4qf8w\" (UID: \"6a6d8d36-0b11-496c-b07a-145358594fa2\") " pod="openshift-apiserver/apiserver-76f77b778f-4qf8w"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.392348 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6a6d8d36-0b11-496c-b07a-145358594fa2-encryption-config\") pod \"apiserver-76f77b778f-4qf8w\" (UID: \"6a6d8d36-0b11-496c-b07a-145358594fa2\") " pod="openshift-apiserver/apiserver-76f77b778f-4qf8w"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.392363 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abf25fb1-39e2-4b26-9d3b-1cebdcbc7f98-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-84vfz\" (UID: \"abf25fb1-39e2-4b26-9d3b-1cebdcbc7f98\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84vfz"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.392377 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg75b\" (UniqueName: \"kubernetes.io/projected/48272be3-d48f-45b1-99a9-28ed3ba310ed-kube-api-access-kg75b\") pod \"catalog-operator-68c6474976-hcxsv\" (UID: \"48272be3-d48f-45b1-99a9-28ed3ba310ed\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hcxsv"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.392386 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b2fc2384-c0fd-421b-b715-39300bec870d-auth-proxy-config\") pod \"machine-approver-56656f9798-fzd6q\" (UID: \"b2fc2384-c0fd-421b-b715-39300bec870d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fzd6q"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.392394 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hcxw\" (UniqueName: \"kubernetes.io/projected/b2fc2384-c0fd-421b-b715-39300bec870d-kube-api-access-4hcxw\") pod \"machine-approver-56656f9798-fzd6q\" (UID: \"b2fc2384-c0fd-421b-b715-39300bec870d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fzd6q"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.392426 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddt5h\" (UniqueName: \"kubernetes.io/projected/6777175c-7525-4ae6-9b3e-391b3e21abf8-kube-api-access-ddt5h\") pod \"controller-manager-879f6c89f-4z5px\" (UID: \"6777175c-7525-4ae6-9b3e-391b3e21abf8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4z5px"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.392451 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7pzz\" (UniqueName: \"kubernetes.io/projected/e9b63e7c-d6fb-4ead-8ec0-2b38f5de0609-kube-api-access-t7pzz\") pod \"cluster-image-registry-operator-dc59b4c8b-lnf5c\" (UID: \"e9b63e7c-d6fb-4ead-8ec0-2b38f5de0609\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lnf5c"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.392474 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.392490 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6777175c-7525-4ae6-9b3e-391b3e21abf8-config\") pod \"controller-manager-879f6c89f-4z5px\" (UID: \"6777175c-7525-4ae6-9b3e-391b3e21abf8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4z5px"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.392520 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc5qg\" (UniqueName: \"kubernetes.io/projected/82a78e00-0795-4de5-8062-f92878ea6c72-kube-api-access-gc5qg\") pod \"machine-api-operator-5694c8668f-hqmrz\" (UID: \"82a78e00-0795-4de5-8062-f92878ea6c72\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hqmrz"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.392536 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b2fc2384-c0fd-421b-b715-39300bec870d-machine-approver-tls\") pod \"machine-approver-56656f9798-fzd6q\" (UID: \"b2fc2384-c0fd-421b-b715-39300bec870d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fzd6q"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.392552 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.392573 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/82a78e00-0795-4de5-8062-f92878ea6c72-images\") pod \"machine-api-operator-5694c8668f-hqmrz\" (UID: \"82a78e00-0795-4de5-8062-f92878ea6c72\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hqmrz"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.392606 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/62b2eeb5-6380-43c4-9c2e-e7aa29c88057-audit-policies\") pod \"apiserver-7bbb656c7d-gv5lf\" (UID: \"62b2eeb5-6380-43c4-9c2e-e7aa29c88057\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gv5lf"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.392621 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a6d8d36-0b11-496c-b07a-145358594fa2-config\") pod \"apiserver-76f77b778f-4qf8w\" (UID: \"6a6d8d36-0b11-496c-b07a-145358594fa2\") " pod="openshift-apiserver/apiserver-76f77b778f-4qf8w"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.392637 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9c0c6217-0e72-4682-8417-f6f6b2809bfa-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lbmnl\" (UID: \"9c0c6217-0e72-4682-8417-f6f6b2809bfa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lbmnl"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.392655 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.393306 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6a6d8d36-0b11-496c-b07a-145358594fa2-image-import-ca\") pod \"apiserver-76f77b778f-4qf8w\" (UID: \"6a6d8d36-0b11-496c-b07a-145358594fa2\") " pod="openshift-apiserver/apiserver-76f77b778f-4qf8w"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.393348 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6a6d8d36-0b11-496c-b07a-145358594fa2-audit-dir\") pod \"apiserver-76f77b778f-4qf8w\" (UID: \"6a6d8d36-0b11-496c-b07a-145358594fa2\") " pod="openshift-apiserver/apiserver-76f77b778f-4qf8w"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.393639 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.394168 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6777175c-7525-4ae6-9b3e-391b3e21abf8-config\") pod \"controller-manager-879f6c89f-4z5px\" (UID: \"6777175c-7525-4ae6-9b3e-391b3e21abf8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4z5px"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.394611 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.395911 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6a6d8d36-0b11-496c-b07a-145358594fa2-etcd-client\") pod \"apiserver-76f77b778f-4qf8w\" (UID: \"6a6d8d36-0b11-496c-b07a-145358594fa2\") " pod="openshift-apiserver/apiserver-76f77b778f-4qf8w"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.396077 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.396427 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62b2eeb5-6380-43c4-9c2e-e7aa29c88057-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gv5lf\" (UID: \"62b2eeb5-6380-43c4-9c2e-e7aa29c88057\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gv5lf"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.397315 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/62b2eeb5-6380-43c4-9c2e-e7aa29c88057-audit-policies\") pod \"apiserver-7bbb656c7d-gv5lf\" (UID: \"62b2eeb5-6380-43c4-9c2e-e7aa29c88057\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gv5lf"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.397948 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169-config\") pod \"route-controller-manager-6576b87f9c-bdqgm\" (UID: \"50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bdqgm"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.398422 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/62b2eeb5-6380-43c4-9c2e-e7aa29c88057-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gv5lf\" (UID: \"62b2eeb5-6380-43c4-9c2e-e7aa29c88057\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gv5lf"
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.399063 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a6d8d36-0b11-496c-b07a-145358594fa2-config\") pod \"apiserver-76f77b778f-4qf8w\" (UID: \"6a6d8d36-0b11-496c-b07a-145358594fa2\") 
" pod="openshift-apiserver/apiserver-76f77b778f-4qf8w" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.386923 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lssjz" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.399515 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6777175c-7525-4ae6-9b3e-391b3e21abf8-client-ca\") pod \"controller-manager-879f6c89f-4z5px\" (UID: \"6777175c-7525-4ae6-9b3e-391b3e21abf8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4z5px" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.400084 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6a6d8d36-0b11-496c-b07a-145358594fa2-audit\") pod \"apiserver-76f77b778f-4qf8w\" (UID: \"6a6d8d36-0b11-496c-b07a-145358594fa2\") " pod="openshift-apiserver/apiserver-76f77b778f-4qf8w" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.386876 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zjp5b"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.400180 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.400514 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6df8n"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.400944 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6df8n" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.401645 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6777175c-7525-4ae6-9b3e-391b3e21abf8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-4z5px\" (UID: \"6777175c-7525-4ae6-9b3e-391b3e21abf8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4z5px" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.401904 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6a6d8d36-0b11-496c-b07a-145358594fa2-etcd-serving-ca\") pod \"apiserver-76f77b778f-4qf8w\" (UID: \"6a6d8d36-0b11-496c-b07a-145358594fa2\") " pod="openshift-apiserver/apiserver-76f77b778f-4qf8w" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.401985 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zjp5b" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.402647 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2fc2384-c0fd-421b-b715-39300bec870d-config\") pod \"machine-approver-56656f9798-fzd6q\" (UID: \"b2fc2384-c0fd-421b-b715-39300bec870d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fzd6q" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.403049 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/82a78e00-0795-4de5-8062-f92878ea6c72-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hqmrz\" (UID: \"82a78e00-0795-4de5-8062-f92878ea6c72\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hqmrz" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.403246 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/82a78e00-0795-4de5-8062-f92878ea6c72-images\") pod \"machine-api-operator-5694c8668f-hqmrz\" (UID: \"82a78e00-0795-4de5-8062-f92878ea6c72\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hqmrz" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.403530 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6a6d8d36-0b11-496c-b07a-145358594fa2-node-pullsecrets\") pod \"apiserver-76f77b778f-4qf8w\" (UID: \"6a6d8d36-0b11-496c-b07a-145358594fa2\") " pod="openshift-apiserver/apiserver-76f77b778f-4qf8w" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.404756 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82a78e00-0795-4de5-8062-f92878ea6c72-config\") pod \"machine-api-operator-5694c8668f-hqmrz\" (UID: \"82a78e00-0795-4de5-8062-f92878ea6c72\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hqmrz" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.404844 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.405249 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b2fc2384-c0fd-421b-b715-39300bec870d-machine-approver-tls\") pod \"machine-approver-56656f9798-fzd6q\" (UID: \"b2fc2384-c0fd-421b-b715-39300bec870d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fzd6q" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.405312 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nqts5"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.407166 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a6d8d36-0b11-496c-b07a-145358594fa2-serving-cert\") pod \"apiserver-76f77b778f-4qf8w\" (UID: \"6a6d8d36-0b11-496c-b07a-145358594fa2\") " pod="openshift-apiserver/apiserver-76f77b778f-4qf8w" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.407894 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6777175c-7525-4ae6-9b3e-391b3e21abf8-serving-cert\") pod \"controller-manager-879f6c89f-4z5px\" (UID: \"6777175c-7525-4ae6-9b3e-391b3e21abf8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4z5px" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.408797 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-bp7g7"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.408961 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/62b2eeb5-6380-43c4-9c2e-e7aa29c88057-etcd-client\") pod \"apiserver-7bbb656c7d-gv5lf\" (UID: \"62b2eeb5-6380-43c4-9c2e-e7aa29c88057\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gv5lf" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.409322 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-w594l"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.409792 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c9zn5"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.409813 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62b2eeb5-6380-43c4-9c2e-e7aa29c88057-serving-cert\") pod \"apiserver-7bbb656c7d-gv5lf\" (UID: \"62b2eeb5-6380-43c4-9c2e-e7aa29c88057\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gv5lf" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.409930 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nqts5" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.410241 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bp7g7" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.410414 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c9zn5" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.410458 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-w594l" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.410799 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hcxsv" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.411550 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169-serving-cert\") pod \"route-controller-manager-6576b87f9c-bdqgm\" (UID: \"50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bdqgm" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.411609 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zn96z"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.412020 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501115-zcgzz"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.412440 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501115-zcgzz" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.413456 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-85qfd"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.414011 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7sn8m"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.414030 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zn96z" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.414100 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85qfd" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.414384 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/62b2eeb5-6380-43c4-9c2e-e7aa29c88057-encryption-config\") pod \"apiserver-7bbb656c7d-gv5lf\" (UID: \"62b2eeb5-6380-43c4-9c2e-e7aa29c88057\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gv5lf" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.414413 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.414649 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7sn8m" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.415037 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6a6d8d36-0b11-496c-b07a-145358594fa2-encryption-config\") pod \"apiserver-76f77b778f-4qf8w\" (UID: \"6a6d8d36-0b11-496c-b07a-145358594fa2\") " pod="openshift-apiserver/apiserver-76f77b778f-4qf8w" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.416085 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6n95r"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.416782 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-6n95r" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.416941 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4qf8w"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.417899 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-25gsr"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.418699 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25gsr" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.432925 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-26j4t"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.435043 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hqmrz"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.436210 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4z5px"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.436235 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gv5lf"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.436250 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bdqgm"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.436264 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-pkpwd"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.436277 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-dd7g5"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.435377 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-26j4t" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.436632 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.438989 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-x568j"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.444422 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lbmnl"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.447858 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nsqw4"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.449859 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6t7gf"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.452087 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-rlwl7"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.452867 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-rlwl7" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.453694 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lnf5c"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.454883 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.455600 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-4lbfr"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.457331 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fv5wb"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.460640 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-x2wtg"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.461796 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-zfv5p"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.463132 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6df8n"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.464308 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-w594l"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.465369 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8787r"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.466626 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nqts5"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.468713 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7sn8m"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.469551 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-85qfd"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.472810 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lssjz"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.474038 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hcxsv"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.475704 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-cfghm"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.475746 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.476365 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hc84b"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.476456 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-cfghm" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.477439 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84vfz"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.478992 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dzvhk"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.480333 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-dzvhk" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.480705 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-25rjx"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.482727 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501115-zcgzz"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.482754 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-rlwl7"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.484751 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zjp5b"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.484934 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zn96z"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.486702 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-bp7g7"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.488273 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-wgzrz"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.489716 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-26j4t"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.491201 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-25gsr"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.492682 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6n95r"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.493374 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gf5c\" (UniqueName: \"kubernetes.io/projected/9c0c6217-0e72-4682-8417-f6f6b2809bfa-kube-api-access-5gf5c\") pod \"olm-operator-6b444d44fb-lbmnl\" (UID: \"9c0c6217-0e72-4682-8417-f6f6b2809bfa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lbmnl" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.493402 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.493423 4789 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a2edcffa-d93c-4125-863d-05812a4ff79a-audit-policies\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.493439 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pclb\" (UniqueName: \"kubernetes.io/projected/dc42f967-8fe9-4a89-8e82-5272f070ed73-kube-api-access-6pclb\") pod \"dns-operator-744455d44c-pkpwd\" (UID: \"dc42f967-8fe9-4a89-8e82-5272f070ed73\") " pod="openshift-dns-operator/dns-operator-744455d44c-pkpwd" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.493457 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9c0c6217-0e72-4682-8417-f6f6b2809bfa-srv-cert\") pod \"olm-operator-6b444d44fb-lbmnl\" (UID: \"9c0c6217-0e72-4682-8417-f6f6b2809bfa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lbmnl" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.493473 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.493515 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/48272be3-d48f-45b1-99a9-28ed3ba310ed-srv-cert\") pod \"catalog-operator-68c6474976-hcxsv\" (UID: \"48272be3-d48f-45b1-99a9-28ed3ba310ed\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hcxsv" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.493532 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.494262 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dzvhk"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.494433 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a2edcffa-d93c-4125-863d-05812a4ff79a-audit-policies\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.494529 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e9b63e7c-d6fb-4ead-8ec0-2b38f5de0609-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-lnf5c\" (UID: \"e9b63e7c-d6fb-4ead-8ec0-2b38f5de0609\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lnf5c" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.494553 4789 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/abf25fb1-39e2-4b26-9d3b-1cebdcbc7f98-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-84vfz\" (UID: \"abf25fb1-39e2-4b26-9d3b-1cebdcbc7f98\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84vfz" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.494570 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e9b63e7c-d6fb-4ead-8ec0-2b38f5de0609-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-lnf5c\" (UID: \"e9b63e7c-d6fb-4ead-8ec0-2b38f5de0609\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lnf5c" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.494603 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abf25fb1-39e2-4b26-9d3b-1cebdcbc7f98-config\") pod \"kube-controller-manager-operator-78b949d7b-84vfz\" (UID: \"abf25fb1-39e2-4b26-9d3b-1cebdcbc7f98\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84vfz" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.494618 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a2edcffa-d93c-4125-863d-05812a4ff79a-audit-dir\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.494634 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.494660 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d8qg\" (UniqueName: \"kubernetes.io/projected/899bce18-bfcc-42b8-ab5e-149d16e8eddb-kube-api-access-2d8qg\") pod \"downloads-7954f5f757-dd7g5\" (UID: \"899bce18-bfcc-42b8-ab5e-149d16e8eddb\") " pod="openshift-console/downloads-7954f5f757-dd7g5" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.494670 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.494683 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a2edcffa-d93c-4125-863d-05812a4ff79a-audit-dir\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.494679 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p" Feb 02 
21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.495316 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.495353 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9b63e7c-d6fb-4ead-8ec0-2b38f5de0609-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-lnf5c\" (UID: \"e9b63e7c-d6fb-4ead-8ec0-2b38f5de0609\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lnf5c" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.495381 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.495406 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abf25fb1-39e2-4b26-9d3b-1cebdcbc7f98-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-84vfz\" (UID: \"abf25fb1-39e2-4b26-9d3b-1cebdcbc7f98\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84vfz" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.495425 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg75b\" (UniqueName: \"kubernetes.io/projected/48272be3-d48f-45b1-99a9-28ed3ba310ed-kube-api-access-kg75b\") pod \"catalog-operator-68c6474976-hcxsv\" (UID: \"48272be3-d48f-45b1-99a9-28ed3ba310ed\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hcxsv" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.495460 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7pzz\" (UniqueName: \"kubernetes.io/projected/e9b63e7c-d6fb-4ead-8ec0-2b38f5de0609-kube-api-access-t7pzz\") pod \"cluster-image-registry-operator-dc59b4c8b-lnf5c\" (UID: \"e9b63e7c-d6fb-4ead-8ec0-2b38f5de0609\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lnf5c" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.495478 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.495507 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.495512 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.495522 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9c0c6217-0e72-4682-8417-f6f6b2809bfa-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lbmnl\" (UID: \"9c0c6217-0e72-4682-8417-f6f6b2809bfa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lbmnl" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.495618 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.495654 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/48272be3-d48f-45b1-99a9-28ed3ba310ed-profile-collector-cert\") pod \"catalog-operator-68c6474976-hcxsv\" (UID: \"48272be3-d48f-45b1-99a9-28ed3ba310ed\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hcxsv" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.495691 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dc42f967-8fe9-4a89-8e82-5272f070ed73-metrics-tls\") pod \"dns-operator-744455d44c-pkpwd\" (UID: \"dc42f967-8fe9-4a89-8e82-5272f070ed73\") " pod="openshift-dns-operator/dns-operator-744455d44c-pkpwd" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.495716 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.495742 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vf9z\" (UniqueName: \"kubernetes.io/projected/a2edcffa-d93c-4125-863d-05812a4ff79a-kube-api-access-2vf9z\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.496226 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c9zn5"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.496620 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.496851 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.497756 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e9b63e7c-d6fb-4ead-8ec0-2b38f5de0609-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-lnf5c\" (UID: \"e9b63e7c-d6fb-4ead-8ec0-2b38f5de0609\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lnf5c" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.497914 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.498375 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-jf9jd"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.498673 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9b63e7c-d6fb-4ead-8ec0-2b38f5de0609-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-lnf5c\" (UID: \"e9b63e7c-d6fb-4ead-8ec0-2b38f5de0609\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lnf5c" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.498827 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.499053 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dc42f967-8fe9-4a89-8e82-5272f070ed73-metrics-tls\") pod \"dns-operator-744455d44c-pkpwd\" (UID: \"dc42f967-8fe9-4a89-8e82-5272f070ed73\") " pod="openshift-dns-operator/dns-operator-744455d44c-pkpwd" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.499123 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.499152 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.499213 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-jf9jd" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.499790 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.499985 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.499990 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.500177 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-jf9jd"] Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.501290 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.515297 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.535150 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.555275 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.576102 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.594836 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.614978 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" 
Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.635407 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.655953 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.675296 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.716195 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.734814 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.755005 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.795208 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.815900 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.831997 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abf25fb1-39e2-4b26-9d3b-1cebdcbc7f98-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-84vfz\" (UID: \"abf25fb1-39e2-4b26-9d3b-1cebdcbc7f98\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84vfz" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.835334 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.855953 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.865843 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abf25fb1-39e2-4b26-9d3b-1cebdcbc7f98-config\") pod \"kube-controller-manager-operator-78b949d7b-84vfz\" (UID: \"abf25fb1-39e2-4b26-9d3b-1cebdcbc7f98\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84vfz" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.875360 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.880215 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9c0c6217-0e72-4682-8417-f6f6b2809bfa-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lbmnl\" (UID: \"9c0c6217-0e72-4682-8417-f6f6b2809bfa\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lbmnl" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.881965 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/48272be3-d48f-45b1-99a9-28ed3ba310ed-profile-collector-cert\") pod \"catalog-operator-68c6474976-hcxsv\" (UID: \"48272be3-d48f-45b1-99a9-28ed3ba310ed\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hcxsv" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.896466 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.915849 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.928800 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9c0c6217-0e72-4682-8417-f6f6b2809bfa-srv-cert\") pod \"olm-operator-6b444d44fb-lbmnl\" (UID: \"9c0c6217-0e72-4682-8417-f6f6b2809bfa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lbmnl" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.936618 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.956079 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.975812 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 02 21:22:00 crc kubenswrapper[4789]: I0202 21:22:00.995978 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.015822 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.035933 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.055425 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.103515 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tfzp\" (UniqueName: \"kubernetes.io/projected/50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169-kube-api-access-6tfzp\") pod \"route-controller-manager-6576b87f9c-bdqgm\" (UID: \"50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bdqgm" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.123615 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hcxw\" (UniqueName: \"kubernetes.io/projected/b2fc2384-c0fd-421b-b715-39300bec870d-kube-api-access-4hcxw\") pod \"machine-approver-56656f9798-fzd6q\" (UID: \"b2fc2384-c0fd-421b-b715-39300bec870d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fzd6q" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.143565 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-ddt5h\" (UniqueName: \"kubernetes.io/projected/6777175c-7525-4ae6-9b3e-391b3e21abf8-kube-api-access-ddt5h\") pod \"controller-manager-879f6c89f-4z5px\" (UID: \"6777175c-7525-4ae6-9b3e-391b3e21abf8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4z5px" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.155951 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.158712 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc5qg\" (UniqueName: \"kubernetes.io/projected/82a78e00-0795-4de5-8062-f92878ea6c72-kube-api-access-gc5qg\") pod \"machine-api-operator-5694c8668f-hqmrz\" (UID: \"82a78e00-0795-4de5-8062-f92878ea6c72\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hqmrz" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.176124 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hqmrz" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.203269 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svjrw\" (UniqueName: \"kubernetes.io/projected/6a6d8d36-0b11-496c-b07a-145358594fa2-kube-api-access-svjrw\") pod \"apiserver-76f77b778f-4qf8w\" (UID: \"6a6d8d36-0b11-496c-b07a-145358594fa2\") " pod="openshift-apiserver/apiserver-76f77b778f-4qf8w" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.205911 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4z5px" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.217203 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.223270 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjxqv\" (UniqueName: \"kubernetes.io/projected/62b2eeb5-6380-43c4-9c2e-e7aa29c88057-kube-api-access-kjxqv\") pod \"apiserver-7bbb656c7d-gv5lf\" (UID: \"62b2eeb5-6380-43c4-9c2e-e7aa29c88057\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gv5lf" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.236009 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.254297 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bdqgm" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.256195 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.275993 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.295637 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.299520 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fzd6q" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.315245 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 02 21:22:01 crc kubenswrapper[4789]: W0202 21:22:01.321455 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2fc2384_c0fd_421b_b715_39300bec870d.slice/crio-8fb255906b9e190ca197eb4d964259740010326becb175ddca4f194bd3e15bef WatchSource:0}: Error finding container 8fb255906b9e190ca197eb4d964259740010326becb175ddca4f194bd3e15bef: Status 404 returned error can't find the container with id 8fb255906b9e190ca197eb4d964259740010326becb175ddca4f194bd3e15bef Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.335514 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.343195 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gv5lf" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.355277 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.375538 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.395566 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.414144 4789 request.go:700] Waited for 1.009876837s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.415853 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.435721 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.452698 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hqmrz"] Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.453941 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4z5px"] Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.455681 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.460535 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-4qf8w" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.477633 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.503416 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bdqgm"] Feb 02 21:22:01 crc kubenswrapper[4789]: E0202 21:22:01.503986 4789 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 02 21:22:01 crc kubenswrapper[4789]: E0202 21:22:01.504058 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48272be3-d48f-45b1-99a9-28ed3ba310ed-srv-cert podName:48272be3-d48f-45b1-99a9-28ed3ba310ed nodeName:}" failed. No retries permitted until 2026-02-02 21:22:02.004036184 +0000 UTC m=+142.299061203 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/48272be3-d48f-45b1-99a9-28ed3ba310ed-srv-cert") pod "catalog-operator-68c6474976-hcxsv" (UID: "48272be3-d48f-45b1-99a9-28ed3ba310ed") : failed to sync secret cache: timed out waiting for the condition Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.505056 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.514548 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 02 21:22:01 crc kubenswrapper[4789]: W0202 21:22:01.515925 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50fa75e9_6b6f_4dc5_a3be_5d3e2d7f8169.slice/crio-c45c09642aad10c01c8b24b0bb37aab8b6119f44acdbf15c737612250557246a WatchSource:0}: Error finding container c45c09642aad10c01c8b24b0bb37aab8b6119f44acdbf15c737612250557246a: Status 404 returned error can't find the container with id c45c09642aad10c01c8b24b0bb37aab8b6119f44acdbf15c737612250557246a Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.534825 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.556541 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.576108 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.593661 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bdqgm" event={"ID":"50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169","Type":"ContainerStarted","Data":"c45c09642aad10c01c8b24b0bb37aab8b6119f44acdbf15c737612250557246a"} Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.594399 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.595515 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gv5lf"] Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.596321 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hqmrz" event={"ID":"82a78e00-0795-4de5-8062-f92878ea6c72","Type":"ContainerStarted","Data":"9508515cb36ef13bf357f0246ea1fd8e79551f5ada43a65333a2ef0b470df1ed"} Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.598644 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fzd6q" event={"ID":"b2fc2384-c0fd-421b-b715-39300bec870d","Type":"ContainerStarted","Data":"8fb255906b9e190ca197eb4d964259740010326becb175ddca4f194bd3e15bef"} Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.600241 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4z5px" event={"ID":"6777175c-7525-4ae6-9b3e-391b3e21abf8","Type":"ContainerStarted","Data":"cf13991a19505657ea1f94681cb0c0f157ad67e9729175ec9f35231006e8cf37"} Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.615058 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.635122 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.655246 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.663829 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4qf8w"] Feb 02 21:22:01 crc kubenswrapper[4789]: W0202 21:22:01.675111 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a6d8d36_0b11_496c_b07a_145358594fa2.slice/crio-b8a26460b8a455fb21fadb51696fb80e15eafc7d1b90d15d22b7983ef5c9403d WatchSource:0}: Error finding container b8a26460b8a455fb21fadb51696fb80e15eafc7d1b90d15d22b7983ef5c9403d: Status 404 returned error can't find the container with id b8a26460b8a455fb21fadb51696fb80e15eafc7d1b90d15d22b7983ef5c9403d Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.675258 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.694870 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.715099 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.735187 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.755565 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.775066 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.794531 4789 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.815020 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.837728 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.856943 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.874980 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.896156 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.915572 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.943079 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.955438 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.975699 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 02 21:22:01 crc kubenswrapper[4789]: I0202 21:22:01.995842 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.015355 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.016888 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/48272be3-d48f-45b1-99a9-28ed3ba310ed-srv-cert\") pod \"catalog-operator-68c6474976-hcxsv\" (UID: \"48272be3-d48f-45b1-99a9-28ed3ba310ed\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hcxsv" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.025995 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/48272be3-d48f-45b1-99a9-28ed3ba310ed-srv-cert\") pod \"catalog-operator-68c6474976-hcxsv\" (UID: \"48272be3-d48f-45b1-99a9-28ed3ba310ed\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hcxsv" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.036188 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.055980 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.076531 4789 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.096219 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.115793 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.135996 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.155565 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.176722 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.196514 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.215320 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.236290 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.255576 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.275939 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.296535 4789 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.318944 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.336669 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.382847 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gf5c\" (UniqueName: \"kubernetes.io/projected/9c0c6217-0e72-4682-8417-f6f6b2809bfa-kube-api-access-5gf5c\") pod \"olm-operator-6b444d44fb-lbmnl\" (UID: \"9c0c6217-0e72-4682-8417-f6f6b2809bfa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lbmnl" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.385736 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lbmnl" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.395891 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pclb\" (UniqueName: \"kubernetes.io/projected/dc42f967-8fe9-4a89-8e82-5272f070ed73-kube-api-access-6pclb\") pod \"dns-operator-744455d44c-pkpwd\" (UID: \"dc42f967-8fe9-4a89-8e82-5272f070ed73\") " pod="openshift-dns-operator/dns-operator-744455d44c-pkpwd" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.426965 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e9b63e7c-d6fb-4ead-8ec0-2b38f5de0609-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-lnf5c\" (UID: \"e9b63e7c-d6fb-4ead-8ec0-2b38f5de0609\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lnf5c" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.434136 4789 request.go:700] Waited for 1.939307538s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/serviceaccounts/default/token Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.446815 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/abf25fb1-39e2-4b26-9d3b-1cebdcbc7f98-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-84vfz\" (UID: \"abf25fb1-39e2-4b26-9d3b-1cebdcbc7f98\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84vfz" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.464879 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d8qg\" (UniqueName: \"kubernetes.io/projected/899bce18-bfcc-42b8-ab5e-149d16e8eddb-kube-api-access-2d8qg\") pod \"downloads-7954f5f757-dd7g5\" (UID: \"899bce18-bfcc-42b8-ab5e-149d16e8eddb\") " pod="openshift-console/downloads-7954f5f757-dd7g5" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.483896 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7pzz\" (UniqueName: \"kubernetes.io/projected/e9b63e7c-d6fb-4ead-8ec0-2b38f5de0609-kube-api-access-t7pzz\") pod \"cluster-image-registry-operator-dc59b4c8b-lnf5c\" (UID: \"e9b63e7c-d6fb-4ead-8ec0-2b38f5de0609\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lnf5c" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.499688 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg75b\" (UniqueName: \"kubernetes.io/projected/48272be3-d48f-45b1-99a9-28ed3ba310ed-kube-api-access-kg75b\") pod \"catalog-operator-68c6474976-hcxsv\" (UID: \"48272be3-d48f-45b1-99a9-28ed3ba310ed\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hcxsv" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.516957 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.527081 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vf9z\" (UniqueName: \"kubernetes.io/projected/a2edcffa-d93c-4125-863d-05812a4ff79a-kube-api-access-2vf9z\") pod \"oauth-openshift-558db77b4-zfv5p\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p" Feb 02 21:22:02 crc 
Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.552963 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-pkpwd"
Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.559827 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.563897 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lnf5c"
Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.600036 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-dd7g5"
Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.617319 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p"
Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.618804 4789 generic.go:334] "Generic (PLEG): container finished" podID="6a6d8d36-0b11-496c-b07a-145358594fa2" containerID="8d4e5f345c0f14b2592c0a366e02f3d946105ca19dec332773a617f8cf552584" exitCode=0
Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.618892 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4qf8w" event={"ID":"6a6d8d36-0b11-496c-b07a-145358594fa2","Type":"ContainerDied","Data":"8d4e5f345c0f14b2592c0a366e02f3d946105ca19dec332773a617f8cf552584"}
Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.618925 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4qf8w" event={"ID":"6a6d8d36-0b11-496c-b07a-145358594fa2","Type":"ContainerStarted","Data":"b8a26460b8a455fb21fadb51696fb80e15eafc7d1b90d15d22b7983ef5c9403d"}
Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.623490 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fzd6q" event={"ID":"b2fc2384-c0fd-421b-b715-39300bec870d","Type":"ContainerStarted","Data":"ac23e9ad5c52e30e41a6a226e07a06ea37f383fa7a167203ec19e63c9e598d4e"}
Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.623556 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fzd6q" event={"ID":"b2fc2384-c0fd-421b-b715-39300bec870d","Type":"ContainerStarted","Data":"698e9ced0d70fe0ff75babdd024a084d2547321e03498fad217ec7894bf3606a"}
Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.624881 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b523ddb6-b299-4bd7-9a33-75c025fb1805-serving-cert\") pod \"authentication-operator-69f744f599-4lbfr\" (UID: \"b523ddb6-b299-4bd7-9a33-75c025fb1805\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4lbfr"
Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.624927 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/925e8b6f-3848-4ab5-ab00-55405db2334c-metrics-tls\") pod \"ingress-operator-5b745b69d9-nsqw4\" (UID: \"925e8b6f-3848-4ab5-ab00-55405db2334c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nsqw4"
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nsqw4" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.624954 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0-service-ca-bundle\") pod \"router-default-5444994796-jwp46\" (UID: \"b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0\") " pod="openshift-ingress/router-default-5444994796-jwp46" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.624982 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-console-serving-cert\") pod \"console-f9d7485db-x568j\" (UID: \"a7cb5e21-a1f6-4e35-bf0d-e709b16d6994\") " pod="openshift-console/console-f9d7485db-x568j" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.625035 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgtml\" (UniqueName: \"kubernetes.io/projected/b4eaafb5-bf66-460f-86df-9b3825837d05-kube-api-access-hgtml\") pod \"openshift-apiserver-operator-796bbdcf4f-fv5wb\" (UID: \"b4eaafb5-bf66-460f-86df-9b3825837d05\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fv5wb" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.637052 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b523ddb6-b299-4bd7-9a33-75c025fb1805-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-4lbfr\" (UID: \"b523ddb6-b299-4bd7-9a33-75c025fb1805\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4lbfr" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.637227 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/925e8b6f-3848-4ab5-ab00-55405db2334c-bound-sa-token\") pod \"ingress-operator-5b745b69d9-nsqw4\" (UID: \"925e8b6f-3848-4ab5-ab00-55405db2334c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nsqw4" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.637285 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9743bb9-e748-40c7-a15d-c33fad88c2f2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6t7gf\" (UID: \"d9743bb9-e748-40c7-a15d-c33fad88c2f2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6t7gf" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.637331 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlmvb\" (UniqueName: \"kubernetes.io/projected/d9743bb9-e748-40c7-a15d-c33fad88c2f2-kube-api-access-tlmvb\") pod \"openshift-controller-manager-operator-756b6f6bc6-6t7gf\" (UID: \"d9743bb9-e748-40c7-a15d-c33fad88c2f2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6t7gf" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.637446 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4eaafb5-bf66-460f-86df-9b3825837d05-config\") pod 
\"openshift-apiserver-operator-796bbdcf4f-fv5wb\" (UID: \"b4eaafb5-bf66-460f-86df-9b3825837d05\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fv5wb" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.637728 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/925e8b6f-3848-4ab5-ab00-55405db2334c-trusted-ca\") pod \"ingress-operator-5b745b69d9-nsqw4\" (UID: \"925e8b6f-3848-4ab5-ab00-55405db2334c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nsqw4" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.637779 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-console-config\") pod \"console-f9d7485db-x568j\" (UID: \"a7cb5e21-a1f6-4e35-bf0d-e709b16d6994\") " pod="openshift-console/console-f9d7485db-x568j" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.637806 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5be28070-1b99-4b27-8777-7f7935ba0b6e-serving-cert\") pod \"console-operator-58897d9998-8787r\" (UID: \"5be28070-1b99-4b27-8777-7f7935ba0b6e\") " pod="openshift-console-operator/console-operator-58897d9998-8787r" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.638023 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r67r7\" (UniqueName: \"kubernetes.io/projected/b523ddb6-b299-4bd7-9a33-75c025fb1805-kube-api-access-r67r7\") pod \"authentication-operator-69f744f599-4lbfr\" (UID: \"b523ddb6-b299-4bd7-9a33-75c025fb1805\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4lbfr" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.638069 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0-stats-auth\") pod \"router-default-5444994796-jwp46\" (UID: \"b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0\") " pod="openshift-ingress/router-default-5444994796-jwp46" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.638117 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b523ddb6-b299-4bd7-9a33-75c025fb1805-config\") pod \"authentication-operator-69f744f599-4lbfr\" (UID: \"b523ddb6-b299-4bd7-9a33-75c025fb1805\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4lbfr" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.638148 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0d7fbad6-8c3b-4d05-8ca4-ef9f3f39ec4f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-25rjx\" (UID: \"0d7fbad6-8c3b-4d05-8ca4-ef9f3f39ec4f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-25rjx" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.638183 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0-default-certificate\") pod \"router-default-5444994796-jwp46\" 
(UID: \"b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0\") " pod="openshift-ingress/router-default-5444994796-jwp46" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.638219 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvtwk\" (UniqueName: \"kubernetes.io/projected/7b59cb33-d5dc-4e90-b6fe-fe3ad948c346-kube-api-access-kvtwk\") pod \"packageserver-d55dfcdfc-hc84b\" (UID: \"7b59cb33-d5dc-4e90-b6fe-fe3ad948c346\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hc84b" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.638259 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a9b60922-75eb-4c97-85d5-12c146fe6cb1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.638303 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brj2x\" (UniqueName: \"kubernetes.io/projected/a9b60922-75eb-4c97-85d5-12c146fe6cb1-kube-api-access-brj2x\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.638398 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a9b60922-75eb-4c97-85d5-12c146fe6cb1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.638430 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b523ddb6-b299-4bd7-9a33-75c025fb1805-service-ca-bundle\") pod \"authentication-operator-69f744f599-4lbfr\" (UID: \"b523ddb6-b299-4bd7-9a33-75c025fb1805\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4lbfr" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.638454 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5be28070-1b99-4b27-8777-7f7935ba0b6e-config\") pod \"console-operator-58897d9998-8787r\" (UID: \"5be28070-1b99-4b27-8777-7f7935ba0b6e\") " pod="openshift-console-operator/console-operator-58897d9998-8787r" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.638516 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a9b60922-75eb-4c97-85d5-12c146fe6cb1-bound-sa-token\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.638559 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-trusted-ca-bundle\") pod \"console-f9d7485db-x568j\" (UID: 
\"a7cb5e21-a1f6-4e35-bf0d-e709b16d6994\") " pod="openshift-console/console-f9d7485db-x568j" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.638599 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-oauth-serving-cert\") pod \"console-f9d7485db-x568j\" (UID: \"a7cb5e21-a1f6-4e35-bf0d-e709b16d6994\") " pod="openshift-console/console-f9d7485db-x568j" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.639802 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvwqc\" (UniqueName: \"kubernetes.io/projected/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-kube-api-access-zvwqc\") pod \"console-f9d7485db-x568j\" (UID: \"a7cb5e21-a1f6-4e35-bf0d-e709b16d6994\") " pod="openshift-console/console-f9d7485db-x568j" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.639853 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r98wx\" (UniqueName: \"kubernetes.io/projected/5be28070-1b99-4b27-8777-7f7935ba0b6e-kube-api-access-r98wx\") pod \"console-operator-58897d9998-8787r\" (UID: \"5be28070-1b99-4b27-8777-7f7935ba0b6e\") " pod="openshift-console-operator/console-operator-58897d9998-8787r" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.639885 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhk9q\" (UniqueName: \"kubernetes.io/projected/0d7fbad6-8c3b-4d05-8ca4-ef9f3f39ec4f-kube-api-access-fhk9q\") pod \"openshift-config-operator-7777fb866f-25rjx\" (UID: \"0d7fbad6-8c3b-4d05-8ca4-ef9f3f39ec4f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-25rjx" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.639907 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9743bb9-e748-40c7-a15d-c33fad88c2f2-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6t7gf\" (UID: \"d9743bb9-e748-40c7-a15d-c33fad88c2f2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6t7gf" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.639953 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.639983 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a9b60922-75eb-4c97-85d5-12c146fe6cb1-trusted-ca\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.640014 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltbc2\" (UniqueName: \"kubernetes.io/projected/b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0-kube-api-access-ltbc2\") pod \"router-default-5444994796-jwp46\" (UID: 
\"b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0\") " pod="openshift-ingress/router-default-5444994796-jwp46" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.640052 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a9b60922-75eb-4c97-85d5-12c146fe6cb1-registry-tls\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.640072 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-console-oauth-config\") pod \"console-f9d7485db-x568j\" (UID: \"a7cb5e21-a1f6-4e35-bf0d-e709b16d6994\") " pod="openshift-console/console-f9d7485db-x568j" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.640147 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0-metrics-certs\") pod \"router-default-5444994796-jwp46\" (UID: \"b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0\") " pod="openshift-ingress/router-default-5444994796-jwp46" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.640175 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf2s6\" (UniqueName: \"kubernetes.io/projected/925e8b6f-3848-4ab5-ab00-55405db2334c-kube-api-access-hf2s6\") pod \"ingress-operator-5b745b69d9-nsqw4\" (UID: \"925e8b6f-3848-4ab5-ab00-55405db2334c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nsqw4" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.640204 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4eaafb5-bf66-460f-86df-9b3825837d05-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-fv5wb\" (UID: \"b4eaafb5-bf66-460f-86df-9b3825837d05\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fv5wb" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.640230 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-service-ca\") pod \"console-f9d7485db-x568j\" (UID: \"a7cb5e21-a1f6-4e35-bf0d-e709b16d6994\") " pod="openshift-console/console-f9d7485db-x568j" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.640257 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5be28070-1b99-4b27-8777-7f7935ba0b6e-trusted-ca\") pod \"console-operator-58897d9998-8787r\" (UID: \"5be28070-1b99-4b27-8777-7f7935ba0b6e\") " pod="openshift-console-operator/console-operator-58897d9998-8787r" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.640358 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b59cb33-d5dc-4e90-b6fe-fe3ad948c346-apiservice-cert\") pod \"packageserver-d55dfcdfc-hc84b\" (UID: \"7b59cb33-d5dc-4e90-b6fe-fe3ad948c346\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hc84b" Feb 02 21:22:02 crc 
Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.640426 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a9b60922-75eb-4c97-85d5-12c146fe6cb1-registry-certificates\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.640453 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d7fbad6-8c3b-4d05-8ca4-ef9f3f39ec4f-serving-cert\") pod \"openshift-config-operator-7777fb866f-25rjx\" (UID: \"0d7fbad6-8c3b-4d05-8ca4-ef9f3f39ec4f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-25rjx"
Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.640477 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7b59cb33-d5dc-4e90-b6fe-fe3ad948c346-tmpfs\") pod \"packageserver-d55dfcdfc-hc84b\" (UID: \"7b59cb33-d5dc-4e90-b6fe-fe3ad948c346\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hc84b"
Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.643274 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4z5px" event={"ID":"6777175c-7525-4ae6-9b3e-391b3e21abf8","Type":"ContainerStarted","Data":"c1f504f3ae1b387311e1902cf3465280092480c81575bb7c7622dd0298fc8324"}
Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.643341 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-4z5px"
Feb 02 21:22:02 crc kubenswrapper[4789]: E0202 21:22:02.643378 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:03.143354879 +0000 UTC m=+143.438379908 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.645542 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bdqgm" event={"ID":"50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169","Type":"ContainerStarted","Data":"391d0bd791deb4acffb0e3d2d0c7ec607bb6e0f45bbd10001b41863fddbd0104"}
Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.645980 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bdqgm"
Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.651319 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hqmrz" event={"ID":"82a78e00-0795-4de5-8062-f92878ea6c72","Type":"ContainerStarted","Data":"386d19fdfe8599097e21053e93ba444b08aeac85e47382e38bd04fc8a2ef8631"}
Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.651818 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-4z5px"
Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.651844 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hqmrz" event={"ID":"82a78e00-0795-4de5-8062-f92878ea6c72","Type":"ContainerStarted","Data":"dd54a7179e4cecaf85d38216c2e0c4ca09f08ad932684d62355ee9a94ae595c6"}
Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.654550 4789 generic.go:334] "Generic (PLEG): container finished" podID="62b2eeb5-6380-43c4-9c2e-e7aa29c88057" containerID="cd9f16d99520fae524ccbf47dab27fa31e42aa064c131772a1296f1881086515" exitCode=0
Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.654626 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gv5lf" event={"ID":"62b2eeb5-6380-43c4-9c2e-e7aa29c88057","Type":"ContainerDied","Data":"cd9f16d99520fae524ccbf47dab27fa31e42aa064c131772a1296f1881086515"}
Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.654652 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gv5lf" event={"ID":"62b2eeb5-6380-43c4-9c2e-e7aa29c88057","Type":"ContainerStarted","Data":"ea963842947e6e6b4d497718e3b02c422c80bd6d0678362f05ae24b412e3161b"}
Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.655478 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bdqgm"
Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.662055 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lbmnl"]
Feb 02 21:22:02 crc kubenswrapper[4789]: W0202 21:22:02.669846 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c0c6217_0e72_4682_8417_f6f6b2809bfa.slice/crio-f86299d1ab04c257c84d565a779c37682f6cd3bbeb7856c30647af78c0e2dcf9 WatchSource:0}: Error finding container f86299d1ab04c257c84d565a779c37682f6cd3bbeb7856c30647af78c0e2dcf9: Status 404 returned error can't find the container with id f86299d1ab04c257c84d565a779c37682f6cd3bbeb7856c30647af78c0e2dcf9
Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.679874 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84vfz"
Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.719271 4789 csr.go:261] certificate signing request csr-drspn is approved, waiting to be issued
Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.729462 4789 csr.go:257] certificate signing request csr-drspn is issued
Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.742824 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:02 crc kubenswrapper[4789]: E0202 21:22:02.742936 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:03.242912655 +0000 UTC m=+143.537937674 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.742986 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b2ad8f41-6fd7-4cc5-b8a1-38a886817c6c-registration-dir\") pod \"csi-hostpathplugin-dzvhk\" (UID: \"b2ad8f41-6fd7-4cc5-b8a1-38a886817c6c\") " pod="hostpath-provisioner/csi-hostpathplugin-dzvhk"
Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743017 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ee2bc38-213d-4181-8e23-0f579b87c986-config-volume\") pod \"collect-profiles-29501115-zcgzz\" (UID: \"9ee2bc38-213d-4181-8e23-0f579b87c986\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501115-zcgzz"
Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743034 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d22cr\" (UniqueName: \"kubernetes.io/projected/81508e9e-bf9e-4d3e-b505-c9ca5ae81d79-kube-api-access-d22cr\") pod \"cluster-samples-operator-665b6dd947-6df8n\" (UID: \"81508e9e-bf9e-4d3e-b505-c9ca5ae81d79\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6df8n"
Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743059 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a9b60922-75eb-4c97-85d5-12c146fe6cb1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
\"kubernetes.io/secret/a9b60922-75eb-4c97-85d5-12c146fe6cb1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743084 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b2ad8f41-6fd7-4cc5-b8a1-38a886817c6c-socket-dir\") pod \"csi-hostpathplugin-dzvhk\" (UID: \"b2ad8f41-6fd7-4cc5-b8a1-38a886817c6c\") " pod="hostpath-provisioner/csi-hostpathplugin-dzvhk" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743105 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/62652ba8-968d-4e22-8e4a-00497c30cacc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-zn96z\" (UID: \"62652ba8-968d-4e22-8e4a-00497c30cacc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zn96z" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743144 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a9b60922-75eb-4c97-85d5-12c146fe6cb1-bound-sa-token\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743166 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-trusted-ca-bundle\") pod \"console-f9d7485db-x568j\" (UID: \"a7cb5e21-a1f6-4e35-bf0d-e709b16d6994\") " pod="openshift-console/console-f9d7485db-x568j" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743178 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-oauth-serving-cert\") pod \"console-f9d7485db-x568j\" (UID: \"a7cb5e21-a1f6-4e35-bf0d-e709b16d6994\") " pod="openshift-console/console-f9d7485db-x568j" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743194 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/15911fdb-68b4-453a-a196-d4806f11ab2f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zjp5b\" (UID: \"15911fdb-68b4-453a-a196-d4806f11ab2f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zjp5b" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743212 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2ab2e98f-4cb6-47c6-acbf-b2b58621c78f-images\") pod \"machine-config-operator-74547568cd-85qfd\" (UID: \"2ab2e98f-4cb6-47c6-acbf-b2b58621c78f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85qfd" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743237 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvwqc\" (UniqueName: \"kubernetes.io/projected/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-kube-api-access-zvwqc\") pod 
\"console-f9d7485db-x568j\" (UID: \"a7cb5e21-a1f6-4e35-bf0d-e709b16d6994\") " pod="openshift-console/console-f9d7485db-x568j" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743252 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r98wx\" (UniqueName: \"kubernetes.io/projected/5be28070-1b99-4b27-8777-7f7935ba0b6e-kube-api-access-r98wx\") pod \"console-operator-58897d9998-8787r\" (UID: \"5be28070-1b99-4b27-8777-7f7935ba0b6e\") " pod="openshift-console-operator/console-operator-58897d9998-8787r" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743267 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhk9q\" (UniqueName: \"kubernetes.io/projected/0d7fbad6-8c3b-4d05-8ca4-ef9f3f39ec4f-kube-api-access-fhk9q\") pod \"openshift-config-operator-7777fb866f-25rjx\" (UID: \"0d7fbad6-8c3b-4d05-8ca4-ef9f3f39ec4f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-25rjx" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743282 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9743bb9-e748-40c7-a15d-c33fad88c2f2-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6t7gf\" (UID: \"d9743bb9-e748-40c7-a15d-c33fad88c2f2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6t7gf" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743303 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743320 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltbc2\" (UniqueName: \"kubernetes.io/projected/b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0-kube-api-access-ltbc2\") pod \"router-default-5444994796-jwp46\" (UID: \"b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0\") " pod="openshift-ingress/router-default-5444994796-jwp46" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743352 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-console-oauth-config\") pod \"console-f9d7485db-x568j\" (UID: \"a7cb5e21-a1f6-4e35-bf0d-e709b16d6994\") " pod="openshift-console/console-f9d7485db-x568j" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743383 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0-metrics-certs\") pod \"router-default-5444994796-jwp46\" (UID: \"b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0\") " pod="openshift-ingress/router-default-5444994796-jwp46" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743399 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8044779f-6644-4f6b-8265-2014af5cc045-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-25gsr\" (UID: \"8044779f-6644-4f6b-8265-2014af5cc045\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25gsr" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743413 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-service-ca\") pod \"console-f9d7485db-x568j\" (UID: \"a7cb5e21-a1f6-4e35-bf0d-e709b16d6994\") " pod="openshift-console/console-f9d7485db-x568j" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743431 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5be28070-1b99-4b27-8777-7f7935ba0b6e-trusted-ca\") pod \"console-operator-58897d9998-8787r\" (UID: \"5be28070-1b99-4b27-8777-7f7935ba0b6e\") " pod="openshift-console-operator/console-operator-58897d9998-8787r" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743448 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c931683a-9657-49c3-87e4-f76d8f2bf95a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-nqts5\" (UID: \"c931683a-9657-49c3-87e4-f76d8f2bf95a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nqts5" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743498 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b59cb33-d5dc-4e90-b6fe-fe3ad948c346-apiservice-cert\") pod \"packageserver-d55dfcdfc-hc84b\" (UID: \"7b59cb33-d5dc-4e90-b6fe-fe3ad948c346\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hc84b" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743522 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khdgh\" (UniqueName: \"kubernetes.io/projected/f7efbc68-70b1-4521-9be4-e67317fe757e-kube-api-access-khdgh\") pod \"service-ca-operator-777779d784-bp7g7\" (UID: \"f7efbc68-70b1-4521-9be4-e67317fe757e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bp7g7" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743537 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9af55cc7-0e27-43ce-8db1-ce73a35d361e-config-volume\") pod \"dns-default-jf9jd\" (UID: \"9af55cc7-0e27-43ce-8db1-ce73a35d361e\") " pod="openshift-dns/dns-default-jf9jd" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743551 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ee2bc38-213d-4181-8e23-0f579b87c986-secret-volume\") pod \"collect-profiles-29501115-zcgzz\" (UID: \"9ee2bc38-213d-4181-8e23-0f579b87c986\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501115-zcgzz" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743573 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b59cb33-d5dc-4e90-b6fe-fe3ad948c346-webhook-cert\") pod \"packageserver-d55dfcdfc-hc84b\" (UID: \"7b59cb33-d5dc-4e90-b6fe-fe3ad948c346\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hc84b" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743626 4789 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7efbc68-70b1-4521-9be4-e67317fe757e-config\") pod \"service-ca-operator-777779d784-bp7g7\" (UID: \"f7efbc68-70b1-4521-9be4-e67317fe757e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bp7g7" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743641 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7efbc68-70b1-4521-9be4-e67317fe757e-serving-cert\") pod \"service-ca-operator-777779d784-bp7g7\" (UID: \"f7efbc68-70b1-4521-9be4-e67317fe757e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bp7g7" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743656 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f06e3896-8ef9-4988-974f-446fb0bb3faf-signing-key\") pod \"service-ca-9c57cc56f-w594l\" (UID: \"f06e3896-8ef9-4988-974f-446fb0bb3faf\") " pod="openshift-service-ca/service-ca-9c57cc56f-w594l" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743671 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d7fbad6-8c3b-4d05-8ca4-ef9f3f39ec4f-serving-cert\") pod \"openshift-config-operator-7777fb866f-25rjx\" (UID: \"0d7fbad6-8c3b-4d05-8ca4-ef9f3f39ec4f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-25rjx" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743690 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/81508e9e-bf9e-4d3e-b505-c9ca5ae81d79-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6df8n\" (UID: \"81508e9e-bf9e-4d3e-b505-c9ca5ae81d79\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6df8n" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743706 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/925e8b6f-3848-4ab5-ab00-55405db2334c-metrics-tls\") pod \"ingress-operator-5b745b69d9-nsqw4\" (UID: \"925e8b6f-3848-4ab5-ab00-55405db2334c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nsqw4" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743721 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0-service-ca-bundle\") pod \"router-default-5444994796-jwp46\" (UID: \"b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0\") " pod="openshift-ingress/router-default-5444994796-jwp46" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743736 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b2ad8f41-6fd7-4cc5-b8a1-38a886817c6c-plugins-dir\") pod \"csi-hostpathplugin-dzvhk\" (UID: \"b2ad8f41-6fd7-4cc5-b8a1-38a886817c6c\") " pod="hostpath-provisioner/csi-hostpathplugin-dzvhk" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743759 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b523ddb6-b299-4bd7-9a33-75c025fb1805-serving-cert\") pod \"authentication-operator-69f744f599-4lbfr\" (UID: \"b523ddb6-b299-4bd7-9a33-75c025fb1805\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4lbfr" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743775 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8044779f-6644-4f6b-8265-2014af5cc045-proxy-tls\") pod \"machine-config-controller-84d6567774-25gsr\" (UID: \"8044779f-6644-4f6b-8265-2014af5cc045\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25gsr" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743791 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kg4r\" (UniqueName: \"kubernetes.io/projected/f06e3896-8ef9-4988-974f-446fb0bb3faf-kube-api-access-8kg4r\") pod \"service-ca-9c57cc56f-w594l\" (UID: \"f06e3896-8ef9-4988-974f-446fb0bb3faf\") " pod="openshift-service-ca/service-ca-9c57cc56f-w594l" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743807 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7259afcb-55a9-490c-a823-94e217d939e0-certs\") pod \"machine-config-server-cfghm\" (UID: \"7259afcb-55a9-490c-a823-94e217d939e0\") " pod="openshift-machine-config-operator/machine-config-server-cfghm" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743824 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ad609ed-908e-45b1-90e9-0068a4c1d700-config\") pod \"kube-apiserver-operator-766d6c64bb-lssjz\" (UID: \"5ad609ed-908e-45b1-90e9-0068a4c1d700\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lssjz" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743865 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2npsj\" (UniqueName: \"kubernetes.io/projected/b2ad8f41-6fd7-4cc5-b8a1-38a886817c6c-kube-api-access-2npsj\") pod \"csi-hostpathplugin-dzvhk\" (UID: \"b2ad8f41-6fd7-4cc5-b8a1-38a886817c6c\") " pod="hostpath-provisioner/csi-hostpathplugin-dzvhk" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743890 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs4jl\" (UniqueName: \"kubernetes.io/projected/c9720cea-0f21-43a7-91b1-31c95167f4a4-kube-api-access-zs4jl\") pod \"migrator-59844c95c7-26j4t\" (UID: \"c9720cea-0f21-43a7-91b1-31c95167f4a4\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-26j4t" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743909 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/74de48da-c435-4a4a-8042-c8fc935059b7-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-c9zn5\" (UID: \"74de48da-c435-4a4a-8042-c8fc935059b7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c9zn5" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743954 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-757zt\" (UniqueName: 
\"kubernetes.io/projected/7259afcb-55a9-490c-a823-94e217d939e0-kube-api-access-757zt\") pod \"machine-config-server-cfghm\" (UID: \"7259afcb-55a9-490c-a823-94e217d939e0\") " pod="openshift-machine-config-operator/machine-config-server-cfghm" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743969 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv4t6\" (UniqueName: \"kubernetes.io/projected/74de48da-c435-4a4a-8042-c8fc935059b7-kube-api-access-tv4t6\") pod \"package-server-manager-789f6589d5-c9zn5\" (UID: \"74de48da-c435-4a4a-8042-c8fc935059b7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c9zn5" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.743986 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97bfj\" (UniqueName: \"kubernetes.io/projected/62652ba8-968d-4e22-8e4a-00497c30cacc-kube-api-access-97bfj\") pod \"control-plane-machine-set-operator-78cbb6b69f-zn96z\" (UID: \"62652ba8-968d-4e22-8e4a-00497c30cacc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zn96z" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.744003 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/925e8b6f-3848-4ab5-ab00-55405db2334c-bound-sa-token\") pod \"ingress-operator-5b745b69d9-nsqw4\" (UID: \"925e8b6f-3848-4ab5-ab00-55405db2334c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nsqw4" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.744019 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9743bb9-e748-40c7-a15d-c33fad88c2f2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6t7gf\" (UID: \"d9743bb9-e748-40c7-a15d-c33fad88c2f2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6t7gf" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.744036 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hspk\" (UniqueName: \"kubernetes.io/projected/0ab37823-7471-4bc8-b5e5-28110c1e5f4e-kube-api-access-8hspk\") pod \"ingress-canary-rlwl7\" (UID: \"0ab37823-7471-4bc8-b5e5-28110c1e5f4e\") " pod="openshift-ingress-canary/ingress-canary-rlwl7" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.744054 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb2t8\" (UniqueName: \"kubernetes.io/projected/ff15c8fe-e1d6-4adb-85e3-decb591896c2-kube-api-access-nb2t8\") pod \"multus-admission-controller-857f4d67dd-6n95r\" (UID: \"ff15c8fe-e1d6-4adb-85e3-decb591896c2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6n95r" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.744082 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4eaafb5-bf66-460f-86df-9b3825837d05-config\") pod \"openshift-apiserver-operator-796bbdcf4f-fv5wb\" (UID: \"b4eaafb5-bf66-460f-86df-9b3825837d05\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fv5wb" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.744105 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9af55cc7-0e27-43ce-8db1-ce73a35d361e-metrics-tls\") pod \"dns-default-jf9jd\" (UID: \"9af55cc7-0e27-43ce-8db1-ce73a35d361e\") " pod="openshift-dns/dns-default-jf9jd" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.744139 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ff15c8fe-e1d6-4adb-85e3-decb591896c2-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6n95r\" (UID: \"ff15c8fe-e1d6-4adb-85e3-decb591896c2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6n95r" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.744166 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5be28070-1b99-4b27-8777-7f7935ba0b6e-serving-cert\") pod \"console-operator-58897d9998-8787r\" (UID: \"5be28070-1b99-4b27-8777-7f7935ba0b6e\") " pod="openshift-console-operator/console-operator-58897d9998-8787r" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.744192 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r67r7\" (UniqueName: \"kubernetes.io/projected/b523ddb6-b299-4bd7-9a33-75c025fb1805-kube-api-access-r67r7\") pod \"authentication-operator-69f744f599-4lbfr\" (UID: \"b523ddb6-b299-4bd7-9a33-75c025fb1805\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4lbfr" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.744208 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0-stats-auth\") pod \"router-default-5444994796-jwp46\" (UID: \"b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0\") " pod="openshift-ingress/router-default-5444994796-jwp46" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.744263 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b523ddb6-b299-4bd7-9a33-75c025fb1805-config\") pod \"authentication-operator-69f744f599-4lbfr\" (UID: \"b523ddb6-b299-4bd7-9a33-75c025fb1805\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4lbfr" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.744278 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0-default-certificate\") pod \"router-default-5444994796-jwp46\" (UID: \"b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0\") " pod="openshift-ingress/router-default-5444994796-jwp46" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.744295 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15911fdb-68b4-453a-a196-d4806f11ab2f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zjp5b\" (UID: \"15911fdb-68b4-453a-a196-d4806f11ab2f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zjp5b" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.744311 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4da03a7e-6764-44c5-bdca-52dc85f316fc-etcd-service-ca\") pod \"etcd-operator-b45778765-wgzrz\" (UID: 
\"4da03a7e-6764-44c5-bdca-52dc85f316fc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wgzrz" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.745322 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-service-ca\") pod \"console-f9d7485db-x568j\" (UID: \"a7cb5e21-a1f6-4e35-bf0d-e709b16d6994\") " pod="openshift-console/console-f9d7485db-x568j" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.745549 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-oauth-serving-cert\") pod \"console-f9d7485db-x568j\" (UID: \"a7cb5e21-a1f6-4e35-bf0d-e709b16d6994\") " pod="openshift-console/console-f9d7485db-x568j" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.746740 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f06e3896-8ef9-4988-974f-446fb0bb3faf-signing-cabundle\") pod \"service-ca-9c57cc56f-w594l\" (UID: \"f06e3896-8ef9-4988-974f-446fb0bb3faf\") " pod="openshift-service-ca/service-ca-9c57cc56f-w594l" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.746784 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brj2x\" (UniqueName: \"kubernetes.io/projected/a9b60922-75eb-4c97-85d5-12c146fe6cb1-kube-api-access-brj2x\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.746809 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5be28070-1b99-4b27-8777-7f7935ba0b6e-config\") pod \"console-operator-58897d9998-8787r\" (UID: \"5be28070-1b99-4b27-8777-7f7935ba0b6e\") " pod="openshift-console-operator/console-operator-58897d9998-8787r" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.746858 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2ab2e98f-4cb6-47c6-acbf-b2b58621c78f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-85qfd\" (UID: \"2ab2e98f-4cb6-47c6-acbf-b2b58621c78f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85qfd" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.746896 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b523ddb6-b299-4bd7-9a33-75c025fb1805-service-ca-bundle\") pod \"authentication-operator-69f744f599-4lbfr\" (UID: \"b523ddb6-b299-4bd7-9a33-75c025fb1805\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4lbfr" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.746898 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9743bb9-e748-40c7-a15d-c33fad88c2f2-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6t7gf\" (UID: \"d9743bb9-e748-40c7-a15d-c33fad88c2f2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6t7gf" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.746969 4789 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4n6g\" (UniqueName: \"kubernetes.io/projected/9af55cc7-0e27-43ce-8db1-ce73a35d361e-kube-api-access-p4n6g\") pod \"dns-default-jf9jd\" (UID: \"9af55cc7-0e27-43ce-8db1-ce73a35d361e\") " pod="openshift-dns/dns-default-jf9jd" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.747018 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gckp\" (UniqueName: \"kubernetes.io/projected/4da03a7e-6764-44c5-bdca-52dc85f316fc-kube-api-access-8gckp\") pod \"etcd-operator-b45778765-wgzrz\" (UID: \"4da03a7e-6764-44c5-bdca-52dc85f316fc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wgzrz" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.747037 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7259afcb-55a9-490c-a823-94e217d939e0-node-bootstrap-token\") pod \"machine-config-server-cfghm\" (UID: \"7259afcb-55a9-490c-a823-94e217d939e0\") " pod="openshift-machine-config-operator/machine-config-server-cfghm" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.747053 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6559dcc4-e08f-4c1b-89b4-164673cd2ed0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7sn8m\" (UID: \"6559dcc4-e08f-4c1b-89b4-164673cd2ed0\") " pod="openshift-marketplace/marketplace-operator-79b997595-7sn8m" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.747138 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5krx\" (UniqueName: \"kubernetes.io/projected/8044779f-6644-4f6b-8265-2014af5cc045-kube-api-access-s5krx\") pod \"machine-config-controller-84d6567774-25gsr\" (UID: \"8044779f-6644-4f6b-8265-2014af5cc045\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25gsr" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.747185 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b2ad8f41-6fd7-4cc5-b8a1-38a886817c6c-mountpoint-dir\") pod \"csi-hostpathplugin-dzvhk\" (UID: \"b2ad8f41-6fd7-4cc5-b8a1-38a886817c6c\") " pod="hostpath-provisioner/csi-hostpathplugin-dzvhk" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.747206 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a9b60922-75eb-4c97-85d5-12c146fe6cb1-trusted-ca\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.748033 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-trusted-ca-bundle\") pod \"console-f9d7485db-x568j\" (UID: \"a7cb5e21-a1f6-4e35-bf0d-e709b16d6994\") " pod="openshift-console/console-f9d7485db-x568j" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.748429 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b523ddb6-b299-4bd7-9a33-75c025fb1805-service-ca-bundle\") pod \"authentication-operator-69f744f599-4lbfr\" (UID: \"b523ddb6-b299-4bd7-9a33-75c025fb1805\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4lbfr" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.750283 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5be28070-1b99-4b27-8777-7f7935ba0b6e-config\") pod \"console-operator-58897d9998-8787r\" (UID: \"5be28070-1b99-4b27-8777-7f7935ba0b6e\") " pod="openshift-console-operator/console-operator-58897d9998-8787r" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.751808 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a9b60922-75eb-4c97-85d5-12c146fe6cb1-registry-tls\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.751840 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dmk5\" (UniqueName: \"kubernetes.io/projected/c931683a-9657-49c3-87e4-f76d8f2bf95a-kube-api-access-8dmk5\") pod \"kube-storage-version-migrator-operator-b67b599dd-nqts5\" (UID: \"c931683a-9657-49c3-87e4-f76d8f2bf95a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nqts5" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.751861 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqm2q\" (UniqueName: \"kubernetes.io/projected/6559dcc4-e08f-4c1b-89b4-164673cd2ed0-kube-api-access-nqm2q\") pod \"marketplace-operator-79b997595-7sn8m\" (UID: \"6559dcc4-e08f-4c1b-89b4-164673cd2ed0\") " pod="openshift-marketplace/marketplace-operator-79b997595-7sn8m" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.751961 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5be28070-1b99-4b27-8777-7f7935ba0b6e-trusted-ca\") pod \"console-operator-58897d9998-8787r\" (UID: \"5be28070-1b99-4b27-8777-7f7935ba0b6e\") " pod="openshift-console-operator/console-operator-58897d9998-8787r" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.751964 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf2s6\" (UniqueName: \"kubernetes.io/projected/925e8b6f-3848-4ab5-ab00-55405db2334c-kube-api-access-hf2s6\") pod \"ingress-operator-5b745b69d9-nsqw4\" (UID: \"925e8b6f-3848-4ab5-ab00-55405db2334c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nsqw4" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.752027 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4eaafb5-bf66-460f-86df-9b3825837d05-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-fv5wb\" (UID: \"b4eaafb5-bf66-460f-86df-9b3825837d05\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fv5wb" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.752051 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/5ad609ed-908e-45b1-90e9-0068a4c1d700-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-lssjz\" (UID: \"5ad609ed-908e-45b1-90e9-0068a4c1d700\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lssjz" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.752087 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4da03a7e-6764-44c5-bdca-52dc85f316fc-serving-cert\") pod \"etcd-operator-b45778765-wgzrz\" (UID: \"4da03a7e-6764-44c5-bdca-52dc85f316fc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wgzrz" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.752174 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b2ad8f41-6fd7-4cc5-b8a1-38a886817c6c-csi-data-dir\") pod \"csi-hostpathplugin-dzvhk\" (UID: \"b2ad8f41-6fd7-4cc5-b8a1-38a886817c6c\") " pod="hostpath-provisioner/csi-hostpathplugin-dzvhk" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.752203 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4eaafb5-bf66-460f-86df-9b3825837d05-config\") pod \"openshift-apiserver-operator-796bbdcf4f-fv5wb\" (UID: \"b4eaafb5-bf66-460f-86df-9b3825837d05\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fv5wb" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.752211 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a9b60922-75eb-4c97-85d5-12c146fe6cb1-registry-certificates\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.752267 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7b59cb33-d5dc-4e90-b6fe-fe3ad948c346-tmpfs\") pod \"packageserver-d55dfcdfc-hc84b\" (UID: \"7b59cb33-d5dc-4e90-b6fe-fe3ad948c346\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hc84b" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.752295 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4da03a7e-6764-44c5-bdca-52dc85f316fc-etcd-client\") pod \"etcd-operator-b45778765-wgzrz\" (UID: \"4da03a7e-6764-44c5-bdca-52dc85f316fc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wgzrz" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.752779 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-console-serving-cert\") pod \"console-f9d7485db-x568j\" (UID: \"a7cb5e21-a1f6-4e35-bf0d-e709b16d6994\") " pod="openshift-console/console-f9d7485db-x568j" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.752805 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgtml\" (UniqueName: \"kubernetes.io/projected/b4eaafb5-bf66-460f-86df-9b3825837d05-kube-api-access-hgtml\") pod \"openshift-apiserver-operator-796bbdcf4f-fv5wb\" (UID: \"b4eaafb5-bf66-460f-86df-9b3825837d05\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fv5wb" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.752943 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9rzg\" (UniqueName: \"kubernetes.io/projected/2ab2e98f-4cb6-47c6-acbf-b2b58621c78f-kube-api-access-v9rzg\") pod \"machine-config-operator-74547568cd-85qfd\" (UID: \"2ab2e98f-4cb6-47c6-acbf-b2b58621c78f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85qfd" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.753034 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b523ddb6-b299-4bd7-9a33-75c025fb1805-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-4lbfr\" (UID: \"b523ddb6-b299-4bd7-9a33-75c025fb1805\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4lbfr" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.753057 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4da03a7e-6764-44c5-bdca-52dc85f316fc-etcd-ca\") pod \"etcd-operator-b45778765-wgzrz\" (UID: \"4da03a7e-6764-44c5-bdca-52dc85f316fc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wgzrz" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.753094 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlmvb\" (UniqueName: \"kubernetes.io/projected/d9743bb9-e748-40c7-a15d-c33fad88c2f2-kube-api-access-tlmvb\") pod \"openshift-controller-manager-operator-756b6f6bc6-6t7gf\" (UID: \"d9743bb9-e748-40c7-a15d-c33fad88c2f2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6t7gf" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.753152 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2ab2e98f-4cb6-47c6-acbf-b2b58621c78f-proxy-tls\") pod \"machine-config-operator-74547568cd-85qfd\" (UID: \"2ab2e98f-4cb6-47c6-acbf-b2b58621c78f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85qfd" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.753172 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vmsd\" (UniqueName: \"kubernetes.io/projected/9ee2bc38-213d-4181-8e23-0f579b87c986-kube-api-access-8vmsd\") pod \"collect-profiles-29501115-zcgzz\" (UID: \"9ee2bc38-213d-4181-8e23-0f579b87c986\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501115-zcgzz" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.753191 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6559dcc4-e08f-4c1b-89b4-164673cd2ed0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7sn8m\" (UID: \"6559dcc4-e08f-4c1b-89b4-164673cd2ed0\") " pod="openshift-marketplace/marketplace-operator-79b997595-7sn8m" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.753248 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a9b60922-75eb-4c97-85d5-12c146fe6cb1-registry-certificates\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: 
\"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.753356 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15911fdb-68b4-453a-a196-d4806f11ab2f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zjp5b\" (UID: \"15911fdb-68b4-453a-a196-d4806f11ab2f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zjp5b" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.753377 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/925e8b6f-3848-4ab5-ab00-55405db2334c-trusted-ca\") pod \"ingress-operator-5b745b69d9-nsqw4\" (UID: \"925e8b6f-3848-4ab5-ab00-55405db2334c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nsqw4" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.753396 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c931683a-9657-49c3-87e4-f76d8f2bf95a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-nqts5\" (UID: \"c931683a-9657-49c3-87e4-f76d8f2bf95a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nqts5" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.753412 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4da03a7e-6764-44c5-bdca-52dc85f316fc-config\") pod \"etcd-operator-b45778765-wgzrz\" (UID: \"4da03a7e-6764-44c5-bdca-52dc85f316fc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wgzrz" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.753452 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-console-config\") pod \"console-f9d7485db-x568j\" (UID: \"a7cb5e21-a1f6-4e35-bf0d-e709b16d6994\") " pod="openshift-console/console-f9d7485db-x568j" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.753471 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0ab37823-7471-4bc8-b5e5-28110c1e5f4e-cert\") pod \"ingress-canary-rlwl7\" (UID: \"0ab37823-7471-4bc8-b5e5-28110c1e5f4e\") " pod="openshift-ingress-canary/ingress-canary-rlwl7" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.753507 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvtwk\" (UniqueName: \"kubernetes.io/projected/7b59cb33-d5dc-4e90-b6fe-fe3ad948c346-kube-api-access-kvtwk\") pod \"packageserver-d55dfcdfc-hc84b\" (UID: \"7b59cb33-d5dc-4e90-b6fe-fe3ad948c346\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hc84b" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.753544 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0d7fbad6-8c3b-4d05-8ca4-ef9f3f39ec4f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-25rjx\" (UID: \"0d7fbad6-8c3b-4d05-8ca4-ef9f3f39ec4f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-25rjx" Feb 02 21:22:02 crc 
kubenswrapper[4789]: I0202 21:22:02.753615 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a9b60922-75eb-4c97-85d5-12c146fe6cb1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.753635 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ad609ed-908e-45b1-90e9-0068a4c1d700-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-lssjz\" (UID: \"5ad609ed-908e-45b1-90e9-0068a4c1d700\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lssjz" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.753785 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0-default-certificate\") pod \"router-default-5444994796-jwp46\" (UID: \"b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0\") " pod="openshift-ingress/router-default-5444994796-jwp46" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.753875 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0-service-ca-bundle\") pod \"router-default-5444994796-jwp46\" (UID: \"b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0\") " pod="openshift-ingress/router-default-5444994796-jwp46" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.755706 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0d7fbad6-8c3b-4d05-8ca4-ef9f3f39ec4f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-25rjx\" (UID: \"0d7fbad6-8c3b-4d05-8ca4-ef9f3f39ec4f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-25rjx" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.758059 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-console-config\") pod \"console-f9d7485db-x568j\" (UID: \"a7cb5e21-a1f6-4e35-bf0d-e709b16d6994\") " pod="openshift-console/console-f9d7485db-x568j" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.758385 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a9b60922-75eb-4c97-85d5-12c146fe6cb1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.758819 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hcxsv" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.759219 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d7fbad6-8c3b-4d05-8ca4-ef9f3f39ec4f-serving-cert\") pod \"openshift-config-operator-7777fb866f-25rjx\" (UID: \"0d7fbad6-8c3b-4d05-8ca4-ef9f3f39ec4f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-25rjx" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.760344 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-console-serving-cert\") pod \"console-f9d7485db-x568j\" (UID: \"a7cb5e21-a1f6-4e35-bf0d-e709b16d6994\") " pod="openshift-console/console-f9d7485db-x568j" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.760942 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b523ddb6-b299-4bd7-9a33-75c025fb1805-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-4lbfr\" (UID: \"b523ddb6-b299-4bd7-9a33-75c025fb1805\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4lbfr" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.761233 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b523ddb6-b299-4bd7-9a33-75c025fb1805-config\") pod \"authentication-operator-69f744f599-4lbfr\" (UID: \"b523ddb6-b299-4bd7-9a33-75c025fb1805\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4lbfr" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.761957 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0-metrics-certs\") pod \"router-default-5444994796-jwp46\" (UID: \"b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0\") " pod="openshift-ingress/router-default-5444994796-jwp46" Feb 02 21:22:02 crc kubenswrapper[4789]: E0202 21:22:02.762948 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:03.260536023 +0000 UTC m=+143.555561042 (durationBeforeRetry 500ms). 
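The pattern above recurs throughout this startup window: volume operations against pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 fail because kubevirt.io.hostpath-provisioner is not yet in the kubelet's list of registered CSI drivers (the csi-hostpathplugin-dzvhk pod that would register it over the registration-dir socket is itself still being mounted above), and each failure parks the operation behind a durationBeforeRetry that starts at 500ms. The Go sketch below illustrates that fail-then-back-off pattern under stated assumptions; it is not the kubelet's actual implementation, and driverRegistry, volumeOp, the doubling growth factor, and the 2-minute cap are illustrative choices only.

package main

import (
	"fmt"
	"time"
)

// driverRegistry stands in for the kubelet's set of registered CSI drivers,
// which real code populates as each driver announces itself via a socket in
// the registration-dir seen mounted for csi-hostpathplugin-dzvhk above.
// (Illustrative: the real registry is not a bare map.)
var driverRegistry = map[string]bool{}

// volumeOp tracks the retry state for one pending volume operation.
type volumeOp struct {
	volumeName          string
	durationBeforeRetry time.Duration
	notBefore           time.Time
}

const (
	initialBackoff = 500 * time.Millisecond // matches "durationBeforeRetry 500ms" in the log
	maxBackoff     = 2 * time.Minute        // assumed cap; the real constant may differ
)

// mountDevice fails fast when the driver is unknown, mirroring the error
// "driver name ... not found in the list of registered CSI drivers".
func mountDevice(driver string) error {
	if !driverRegistry[driver] {
		return fmt.Errorf("driver name %s not found in the list of registered CSI drivers", driver)
	}
	return nil
}

// recordFailure grows the wait before the next retry, doubling from 500ms
// and capping at maxBackoff.
func (op *volumeOp) recordFailure(now time.Time) {
	switch {
	case op.durationBeforeRetry == 0:
		op.durationBeforeRetry = initialBackoff
	case op.durationBeforeRetry*2 > maxBackoff:
		op.durationBeforeRetry = maxBackoff
	default:
		op.durationBeforeRetry *= 2
	}
	op.notBefore = now.Add(op.durationBeforeRetry)
}

func main() {
	op := &volumeOp{volumeName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8"}
	if err := mountDevice("kubevirt.io.hostpath-provisioner"); err != nil {
		op.recordFailure(time.Now())
		fmt.Printf("Operation for %q failed. No retries permitted until %s (durationBeforeRetry %s). Error: %v\n",
			op.volumeName, op.notBefore.Format(time.RFC3339), op.durationBeforeRetry, err)
	}
}

Because registration has not happened yet at this timestamp, every code path that needs a CSI client for this driver fails with the same message, which is why the identical driver-not-found error appears here for both UnmountVolume.TearDown and MountVolume.MountDevice against the same PVC.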
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.763379 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4eaafb5-bf66-460f-86df-9b3825837d05-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-fv5wb\" (UID: \"b4eaafb5-bf66-460f-86df-9b3825837d05\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fv5wb" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.766786 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9743bb9-e748-40c7-a15d-c33fad88c2f2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6t7gf\" (UID: \"d9743bb9-e748-40c7-a15d-c33fad88c2f2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6t7gf" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.773782 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a9b60922-75eb-4c97-85d5-12c146fe6cb1-registry-tls\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.774510 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5be28070-1b99-4b27-8777-7f7935ba0b6e-serving-cert\") pod \"console-operator-58897d9998-8787r\" (UID: \"5be28070-1b99-4b27-8777-7f7935ba0b6e\") " pod="openshift-console-operator/console-operator-58897d9998-8787r" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.774808 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a9b60922-75eb-4c97-85d5-12c146fe6cb1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.775612 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/925e8b6f-3848-4ab5-ab00-55405db2334c-metrics-tls\") pod \"ingress-operator-5b745b69d9-nsqw4\" (UID: \"925e8b6f-3848-4ab5-ab00-55405db2334c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nsqw4" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.775635 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-console-oauth-config\") pod \"console-f9d7485db-x568j\" (UID: \"a7cb5e21-a1f6-4e35-bf0d-e709b16d6994\") " pod="openshift-console/console-f9d7485db-x568j" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.775764 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b523ddb6-b299-4bd7-9a33-75c025fb1805-serving-cert\") pod \"authentication-operator-69f744f599-4lbfr\" (UID: \"b523ddb6-b299-4bd7-9a33-75c025fb1805\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4lbfr" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.775782 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0-stats-auth\") pod \"router-default-5444994796-jwp46\" (UID: \"b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0\") " pod="openshift-ingress/router-default-5444994796-jwp46" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.789184 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a9b60922-75eb-4c97-85d5-12c146fe6cb1-trusted-ca\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.807351 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/925e8b6f-3848-4ab5-ab00-55405db2334c-trusted-ca\") pod \"ingress-operator-5b745b69d9-nsqw4\" (UID: \"925e8b6f-3848-4ab5-ab00-55405db2334c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nsqw4" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.808348 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvwqc\" (UniqueName: \"kubernetes.io/projected/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-kube-api-access-zvwqc\") pod \"console-f9d7485db-x568j\" (UID: \"a7cb5e21-a1f6-4e35-bf0d-e709b16d6994\") " pod="openshift-console/console-f9d7485db-x568j" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.808706 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7b59cb33-d5dc-4e90-b6fe-fe3ad948c346-tmpfs\") pod \"packageserver-d55dfcdfc-hc84b\" (UID: \"7b59cb33-d5dc-4e90-b6fe-fe3ad948c346\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hc84b" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.814303 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b59cb33-d5dc-4e90-b6fe-fe3ad948c346-webhook-cert\") pod \"packageserver-d55dfcdfc-hc84b\" (UID: \"7b59cb33-d5dc-4e90-b6fe-fe3ad948c346\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hc84b" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.815593 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b59cb33-d5dc-4e90-b6fe-fe3ad948c346-apiservice-cert\") pod \"packageserver-d55dfcdfc-hc84b\" (UID: \"7b59cb33-d5dc-4e90-b6fe-fe3ad948c346\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hc84b" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.820396 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhk9q\" (UniqueName: \"kubernetes.io/projected/0d7fbad6-8c3b-4d05-8ca4-ef9f3f39ec4f-kube-api-access-fhk9q\") pod \"openshift-config-operator-7777fb866f-25rjx\" (UID: \"0d7fbad6-8c3b-4d05-8ca4-ef9f3f39ec4f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-25rjx" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.836800 4789 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a9b60922-75eb-4c97-85d5-12c146fe6cb1-bound-sa-token\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.843244 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-pkpwd"] Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.854185 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.854366 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ad609ed-908e-45b1-90e9-0068a4c1d700-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-lssjz\" (UID: \"5ad609ed-908e-45b1-90e9-0068a4c1d700\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lssjz" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.854389 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b2ad8f41-6fd7-4cc5-b8a1-38a886817c6c-registration-dir\") pod \"csi-hostpathplugin-dzvhk\" (UID: \"b2ad8f41-6fd7-4cc5-b8a1-38a886817c6c\") " pod="hostpath-provisioner/csi-hostpathplugin-dzvhk" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.854404 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ee2bc38-213d-4181-8e23-0f579b87c986-config-volume\") pod \"collect-profiles-29501115-zcgzz\" (UID: \"9ee2bc38-213d-4181-8e23-0f579b87c986\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501115-zcgzz" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.854420 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d22cr\" (UniqueName: \"kubernetes.io/projected/81508e9e-bf9e-4d3e-b505-c9ca5ae81d79-kube-api-access-d22cr\") pod \"cluster-samples-operator-665b6dd947-6df8n\" (UID: \"81508e9e-bf9e-4d3e-b505-c9ca5ae81d79\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6df8n" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.854444 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b2ad8f41-6fd7-4cc5-b8a1-38a886817c6c-socket-dir\") pod \"csi-hostpathplugin-dzvhk\" (UID: \"b2ad8f41-6fd7-4cc5-b8a1-38a886817c6c\") " pod="hostpath-provisioner/csi-hostpathplugin-dzvhk" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.854460 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/62652ba8-968d-4e22-8e4a-00497c30cacc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-zn96z\" (UID: \"62652ba8-968d-4e22-8e4a-00497c30cacc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zn96z" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.854484 4789 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/15911fdb-68b4-453a-a196-d4806f11ab2f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zjp5b\" (UID: \"15911fdb-68b4-453a-a196-d4806f11ab2f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zjp5b" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.854500 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2ab2e98f-4cb6-47c6-acbf-b2b58621c78f-images\") pod \"machine-config-operator-74547568cd-85qfd\" (UID: \"2ab2e98f-4cb6-47c6-acbf-b2b58621c78f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85qfd" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.854534 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8044779f-6644-4f6b-8265-2014af5cc045-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-25gsr\" (UID: \"8044779f-6644-4f6b-8265-2014af5cc045\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25gsr" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.854549 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c931683a-9657-49c3-87e4-f76d8f2bf95a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-nqts5\" (UID: \"c931683a-9657-49c3-87e4-f76d8f2bf95a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nqts5" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.854586 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khdgh\" (UniqueName: \"kubernetes.io/projected/f7efbc68-70b1-4521-9be4-e67317fe757e-kube-api-access-khdgh\") pod \"service-ca-operator-777779d784-bp7g7\" (UID: \"f7efbc68-70b1-4521-9be4-e67317fe757e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bp7g7" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.854601 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9af55cc7-0e27-43ce-8db1-ce73a35d361e-config-volume\") pod \"dns-default-jf9jd\" (UID: \"9af55cc7-0e27-43ce-8db1-ce73a35d361e\") " pod="openshift-dns/dns-default-jf9jd" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.854615 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ee2bc38-213d-4181-8e23-0f579b87c986-secret-volume\") pod \"collect-profiles-29501115-zcgzz\" (UID: \"9ee2bc38-213d-4181-8e23-0f579b87c986\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501115-zcgzz" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.854631 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7efbc68-70b1-4521-9be4-e67317fe757e-config\") pod \"service-ca-operator-777779d784-bp7g7\" (UID: \"f7efbc68-70b1-4521-9be4-e67317fe757e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bp7g7" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.854645 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f7efbc68-70b1-4521-9be4-e67317fe757e-serving-cert\") pod \"service-ca-operator-777779d784-bp7g7\" (UID: \"f7efbc68-70b1-4521-9be4-e67317fe757e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bp7g7" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.854660 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f06e3896-8ef9-4988-974f-446fb0bb3faf-signing-key\") pod \"service-ca-9c57cc56f-w594l\" (UID: \"f06e3896-8ef9-4988-974f-446fb0bb3faf\") " pod="openshift-service-ca/service-ca-9c57cc56f-w594l" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.854674 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/81508e9e-bf9e-4d3e-b505-c9ca5ae81d79-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6df8n\" (UID: \"81508e9e-bf9e-4d3e-b505-c9ca5ae81d79\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6df8n" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.854689 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b2ad8f41-6fd7-4cc5-b8a1-38a886817c6c-plugins-dir\") pod \"csi-hostpathplugin-dzvhk\" (UID: \"b2ad8f41-6fd7-4cc5-b8a1-38a886817c6c\") " pod="hostpath-provisioner/csi-hostpathplugin-dzvhk" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.854704 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8044779f-6644-4f6b-8265-2014af5cc045-proxy-tls\") pod \"machine-config-controller-84d6567774-25gsr\" (UID: \"8044779f-6644-4f6b-8265-2014af5cc045\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25gsr" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.854722 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kg4r\" (UniqueName: \"kubernetes.io/projected/f06e3896-8ef9-4988-974f-446fb0bb3faf-kube-api-access-8kg4r\") pod \"service-ca-9c57cc56f-w594l\" (UID: \"f06e3896-8ef9-4988-974f-446fb0bb3faf\") " pod="openshift-service-ca/service-ca-9c57cc56f-w594l" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.854738 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7259afcb-55a9-490c-a823-94e217d939e0-certs\") pod \"machine-config-server-cfghm\" (UID: \"7259afcb-55a9-490c-a823-94e217d939e0\") " pod="openshift-machine-config-operator/machine-config-server-cfghm" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.854754 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ad609ed-908e-45b1-90e9-0068a4c1d700-config\") pod \"kube-apiserver-operator-766d6c64bb-lssjz\" (UID: \"5ad609ed-908e-45b1-90e9-0068a4c1d700\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lssjz" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.854770 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2npsj\" (UniqueName: \"kubernetes.io/projected/b2ad8f41-6fd7-4cc5-b8a1-38a886817c6c-kube-api-access-2npsj\") pod \"csi-hostpathplugin-dzvhk\" (UID: \"b2ad8f41-6fd7-4cc5-b8a1-38a886817c6c\") " pod="hostpath-provisioner/csi-hostpathplugin-dzvhk" Feb 02 21:22:02 crc 
kubenswrapper[4789]: I0202 21:22:02.854786 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs4jl\" (UniqueName: \"kubernetes.io/projected/c9720cea-0f21-43a7-91b1-31c95167f4a4-kube-api-access-zs4jl\") pod \"migrator-59844c95c7-26j4t\" (UID: \"c9720cea-0f21-43a7-91b1-31c95167f4a4\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-26j4t" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.854798 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b2ad8f41-6fd7-4cc5-b8a1-38a886817c6c-socket-dir\") pod \"csi-hostpathplugin-dzvhk\" (UID: \"b2ad8f41-6fd7-4cc5-b8a1-38a886817c6c\") " pod="hostpath-provisioner/csi-hostpathplugin-dzvhk" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.856796 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b2ad8f41-6fd7-4cc5-b8a1-38a886817c6c-registration-dir\") pod \"csi-hostpathplugin-dzvhk\" (UID: \"b2ad8f41-6fd7-4cc5-b8a1-38a886817c6c\") " pod="hostpath-provisioner/csi-hostpathplugin-dzvhk" Feb 02 21:22:02 crc kubenswrapper[4789]: E0202 21:22:02.856928 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:03.356905468 +0000 UTC m=+143.651930537 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.857285 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2ab2e98f-4cb6-47c6-acbf-b2b58621c78f-images\") pod \"machine-config-operator-74547568cd-85qfd\" (UID: \"2ab2e98f-4cb6-47c6-acbf-b2b58621c78f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85qfd" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.857379 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c931683a-9657-49c3-87e4-f76d8f2bf95a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-nqts5\" (UID: \"c931683a-9657-49c3-87e4-f76d8f2bf95a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nqts5" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.857995 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8044779f-6644-4f6b-8265-2014af5cc045-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-25gsr\" (UID: \"8044779f-6644-4f6b-8265-2014af5cc045\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25gsr" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.858048 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/b2ad8f41-6fd7-4cc5-b8a1-38a886817c6c-plugins-dir\") pod \"csi-hostpathplugin-dzvhk\" (UID: \"b2ad8f41-6fd7-4cc5-b8a1-38a886817c6c\") " pod="hostpath-provisioner/csi-hostpathplugin-dzvhk" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.858313 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ee2bc38-213d-4181-8e23-0f579b87c986-config-volume\") pod \"collect-profiles-29501115-zcgzz\" (UID: \"9ee2bc38-213d-4181-8e23-0f579b87c986\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501115-zcgzz" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.854801 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/74de48da-c435-4a4a-8042-c8fc935059b7-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-c9zn5\" (UID: \"74de48da-c435-4a4a-8042-c8fc935059b7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c9zn5" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.859014 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9af55cc7-0e27-43ce-8db1-ce73a35d361e-config-volume\") pod \"dns-default-jf9jd\" (UID: \"9af55cc7-0e27-43ce-8db1-ce73a35d361e\") " pod="openshift-dns/dns-default-jf9jd" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.859073 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-757zt\" (UniqueName: \"kubernetes.io/projected/7259afcb-55a9-490c-a823-94e217d939e0-kube-api-access-757zt\") pod \"machine-config-server-cfghm\" (UID: \"7259afcb-55a9-490c-a823-94e217d939e0\") " pod="openshift-machine-config-operator/machine-config-server-cfghm" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.859101 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv4t6\" (UniqueName: \"kubernetes.io/projected/74de48da-c435-4a4a-8042-c8fc935059b7-kube-api-access-tv4t6\") pod \"package-server-manager-789f6589d5-c9zn5\" (UID: \"74de48da-c435-4a4a-8042-c8fc935059b7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c9zn5" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.859130 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97bfj\" (UniqueName: \"kubernetes.io/projected/62652ba8-968d-4e22-8e4a-00497c30cacc-kube-api-access-97bfj\") pod \"control-plane-machine-set-operator-78cbb6b69f-zn96z\" (UID: \"62652ba8-968d-4e22-8e4a-00497c30cacc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zn96z" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.859216 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hspk\" (UniqueName: \"kubernetes.io/projected/0ab37823-7471-4bc8-b5e5-28110c1e5f4e-kube-api-access-8hspk\") pod \"ingress-canary-rlwl7\" (UID: \"0ab37823-7471-4bc8-b5e5-28110c1e5f4e\") " pod="openshift-ingress-canary/ingress-canary-rlwl7" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.859243 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb2t8\" (UniqueName: \"kubernetes.io/projected/ff15c8fe-e1d6-4adb-85e3-decb591896c2-kube-api-access-nb2t8\") pod \"multus-admission-controller-857f4d67dd-6n95r\" (UID: 
\"ff15c8fe-e1d6-4adb-85e3-decb591896c2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6n95r" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.859282 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9af55cc7-0e27-43ce-8db1-ce73a35d361e-metrics-tls\") pod \"dns-default-jf9jd\" (UID: \"9af55cc7-0e27-43ce-8db1-ce73a35d361e\") " pod="openshift-dns/dns-default-jf9jd" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.859316 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ff15c8fe-e1d6-4adb-85e3-decb591896c2-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6n95r\" (UID: \"ff15c8fe-e1d6-4adb-85e3-decb591896c2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6n95r" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.859361 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15911fdb-68b4-453a-a196-d4806f11ab2f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zjp5b\" (UID: \"15911fdb-68b4-453a-a196-d4806f11ab2f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zjp5b" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.859385 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4da03a7e-6764-44c5-bdca-52dc85f316fc-etcd-service-ca\") pod \"etcd-operator-b45778765-wgzrz\" (UID: \"4da03a7e-6764-44c5-bdca-52dc85f316fc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wgzrz" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.859408 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f06e3896-8ef9-4988-974f-446fb0bb3faf-signing-cabundle\") pod \"service-ca-9c57cc56f-w594l\" (UID: \"f06e3896-8ef9-4988-974f-446fb0bb3faf\") " pod="openshift-service-ca/service-ca-9c57cc56f-w594l" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.859438 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2ab2e98f-4cb6-47c6-acbf-b2b58621c78f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-85qfd\" (UID: \"2ab2e98f-4cb6-47c6-acbf-b2b58621c78f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85qfd" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.859466 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4n6g\" (UniqueName: \"kubernetes.io/projected/9af55cc7-0e27-43ce-8db1-ce73a35d361e-kube-api-access-p4n6g\") pod \"dns-default-jf9jd\" (UID: \"9af55cc7-0e27-43ce-8db1-ce73a35d361e\") " pod="openshift-dns/dns-default-jf9jd" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.859487 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gckp\" (UniqueName: \"kubernetes.io/projected/4da03a7e-6764-44c5-bdca-52dc85f316fc-kube-api-access-8gckp\") pod \"etcd-operator-b45778765-wgzrz\" (UID: \"4da03a7e-6764-44c5-bdca-52dc85f316fc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wgzrz" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.859507 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7259afcb-55a9-490c-a823-94e217d939e0-node-bootstrap-token\") pod \"machine-config-server-cfghm\" (UID: \"7259afcb-55a9-490c-a823-94e217d939e0\") " pod="openshift-machine-config-operator/machine-config-server-cfghm" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.859530 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6559dcc4-e08f-4c1b-89b4-164673cd2ed0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7sn8m\" (UID: \"6559dcc4-e08f-4c1b-89b4-164673cd2ed0\") " pod="openshift-marketplace/marketplace-operator-79b997595-7sn8m" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.859554 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5krx\" (UniqueName: \"kubernetes.io/projected/8044779f-6644-4f6b-8265-2014af5cc045-kube-api-access-s5krx\") pod \"machine-config-controller-84d6567774-25gsr\" (UID: \"8044779f-6644-4f6b-8265-2014af5cc045\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25gsr" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.859558 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7efbc68-70b1-4521-9be4-e67317fe757e-config\") pod \"service-ca-operator-777779d784-bp7g7\" (UID: \"f7efbc68-70b1-4521-9be4-e67317fe757e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bp7g7" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.859913 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b2ad8f41-6fd7-4cc5-b8a1-38a886817c6c-mountpoint-dir\") pod \"csi-hostpathplugin-dzvhk\" (UID: \"b2ad8f41-6fd7-4cc5-b8a1-38a886817c6c\") " pod="hostpath-provisioner/csi-hostpathplugin-dzvhk" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.864868 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4da03a7e-6764-44c5-bdca-52dc85f316fc-etcd-service-ca\") pod \"etcd-operator-b45778765-wgzrz\" (UID: \"4da03a7e-6764-44c5-bdca-52dc85f316fc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wgzrz" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.865208 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6559dcc4-e08f-4c1b-89b4-164673cd2ed0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7sn8m\" (UID: \"6559dcc4-e08f-4c1b-89b4-164673cd2ed0\") " pod="openshift-marketplace/marketplace-operator-79b997595-7sn8m" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.865376 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b2ad8f41-6fd7-4cc5-b8a1-38a886817c6c-mountpoint-dir\") pod \"csi-hostpathplugin-dzvhk\" (UID: \"b2ad8f41-6fd7-4cc5-b8a1-38a886817c6c\") " pod="hostpath-provisioner/csi-hostpathplugin-dzvhk" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.865415 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dmk5\" (UniqueName: \"kubernetes.io/projected/c931683a-9657-49c3-87e4-f76d8f2bf95a-kube-api-access-8dmk5\") pod \"kube-storage-version-migrator-operator-b67b599dd-nqts5\" (UID: \"c931683a-9657-49c3-87e4-f76d8f2bf95a\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nqts5" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.865433 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqm2q\" (UniqueName: \"kubernetes.io/projected/6559dcc4-e08f-4c1b-89b4-164673cd2ed0-kube-api-access-nqm2q\") pod \"marketplace-operator-79b997595-7sn8m\" (UID: \"6559dcc4-e08f-4c1b-89b4-164673cd2ed0\") " pod="openshift-marketplace/marketplace-operator-79b997595-7sn8m" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.865438 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f06e3896-8ef9-4988-974f-446fb0bb3faf-signing-cabundle\") pod \"service-ca-9c57cc56f-w594l\" (UID: \"f06e3896-8ef9-4988-974f-446fb0bb3faf\") " pod="openshift-service-ca/service-ca-9c57cc56f-w594l" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.865549 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ad609ed-908e-45b1-90e9-0068a4c1d700-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-lssjz\" (UID: \"5ad609ed-908e-45b1-90e9-0068a4c1d700\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lssjz" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.865568 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4da03a7e-6764-44c5-bdca-52dc85f316fc-serving-cert\") pod \"etcd-operator-b45778765-wgzrz\" (UID: \"4da03a7e-6764-44c5-bdca-52dc85f316fc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wgzrz" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.865611 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b2ad8f41-6fd7-4cc5-b8a1-38a886817c6c-csi-data-dir\") pod \"csi-hostpathplugin-dzvhk\" (UID: \"b2ad8f41-6fd7-4cc5-b8a1-38a886817c6c\") " pod="hostpath-provisioner/csi-hostpathplugin-dzvhk" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.865832 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2ab2e98f-4cb6-47c6-acbf-b2b58621c78f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-85qfd\" (UID: \"2ab2e98f-4cb6-47c6-acbf-b2b58621c78f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85qfd" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.867783 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b2ad8f41-6fd7-4cc5-b8a1-38a886817c6c-csi-data-dir\") pod \"csi-hostpathplugin-dzvhk\" (UID: \"b2ad8f41-6fd7-4cc5-b8a1-38a886817c6c\") " pod="hostpath-provisioner/csi-hostpathplugin-dzvhk" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.867856 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4da03a7e-6764-44c5-bdca-52dc85f316fc-etcd-client\") pod \"etcd-operator-b45778765-wgzrz\" (UID: \"4da03a7e-6764-44c5-bdca-52dc85f316fc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wgzrz" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.867974 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9rzg\" (UniqueName: 
\"kubernetes.io/projected/2ab2e98f-4cb6-47c6-acbf-b2b58621c78f-kube-api-access-v9rzg\") pod \"machine-config-operator-74547568cd-85qfd\" (UID: \"2ab2e98f-4cb6-47c6-acbf-b2b58621c78f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85qfd" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.868020 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4da03a7e-6764-44c5-bdca-52dc85f316fc-etcd-ca\") pod \"etcd-operator-b45778765-wgzrz\" (UID: \"4da03a7e-6764-44c5-bdca-52dc85f316fc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wgzrz" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.868072 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2ab2e98f-4cb6-47c6-acbf-b2b58621c78f-proxy-tls\") pod \"machine-config-operator-74547568cd-85qfd\" (UID: \"2ab2e98f-4cb6-47c6-acbf-b2b58621c78f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85qfd" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.868132 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vmsd\" (UniqueName: \"kubernetes.io/projected/9ee2bc38-213d-4181-8e23-0f579b87c986-kube-api-access-8vmsd\") pod \"collect-profiles-29501115-zcgzz\" (UID: \"9ee2bc38-213d-4181-8e23-0f579b87c986\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501115-zcgzz" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.868152 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6559dcc4-e08f-4c1b-89b4-164673cd2ed0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7sn8m\" (UID: \"6559dcc4-e08f-4c1b-89b4-164673cd2ed0\") " pod="openshift-marketplace/marketplace-operator-79b997595-7sn8m" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.868178 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15911fdb-68b4-453a-a196-d4806f11ab2f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zjp5b\" (UID: \"15911fdb-68b4-453a-a196-d4806f11ab2f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zjp5b" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.868195 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c931683a-9657-49c3-87e4-f76d8f2bf95a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-nqts5\" (UID: \"c931683a-9657-49c3-87e4-f76d8f2bf95a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nqts5" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.868209 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4da03a7e-6764-44c5-bdca-52dc85f316fc-config\") pod \"etcd-operator-b45778765-wgzrz\" (UID: \"4da03a7e-6764-44c5-bdca-52dc85f316fc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wgzrz" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.868225 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0ab37823-7471-4bc8-b5e5-28110c1e5f4e-cert\") pod \"ingress-canary-rlwl7\" (UID: 
\"0ab37823-7471-4bc8-b5e5-28110c1e5f4e\") " pod="openshift-ingress-canary/ingress-canary-rlwl7" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.868895 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15911fdb-68b4-453a-a196-d4806f11ab2f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zjp5b\" (UID: \"15911fdb-68b4-453a-a196-d4806f11ab2f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zjp5b" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.869339 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4da03a7e-6764-44c5-bdca-52dc85f316fc-etcd-ca\") pod \"etcd-operator-b45778765-wgzrz\" (UID: \"4da03a7e-6764-44c5-bdca-52dc85f316fc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wgzrz" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.869766 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4da03a7e-6764-44c5-bdca-52dc85f316fc-config\") pod \"etcd-operator-b45778765-wgzrz\" (UID: \"4da03a7e-6764-44c5-bdca-52dc85f316fc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wgzrz" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.871013 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7efbc68-70b1-4521-9be4-e67317fe757e-serving-cert\") pod \"service-ca-operator-777779d784-bp7g7\" (UID: \"f7efbc68-70b1-4521-9be4-e67317fe757e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bp7g7" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.871216 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-x568j" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.873110 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7259afcb-55a9-490c-a823-94e217d939e0-certs\") pod \"machine-config-server-cfghm\" (UID: \"7259afcb-55a9-490c-a823-94e217d939e0\") " pod="openshift-machine-config-operator/machine-config-server-cfghm" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.873490 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r98wx\" (UniqueName: \"kubernetes.io/projected/5be28070-1b99-4b27-8777-7f7935ba0b6e-kube-api-access-r98wx\") pod \"console-operator-58897d9998-8787r\" (UID: \"5be28070-1b99-4b27-8777-7f7935ba0b6e\") " pod="openshift-console-operator/console-operator-58897d9998-8787r" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.874367 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f06e3896-8ef9-4988-974f-446fb0bb3faf-signing-key\") pod \"service-ca-9c57cc56f-w594l\" (UID: \"f06e3896-8ef9-4988-974f-446fb0bb3faf\") " pod="openshift-service-ca/service-ca-9c57cc56f-w594l" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.890424 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2ab2e98f-4cb6-47c6-acbf-b2b58621c78f-proxy-tls\") pod \"machine-config-operator-74547568cd-85qfd\" (UID: \"2ab2e98f-4cb6-47c6-acbf-b2b58621c78f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85qfd" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.890762 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4da03a7e-6764-44c5-bdca-52dc85f316fc-serving-cert\") pod \"etcd-operator-b45778765-wgzrz\" (UID: \"4da03a7e-6764-44c5-bdca-52dc85f316fc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wgzrz" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.890849 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/62652ba8-968d-4e22-8e4a-00497c30cacc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-zn96z\" (UID: \"62652ba8-968d-4e22-8e4a-00497c30cacc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zn96z" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.891194 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/81508e9e-bf9e-4d3e-b505-c9ca5ae81d79-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6df8n\" (UID: \"81508e9e-bf9e-4d3e-b505-c9ca5ae81d79\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6df8n" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.892206 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltbc2\" (UniqueName: \"kubernetes.io/projected/b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0-kube-api-access-ltbc2\") pod \"router-default-5444994796-jwp46\" (UID: \"b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0\") " pod="openshift-ingress/router-default-5444994796-jwp46" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.896129 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/74de48da-c435-4a4a-8042-c8fc935059b7-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-c9zn5\" (UID: \"74de48da-c435-4a4a-8042-c8fc935059b7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c9zn5" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.900081 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ee2bc38-213d-4181-8e23-0f579b87c986-secret-volume\") pod \"collect-profiles-29501115-zcgzz\" (UID: \"9ee2bc38-213d-4181-8e23-0f579b87c986\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501115-zcgzz" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.903652 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8044779f-6644-4f6b-8265-2014af5cc045-proxy-tls\") pod \"machine-config-controller-84d6567774-25gsr\" (UID: \"8044779f-6644-4f6b-8265-2014af5cc045\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25gsr" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.905243 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ad609ed-908e-45b1-90e9-0068a4c1d700-config\") pod \"kube-apiserver-operator-766d6c64bb-lssjz\" (UID: \"5ad609ed-908e-45b1-90e9-0068a4c1d700\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lssjz" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.906535 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ad609ed-908e-45b1-90e9-0068a4c1d700-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-lssjz\" (UID: \"5ad609ed-908e-45b1-90e9-0068a4c1d700\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lssjz" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.908217 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15911fdb-68b4-453a-a196-d4806f11ab2f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zjp5b\" (UID: \"15911fdb-68b4-453a-a196-d4806f11ab2f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zjp5b" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.908346 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6559dcc4-e08f-4c1b-89b4-164673cd2ed0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7sn8m\" (UID: \"6559dcc4-e08f-4c1b-89b4-164673cd2ed0\") " pod="openshift-marketplace/marketplace-operator-79b997595-7sn8m" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.908817 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9af55cc7-0e27-43ce-8db1-ce73a35d361e-metrics-tls\") pod \"dns-default-jf9jd\" (UID: \"9af55cc7-0e27-43ce-8db1-ce73a35d361e\") " pod="openshift-dns/dns-default-jf9jd" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.911082 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ff15c8fe-e1d6-4adb-85e3-decb591896c2-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6n95r\" (UID: 
\"ff15c8fe-e1d6-4adb-85e3-decb591896c2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6n95r" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.911747 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7259afcb-55a9-490c-a823-94e217d939e0-node-bootstrap-token\") pod \"machine-config-server-cfghm\" (UID: \"7259afcb-55a9-490c-a823-94e217d939e0\") " pod="openshift-machine-config-operator/machine-config-server-cfghm" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.913836 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4da03a7e-6764-44c5-bdca-52dc85f316fc-etcd-client\") pod \"etcd-operator-b45778765-wgzrz\" (UID: \"4da03a7e-6764-44c5-bdca-52dc85f316fc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wgzrz" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.914976 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/925e8b6f-3848-4ab5-ab00-55405db2334c-bound-sa-token\") pod \"ingress-operator-5b745b69d9-nsqw4\" (UID: \"925e8b6f-3848-4ab5-ab00-55405db2334c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nsqw4" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.915162 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0ab37823-7471-4bc8-b5e5-28110c1e5f4e-cert\") pod \"ingress-canary-rlwl7\" (UID: \"0ab37823-7471-4bc8-b5e5-28110c1e5f4e\") " pod="openshift-ingress-canary/ingress-canary-rlwl7" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.917037 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c931683a-9657-49c3-87e4-f76d8f2bf95a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-nqts5\" (UID: \"c931683a-9657-49c3-87e4-f76d8f2bf95a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nqts5" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.919160 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r67r7\" (UniqueName: \"kubernetes.io/projected/b523ddb6-b299-4bd7-9a33-75c025fb1805-kube-api-access-r67r7\") pod \"authentication-operator-69f744f599-4lbfr\" (UID: \"b523ddb6-b299-4bd7-9a33-75c025fb1805\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4lbfr" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.929284 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-25rjx" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.937033 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-8787r" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.938004 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brj2x\" (UniqueName: \"kubernetes.io/projected/a9b60922-75eb-4c97-85d5-12c146fe6cb1-kube-api-access-brj2x\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.951144 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-jwp46" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.959756 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf2s6\" (UniqueName: \"kubernetes.io/projected/925e8b6f-3848-4ab5-ab00-55405db2334c-kube-api-access-hf2s6\") pod \"ingress-operator-5b745b69d9-nsqw4\" (UID: \"925e8b6f-3848-4ab5-ab00-55405db2334c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nsqw4" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.969168 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:02 crc kubenswrapper[4789]: E0202 21:22:02.969601 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:03.469566251 +0000 UTC m=+143.764591270 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.979374 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgtml\" (UniqueName: \"kubernetes.io/projected/b4eaafb5-bf66-460f-86df-9b3825837d05-kube-api-access-hgtml\") pod \"openshift-apiserver-operator-796bbdcf4f-fv5wb\" (UID: \"b4eaafb5-bf66-460f-86df-9b3825837d05\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fv5wb" Feb 02 21:22:02 crc kubenswrapper[4789]: I0202 21:22:02.990001 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlmvb\" (UniqueName: \"kubernetes.io/projected/d9743bb9-e748-40c7-a15d-c33fad88c2f2-kube-api-access-tlmvb\") pod \"openshift-controller-manager-operator-756b6f6bc6-6t7gf\" (UID: \"d9743bb9-e748-40c7-a15d-c33fad88c2f2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6t7gf" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.020176 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvtwk\" (UniqueName: \"kubernetes.io/projected/7b59cb33-d5dc-4e90-b6fe-fe3ad948c346-kube-api-access-kvtwk\") pod \"packageserver-d55dfcdfc-hc84b\" (UID: \"7b59cb33-d5dc-4e90-b6fe-fe3ad948c346\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hc84b" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.065862 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/15911fdb-68b4-453a-a196-d4806f11ab2f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zjp5b\" (UID: \"15911fdb-68b4-453a-a196-d4806f11ab2f\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zjp5b" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.071147 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:03 crc kubenswrapper[4789]: E0202 21:22:03.071194 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:03.571167557 +0000 UTC m=+143.866192576 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.071223 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d22cr\" (UniqueName: \"kubernetes.io/projected/81508e9e-bf9e-4d3e-b505-c9ca5ae81d79-kube-api-access-d22cr\") pod \"cluster-samples-operator-665b6dd947-6df8n\" (UID: \"81508e9e-bf9e-4d3e-b505-c9ca5ae81d79\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6df8n" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.071511 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:03 crc kubenswrapper[4789]: E0202 21:22:03.071918 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:03.571898078 +0000 UTC m=+143.866923097 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.089627 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kg4r\" (UniqueName: \"kubernetes.io/projected/f06e3896-8ef9-4988-974f-446fb0bb3faf-kube-api-access-8kg4r\") pod \"service-ca-9c57cc56f-w594l\" (UID: \"f06e3896-8ef9-4988-974f-446fb0bb3faf\") " pod="openshift-service-ca/service-ca-9c57cc56f-w594l" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.121014 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs4jl\" (UniqueName: \"kubernetes.io/projected/c9720cea-0f21-43a7-91b1-31c95167f4a4-kube-api-access-zs4jl\") pod \"migrator-59844c95c7-26j4t\" (UID: \"c9720cea-0f21-43a7-91b1-31c95167f4a4\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-26j4t" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.122198 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-26j4t" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.138131 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gckp\" (UniqueName: \"kubernetes.io/projected/4da03a7e-6764-44c5-bdca-52dc85f316fc-kube-api-access-8gckp\") pod \"etcd-operator-b45778765-wgzrz\" (UID: \"4da03a7e-6764-44c5-bdca-52dc85f316fc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wgzrz" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.167699 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-x568j"] Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.169906 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dmk5\" (UniqueName: \"kubernetes.io/projected/c931683a-9657-49c3-87e4-f76d8f2bf95a-kube-api-access-8dmk5\") pod \"kube-storage-version-migrator-operator-b67b599dd-nqts5\" (UID: \"c931683a-9657-49c3-87e4-f76d8f2bf95a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nqts5" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.173019 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:03 crc kubenswrapper[4789]: E0202 21:22:03.173143 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:03.673123163 +0000 UTC m=+143.968148182 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.173273 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:03 crc kubenswrapper[4789]: E0202 21:22:03.173644 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:03.673624207 +0000 UTC m=+143.968649226 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.180874 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5krx\" (UniqueName: \"kubernetes.io/projected/8044779f-6644-4f6b-8265-2014af5cc045-kube-api-access-s5krx\") pod \"machine-config-controller-84d6567774-25gsr\" (UID: \"8044779f-6644-4f6b-8265-2014af5cc045\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25gsr" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.182996 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fv5wb" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.191412 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-4lbfr" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.191689 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqm2q\" (UniqueName: \"kubernetes.io/projected/6559dcc4-e08f-4c1b-89b4-164673cd2ed0-kube-api-access-nqm2q\") pod \"marketplace-operator-79b997595-7sn8m\" (UID: \"6559dcc4-e08f-4c1b-89b4-164673cd2ed0\") " pod="openshift-marketplace/marketplace-operator-79b997595-7sn8m" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.210733 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-dd7g5"] Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.217393 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ad609ed-908e-45b1-90e9-0068a4c1d700-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-lssjz\" (UID: \"5ad609ed-908e-45b1-90e9-0068a4c1d700\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lssjz" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.224319 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nsqw4" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.235503 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khdgh\" (UniqueName: \"kubernetes.io/projected/f7efbc68-70b1-4521-9be4-e67317fe757e-kube-api-access-khdgh\") pod \"service-ca-operator-777779d784-bp7g7\" (UID: \"f7efbc68-70b1-4521-9be4-e67317fe757e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bp7g7" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.239774 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lnf5c"] Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.255111 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2npsj\" (UniqueName: \"kubernetes.io/projected/b2ad8f41-6fd7-4cc5-b8a1-38a886817c6c-kube-api-access-2npsj\") pod \"csi-hostpathplugin-dzvhk\" (UID: \"b2ad8f41-6fd7-4cc5-b8a1-38a886817c6c\") " pod="hostpath-provisioner/csi-hostpathplugin-dzvhk" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.262299 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6t7gf" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.263742 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hc84b" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.263936 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-zfv5p"] Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.274236 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:03 crc kubenswrapper[4789]: E0202 21:22:03.274603 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:03.774565703 +0000 UTC m=+144.069590722 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.274693 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:03 crc kubenswrapper[4789]: E0202 21:22:03.275046 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:03.775033387 +0000 UTC m=+144.070058406 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.281902 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hspk\" (UniqueName: \"kubernetes.io/projected/0ab37823-7471-4bc8-b5e5-28110c1e5f4e-kube-api-access-8hspk\") pod \"ingress-canary-rlwl7\" (UID: \"0ab37823-7471-4bc8-b5e5-28110c1e5f4e\") " pod="openshift-ingress-canary/ingress-canary-rlwl7" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.291518 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-wgzrz" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.304422 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lssjz" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.305611 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6df8n" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.312262 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zjp5b" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.318770 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nqts5" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.328168 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bp7g7" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.336570 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb2t8\" (UniqueName: \"kubernetes.io/projected/ff15c8fe-e1d6-4adb-85e3-decb591896c2-kube-api-access-nb2t8\") pod \"multus-admission-controller-857f4d67dd-6n95r\" (UID: \"ff15c8fe-e1d6-4adb-85e3-decb591896c2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6n95r" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.350816 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-w594l" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.356226 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv4t6\" (UniqueName: \"kubernetes.io/projected/74de48da-c435-4a4a-8042-c8fc935059b7-kube-api-access-tv4t6\") pod \"package-server-manager-789f6589d5-c9zn5\" (UID: \"74de48da-c435-4a4a-8042-c8fc935059b7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c9zn5" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.360082 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4n6g\" (UniqueName: \"kubernetes.io/projected/9af55cc7-0e27-43ce-8db1-ce73a35d361e-kube-api-access-p4n6g\") pod \"dns-default-jf9jd\" (UID: \"9af55cc7-0e27-43ce-8db1-ce73a35d361e\") " pod="openshift-dns/dns-default-jf9jd" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.361102 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97bfj\" (UniqueName: \"kubernetes.io/projected/62652ba8-968d-4e22-8e4a-00497c30cacc-kube-api-access-97bfj\") pod \"control-plane-machine-set-operator-78cbb6b69f-zn96z\" (UID: \"62652ba8-968d-4e22-8e4a-00497c30cacc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zn96z" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.380796 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zn96z" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.381437 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:03 crc kubenswrapper[4789]: E0202 21:22:03.381724 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:03.881708989 +0000 UTC m=+144.176734008 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.382358 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hcxsv"] Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.383647 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84vfz"] Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.392233 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8787r"] Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.395811 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7sn8m" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.405825 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-6n95r" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.413143 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25gsr" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.429037 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-rlwl7" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.433944 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vmsd\" (UniqueName: \"kubernetes.io/projected/9ee2bc38-213d-4181-8e23-0f579b87c986-kube-api-access-8vmsd\") pod \"collect-profiles-29501115-zcgzz\" (UID: \"9ee2bc38-213d-4181-8e23-0f579b87c986\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501115-zcgzz" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.437210 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-757zt\" (UniqueName: \"kubernetes.io/projected/7259afcb-55a9-490c-a823-94e217d939e0-kube-api-access-757zt\") pod \"machine-config-server-cfghm\" (UID: \"7259afcb-55a9-490c-a823-94e217d939e0\") " pod="openshift-machine-config-operator/machine-config-server-cfghm" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.437424 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-cfghm" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.452960 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9rzg\" (UniqueName: \"kubernetes.io/projected/2ab2e98f-4cb6-47c6-acbf-b2b58621c78f-kube-api-access-v9rzg\") pod \"machine-config-operator-74547568cd-85qfd\" (UID: \"2ab2e98f-4cb6-47c6-acbf-b2b58621c78f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85qfd" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.454201 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-dzvhk" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.461939 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-jf9jd" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.488533 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:03 crc kubenswrapper[4789]: E0202 21:22:03.488830 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:03.988818653 +0000 UTC m=+144.283843662 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:03 crc kubenswrapper[4789]: W0202 21:22:03.500819 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48272be3_d48f_45b1_99a9_28ed3ba310ed.slice/crio-4c359585d99a3fe983ac9de4eed8904fc99e8b525ffdaeb4e06331dc708fa294 WatchSource:0}: Error finding container 4c359585d99a3fe983ac9de4eed8904fc99e8b525ffdaeb4e06331dc708fa294: Status 404 returned error can't find the container with id 4c359585d99a3fe983ac9de4eed8904fc99e8b525ffdaeb4e06331dc708fa294 Feb 02 21:22:03 crc kubenswrapper[4789]: W0202 21:22:03.554866 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5be28070_1b99_4b27_8777_7f7935ba0b6e.slice/crio-851f0b48d7b0976357db2de641d714a054e4cb2f7f62785aae9d66b11067d252 WatchSource:0}: Error finding container 851f0b48d7b0976357db2de641d714a054e4cb2f7f62785aae9d66b11067d252: Status 404 returned error can't find the container with id 851f0b48d7b0976357db2de641d714a054e4cb2f7f62785aae9d66b11067d252 Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.593142 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:03 crc kubenswrapper[4789]: E0202 21:22:03.593497 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:04.093482946 +0000 UTC m=+144.388507965 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.614531 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nsqw4"] Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.634006 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c9zn5" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.667511 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501115-zcgzz" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.671990 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-25rjx"] Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.684269 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85qfd" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.698221 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:03 crc kubenswrapper[4789]: E0202 21:22:03.698522 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:04.19851124 +0000 UTC m=+144.493536259 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.707774 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4qf8w" event={"ID":"6a6d8d36-0b11-496c-b07a-145358594fa2","Type":"ContainerStarted","Data":"b71f37ed42f6cb5b774d7bdaa090532563091a648f19f082c4a9ff4ee834014e"} Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.720761 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x568j" event={"ID":"a7cb5e21-a1f6-4e35-bf0d-e709b16d6994","Type":"ContainerStarted","Data":"e0caa4a10dc5d015815b09ecc6643d246e908bacd0be7e52ef25691854bb187e"} Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.728738 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-jwp46" event={"ID":"b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0","Type":"ContainerStarted","Data":"cc658d9aa2ccd4c46fec8aff18078efcb46f7a46c2c18020ac999fed812d428a"} Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.728779 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-jwp46" event={"ID":"b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0","Type":"ContainerStarted","Data":"e75dbdd6f0a7bd44d309ef2a3141f5935d49ddad91c2558b72ec650c43742dfa"} Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.735669 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-02 21:17:02 +0000 UTC, rotation deadline is 2026-11-03 08:40:15.80695822 +0000 UTC Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.735692 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6563h18m12.071268289s for next certificate rotation Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 
21:22:03.735838 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lbmnl" event={"ID":"9c0c6217-0e72-4682-8417-f6f6b2809bfa","Type":"ContainerStarted","Data":"12925b279ba7a0e6c1d4944af0efa23d362e6ae521b4ef4732a6f2bf2473f057"} Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.735860 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lbmnl" event={"ID":"9c0c6217-0e72-4682-8417-f6f6b2809bfa","Type":"ContainerStarted","Data":"f86299d1ab04c257c84d565a779c37682f6cd3bbeb7856c30647af78c0e2dcf9"} Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.736440 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lbmnl" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.743487 4789 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lbmnl container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.743534 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lbmnl" podUID="9c0c6217-0e72-4682-8417-f6f6b2809bfa" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.744519 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84vfz" event={"ID":"abf25fb1-39e2-4b26-9d3b-1cebdcbc7f98","Type":"ContainerStarted","Data":"cdb10abcef46ec5a18b02922b5ec14cb4f779a26fe5cf1f50afc82fc91a9d412"} Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.776491 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gv5lf" event={"ID":"62b2eeb5-6380-43c4-9c2e-e7aa29c88057","Type":"ContainerStarted","Data":"25eee36de801ab6711746461104b1d4badd47fd311fe75678aecdbbf80eff4ad"} Feb 02 21:22:03 crc kubenswrapper[4789]: W0202 21:22:03.776745 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod925e8b6f_3848_4ab5_ab00_55405db2334c.slice/crio-fded8f32d21e739c387e85a74002dd7ee9a38cc3470a94a526789a42ba3c64ea WatchSource:0}: Error finding container fded8f32d21e739c387e85a74002dd7ee9a38cc3470a94a526789a42ba3c64ea: Status 404 returned error can't find the container with id fded8f32d21e739c387e85a74002dd7ee9a38cc3470a94a526789a42ba3c64ea Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.780983 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-dd7g5" event={"ID":"899bce18-bfcc-42b8-ab5e-149d16e8eddb","Type":"ContainerStarted","Data":"faac5c86544032d9698ae406d791904b5430fe2a5be3cbaa3fa264e437db5ef2"} Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.784558 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-pkpwd" event={"ID":"dc42f967-8fe9-4a89-8e82-5272f070ed73","Type":"ContainerStarted","Data":"67f12562fa3638a0603f3b0c3d4b3a80cb86cdd848862f8476e0e37f69fff96f"} Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.784646 4789 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-pkpwd" event={"ID":"dc42f967-8fe9-4a89-8e82-5272f070ed73","Type":"ContainerStarted","Data":"7e28bff00efd4f9b3134ab7527b3690f6483b5f6816be600cc262f36ca6eb5c5"} Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.785386 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lnf5c" event={"ID":"e9b63e7c-d6fb-4ead-8ec0-2b38f5de0609","Type":"ContainerStarted","Data":"dc2a83869ab705efa9492af7a9b98bcd28235037049b35e3d179e74731894b77"} Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.791116 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p" event={"ID":"a2edcffa-d93c-4125-863d-05812a4ff79a","Type":"ContainerStarted","Data":"3eb6eca71866a76b44568a7e75b9bd8edd4de7a3a0214003eaf3708473118b3a"} Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.793332 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8787r" event={"ID":"5be28070-1b99-4b27-8777-7f7935ba0b6e","Type":"ContainerStarted","Data":"851f0b48d7b0976357db2de641d714a054e4cb2f7f62785aae9d66b11067d252"} Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.795033 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hcxsv" event={"ID":"48272be3-d48f-45b1-99a9-28ed3ba310ed","Type":"ContainerStarted","Data":"4c359585d99a3fe983ac9de4eed8904fc99e8b525ffdaeb4e06331dc708fa294"} Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.809909 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:03 crc kubenswrapper[4789]: E0202 21:22:03.814412 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:04.314393677 +0000 UTC m=+144.609418696 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.887780 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-26j4t"] Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.925105 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:03 crc kubenswrapper[4789]: E0202 21:22:03.929122 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:04.42910723 +0000 UTC m=+144.724132249 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.953084 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-jwp46" Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.962966 4789 patch_prober.go:28] interesting pod/router-default-5444994796-jwp46 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 02 21:22:03 crc kubenswrapper[4789]: I0202 21:22:03.963009 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwp46" podUID="b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 02 21:22:04 crc kubenswrapper[4789]: I0202 21:22:03.999093 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-4lbfr"] Feb 02 21:22:04 crc kubenswrapper[4789]: I0202 21:22:04.026391 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:04 crc kubenswrapper[4789]: E0202 21:22:04.026537 4789 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:04.526508655 +0000 UTC m=+144.821533674 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:04 crc kubenswrapper[4789]: I0202 21:22:04.026785 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:04 crc kubenswrapper[4789]: E0202 21:22:04.027097 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:04.527084911 +0000 UTC m=+144.822109930 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:04 crc kubenswrapper[4789]: I0202 21:22:04.127925 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:04 crc kubenswrapper[4789]: E0202 21:22:04.128267 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:04.628250194 +0000 UTC m=+144.923275213 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:04 crc kubenswrapper[4789]: W0202 21:22:04.155734 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb523ddb6_b299_4bd7_9a33_75c025fb1805.slice/crio-cb55bb15e1ca95b3531de2785149c8be3dbde1f21c5568d8d6ac7f9970e016de WatchSource:0}: Error finding container cb55bb15e1ca95b3531de2785149c8be3dbde1f21c5568d8d6ac7f9970e016de: Status 404 returned error can't find the container with id cb55bb15e1ca95b3531de2785149c8be3dbde1f21c5568d8d6ac7f9970e016de Feb 02 21:22:04 crc kubenswrapper[4789]: I0202 21:22:04.231238 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:04 crc kubenswrapper[4789]: E0202 21:22:04.231489 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:04.731477246 +0000 UTC m=+145.026502265 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:04 crc kubenswrapper[4789]: I0202 21:22:04.292240 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zjp5b"] Feb 02 21:22:04 crc kubenswrapper[4789]: I0202 21:22:04.300380 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6n95r"] Feb 02 21:22:04 crc kubenswrapper[4789]: I0202 21:22:04.309846 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6df8n"] Feb 02 21:22:04 crc kubenswrapper[4789]: I0202 21:22:04.326068 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-jwp46" podStartSLOduration=121.32605597 podStartE2EDuration="2m1.32605597s" podCreationTimestamp="2026-02-02 21:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:04.324379491 +0000 UTC m=+144.619404510" watchObservedRunningTime="2026-02-02 21:22:04.32605597 +0000 UTC m=+144.621080989" Feb 02 21:22:04 crc kubenswrapper[4789]: I0202 21:22:04.334050 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:04 crc kubenswrapper[4789]: E0202 21:22:04.334466 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:04.834447871 +0000 UTC m=+145.129472890 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:04 crc kubenswrapper[4789]: I0202 21:22:04.435192 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:04 crc kubenswrapper[4789]: E0202 21:22:04.435441 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:04.935431249 +0000 UTC m=+145.230456268 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:04 crc kubenswrapper[4789]: I0202 21:22:04.516927 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fzd6q" podStartSLOduration=124.516910885 podStartE2EDuration="2m4.516910885s" podCreationTimestamp="2026-02-02 21:20:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:04.476957805 +0000 UTC m=+144.771982824" watchObservedRunningTime="2026-02-02 21:22:04.516910885 +0000 UTC m=+144.811935904" Feb 02 21:22:04 crc kubenswrapper[4789]: I0202 21:22:04.536225 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:04 crc kubenswrapper[4789]: E0202 21:22:04.536644 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:05.036628113 +0000 UTC m=+145.331653132 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:04 crc kubenswrapper[4789]: I0202 21:22:04.588398 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-wgzrz"] Feb 02 21:22:04 crc kubenswrapper[4789]: I0202 21:22:04.673559 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:04 crc kubenswrapper[4789]: E0202 21:22:04.673968 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:05.173951817 +0000 UTC m=+145.468976836 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:04 crc kubenswrapper[4789]: I0202 21:22:04.695331 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bdqgm" podStartSLOduration=121.695313912 podStartE2EDuration="2m1.695313912s" podCreationTimestamp="2026-02-02 21:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:04.693343665 +0000 UTC m=+144.988368684" watchObservedRunningTime="2026-02-02 21:22:04.695313912 +0000 UTC m=+144.990338921" Feb 02 21:22:04 crc kubenswrapper[4789]: I0202 21:22:04.739463 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lbmnl" podStartSLOduration=121.739444162 podStartE2EDuration="2m1.739444162s" podCreationTimestamp="2026-02-02 21:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:04.738249338 +0000 UTC m=+145.033274357" watchObservedRunningTime="2026-02-02 21:22:04.739444162 +0000 UTC m=+145.034469171" Feb 02 21:22:04 crc kubenswrapper[4789]: I0202 21:22:04.777742 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:04 crc kubenswrapper[4789]: E0202 
21:22:04.778257 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:05.27824172 +0000 UTC m=+145.573266739 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:04 crc kubenswrapper[4789]: I0202 21:22:04.857097 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-25rjx" event={"ID":"0d7fbad6-8c3b-4d05-8ca4-ef9f3f39ec4f","Type":"ContainerStarted","Data":"55b106f561be5d2dc4534492d32068aa944e7f5d5b92b3d06962c69fbed83ad6"} Feb 02 21:22:04 crc kubenswrapper[4789]: I0202 21:22:04.857138 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-25rjx" event={"ID":"0d7fbad6-8c3b-4d05-8ca4-ef9f3f39ec4f","Type":"ContainerStarted","Data":"6412933703f271ef0e0b742226d1152a5006e16fdf827767f6054e8fe4d8f39e"} Feb 02 21:22:04 crc kubenswrapper[4789]: I0202 21:22:04.876598 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-dd7g5" event={"ID":"899bce18-bfcc-42b8-ab5e-149d16e8eddb","Type":"ContainerStarted","Data":"62c9a4d31f583a8c7de7ebbcfca30eefeeffcd9d943249f0571941624fcdae1a"} Feb 02 21:22:04 crc kubenswrapper[4789]: I0202 21:22:04.877406 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-dd7g5" Feb 02 21:22:04 crc kubenswrapper[4789]: I0202 21:22:04.879013 4789 patch_prober.go:28] interesting pod/downloads-7954f5f757-dd7g5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 02 21:22:04 crc kubenswrapper[4789]: I0202 21:22:04.879056 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dd7g5" podUID="899bce18-bfcc-42b8-ab5e-149d16e8eddb" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 02 21:22:04 crc kubenswrapper[4789]: I0202 21:22:04.879787 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:04 crc kubenswrapper[4789]: E0202 21:22:04.880241 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:05.380227786 +0000 UTC m=+145.675252805 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:04 crc kubenswrapper[4789]: I0202 21:22:04.887885 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zjp5b" event={"ID":"15911fdb-68b4-453a-a196-d4806f11ab2f","Type":"ContainerStarted","Data":"6979522527babc393ddccb5fd55dd9b15b3de11368089d71ddf044f292b17313"} Feb 02 21:22:04 crc kubenswrapper[4789]: I0202 21:22:04.896967 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gv5lf" podStartSLOduration=121.896944837 podStartE2EDuration="2m1.896944837s" podCreationTimestamp="2026-02-02 21:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:04.888174415 +0000 UTC m=+145.183199434" watchObservedRunningTime="2026-02-02 21:22:04.896944837 +0000 UTC m=+145.191969856" Feb 02 21:22:04 crc kubenswrapper[4789]: I0202 21:22:04.927663 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lnf5c" event={"ID":"e9b63e7c-d6fb-4ead-8ec0-2b38f5de0609","Type":"ContainerStarted","Data":"1e8b79a9f5e3fb3040e85d4bd56848dc20a487b86580ee6278c64f0a91add8ca"} Feb 02 21:22:04 crc kubenswrapper[4789]: I0202 21:22:04.929800 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hc84b"] Feb 02 21:22:04 crc kubenswrapper[4789]: I0202 21:22:04.933185 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-wgzrz" event={"ID":"4da03a7e-6764-44c5-bdca-52dc85f316fc","Type":"ContainerStarted","Data":"b7fe04912dd4b76fd590935b88f9c57a9f39c32f3d59ad7f862b6c93d3e36f25"} Feb 02 21:22:04 crc kubenswrapper[4789]: I0202 21:22:04.936230 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-26j4t" event={"ID":"c9720cea-0f21-43a7-91b1-31c95167f4a4","Type":"ContainerStarted","Data":"cf665e4c076c799926a41b5b059504f64056bc57d59402bdb2d5ed7025419664"} Feb 02 21:22:04 crc kubenswrapper[4789]: I0202 21:22:04.963075 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hcxsv" event={"ID":"48272be3-d48f-45b1-99a9-28ed3ba310ed","Type":"ContainerStarted","Data":"b05172f408ed97c5e9c8eb6e937bf2c06a910d716d821f52eb468d7a18836552"} Feb 02 21:22:04 crc kubenswrapper[4789]: I0202 21:22:04.964346 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hcxsv" Feb 02 21:22:04 crc kubenswrapper[4789]: I0202 21:22:04.972011 4789 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-hcxsv container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Feb 02 21:22:04 
crc kubenswrapper[4789]: I0202 21:22:04.972083 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hcxsv" podUID="48272be3-d48f-45b1-99a9-28ed3ba310ed" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Feb 02 21:22:04 crc kubenswrapper[4789]: I0202 21:22:04.972165 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-cfghm" event={"ID":"7259afcb-55a9-490c-a823-94e217d939e0","Type":"ContainerStarted","Data":"90abb8bd0b0dd4aec7e8d79c4b5badfc8fad8d033012c745dabe224f5d7ff604"} Feb 02 21:22:04 crc kubenswrapper[4789]: I0202 21:22:04.982016 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:04 crc kubenswrapper[4789]: E0202 21:22:04.983442 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:05.483413936 +0000 UTC m=+145.778439015 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:04 crc kubenswrapper[4789]: I0202 21:22:04.986709 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8787r" event={"ID":"5be28070-1b99-4b27-8777-7f7935ba0b6e","Type":"ContainerStarted","Data":"2b311b41406abe6286d59638dbc1cb2cbefc147c50448d518311aca8542401a4"} Feb 02 21:22:04 crc kubenswrapper[4789]: I0202 21:22:04.986743 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-8787r" Feb 02 21:22:04 crc kubenswrapper[4789]: I0202 21:22:04.991639 4789 patch_prober.go:28] interesting pod/console-operator-58897d9998-8787r container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Feb 02 21:22:04 crc kubenswrapper[4789]: I0202 21:22:04.991701 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8787r" podUID="5be28070-1b99-4b27-8777-7f7935ba0b6e" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.004508 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x568j" 
event={"ID":"a7cb5e21-a1f6-4e35-bf0d-e709b16d6994","Type":"ContainerStarted","Data":"10f1ae5569859cdc38f9f0c0e54b8d505db332ebcf22f76d45adda1a3b1a00a7"} Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.005267 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nqts5"] Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.007959 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-4lbfr" event={"ID":"b523ddb6-b299-4bd7-9a33-75c025fb1805","Type":"ContainerStarted","Data":"cb55bb15e1ca95b3531de2785149c8be3dbde1f21c5568d8d6ac7f9970e016de"} Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.018273 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-hqmrz" podStartSLOduration=122.018257239 podStartE2EDuration="2m2.018257239s" podCreationTimestamp="2026-02-02 21:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:05.016079597 +0000 UTC m=+145.311104616" watchObservedRunningTime="2026-02-02 21:22:05.018257239 +0000 UTC m=+145.313282248" Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.019030 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nsqw4" event={"ID":"925e8b6f-3848-4ab5-ab00-55405db2334c","Type":"ContainerStarted","Data":"46c371136a5a7851475919425b87ca7b63eca3e8b2884a6ef24e3cd1d25199a2"} Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.019124 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nsqw4" event={"ID":"925e8b6f-3848-4ab5-ab00-55405db2334c","Type":"ContainerStarted","Data":"fded8f32d21e739c387e85a74002dd7ee9a38cc3470a94a526789a42ba3c64ea"} Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.028531 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6n95r" event={"ID":"ff15c8fe-e1d6-4adb-85e3-decb591896c2","Type":"ContainerStarted","Data":"eb2e6d776830dec3e8ddad3031857088219e9cd7fe77de73295aa1538087f9ee"} Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.033555 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-4z5px" podStartSLOduration=123.033539179 podStartE2EDuration="2m3.033539179s" podCreationTimestamp="2026-02-02 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:05.032887331 +0000 UTC m=+145.327912350" watchObservedRunningTime="2026-02-02 21:22:05.033539179 +0000 UTC m=+145.328564198" Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.036316 4789 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lbmnl container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.036712 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lbmnl" podUID="9c0c6217-0e72-4682-8417-f6f6b2809bfa" 
containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.085257 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:05 crc kubenswrapper[4789]: E0202 21:22:05.086632 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:05.586619948 +0000 UTC m=+145.881644967 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.087774 4789 patch_prober.go:28] interesting pod/router-default-5444994796-jwp46 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 21:22:05 crc kubenswrapper[4789]: [-]has-synced failed: reason withheld Feb 02 21:22:05 crc kubenswrapper[4789]: [+]process-running ok Feb 02 21:22:05 crc kubenswrapper[4789]: healthz check failed Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.087829 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwp46" podUID="b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 21:22:05 crc kubenswrapper[4789]: W0202 21:22:05.113588 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc931683a_9657_49c3_87e4_f76d8f2bf95a.slice/crio-2ecfebe836d510a22010637db94f2165eff654b14eba6cd8d1f154d8dea4c2e4 WatchSource:0}: Error finding container 2ecfebe836d510a22010637db94f2165eff654b14eba6cd8d1f154d8dea4c2e4: Status 404 returned error can't find the container with id 2ecfebe836d510a22010637db94f2165eff654b14eba6cd8d1f154d8dea4c2e4 Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.143505 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-rlwl7"] Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.167270 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-bp7g7"] Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.186896 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " 
Feb 02 21:22:05 crc kubenswrapper[4789]: E0202 21:22:05.187113 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:05.687098011 +0000 UTC m=+145.982123030 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.211313 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7sn8m"]
Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.223877 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hcxsv" podStartSLOduration=122.223861049 podStartE2EDuration="2m2.223861049s" podCreationTimestamp="2026-02-02 21:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:05.223136649 +0000 UTC m=+145.518161668" watchObservedRunningTime="2026-02-02 21:22:05.223861049 +0000 UTC m=+145.518886068"
Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.264311 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501115-zcgzz"]
Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.290174 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:05 crc kubenswrapper[4789]: E0202 21:22:05.290529 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:05.790510119 +0000 UTC m=+146.085535138 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.308295 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-x568j" podStartSLOduration=123.30827999 podStartE2EDuration="2m3.30827999s" podCreationTimestamp="2026-02-02 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:05.307701054 +0000 UTC m=+145.602726073" watchObservedRunningTime="2026-02-02 21:22:05.30827999 +0000 UTC m=+145.603304999"
Feb 02 21:22:05 crc kubenswrapper[4789]: W0202 21:22:05.317846 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ab37823_7471_4bc8_b5e5_28110c1e5f4e.slice/crio-4b71c93157154b4037579019f9593b5014197b6969ef894d4b0c58afe9c46a0e WatchSource:0}: Error finding container 4b71c93157154b4037579019f9593b5014197b6969ef894d4b0c58afe9c46a0e: Status 404 returned error can't find the container with id 4b71c93157154b4037579019f9593b5014197b6969ef894d4b0c58afe9c46a0e
Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.367332 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-8787r" podStartSLOduration=123.36731681 podStartE2EDuration="2m3.36731681s" podCreationTimestamp="2026-02-02 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:05.365128647 +0000 UTC m=+145.660153666" watchObservedRunningTime="2026-02-02 21:22:05.36731681 +0000 UTC m=+145.662341829"
Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.392007 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:05 crc kubenswrapper[4789]: E0202 21:22:05.392346 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:05.89233156 +0000 UTC m=+146.187356579 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.447943 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6t7gf"]
Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.457598 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lnf5c" podStartSLOduration=123.457566639 podStartE2EDuration="2m3.457566639s" podCreationTimestamp="2026-02-02 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:05.454573343 +0000 UTC m=+145.749598362" watchObservedRunningTime="2026-02-02 21:22:05.457566639 +0000 UTC m=+145.752591658"
Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.492969 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:05 crc kubenswrapper[4789]: E0202 21:22:05.493289 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:05.993278637 +0000 UTC m=+146.288303656 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.508652 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dzvhk"] Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.514067 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-w594l"] Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.529874 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-85qfd"] Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.532821 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-25gsr"] Feb 02 21:22:05 crc kubenswrapper[4789]: W0202 21:22:05.534093 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf06e3896_8ef9_4988_974f_446fb0bb3faf.slice/crio-2727d24ad0ce292bb5a46277e09641d57989071a5a81043971927f108da7ba34 WatchSource:0}: Error finding container 2727d24ad0ce292bb5a46277e09641d57989071a5a81043971927f108da7ba34: Status 404 returned error can't find the container with id 2727d24ad0ce292bb5a46277e09641d57989071a5a81043971927f108da7ba34 Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.545254 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zn96z"] Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.550333 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c9zn5"] Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.556043 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lssjz"] Feb 02 21:22:05 crc kubenswrapper[4789]: W0202 21:22:05.559707 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62652ba8_968d_4e22_8e4a_00497c30cacc.slice/crio-e2fd892bdf528dd1b58d4f447d5e9aa5d7fda845071e080ecf0e8c2477a471b2 WatchSource:0}: Error finding container e2fd892bdf528dd1b58d4f447d5e9aa5d7fda845071e080ecf0e8c2477a471b2: Status 404 returned error can't find the container with id e2fd892bdf528dd1b58d4f447d5e9aa5d7fda845071e080ecf0e8c2477a471b2 Feb 02 21:22:05 crc kubenswrapper[4789]: W0202 21:22:05.572747 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8044779f_6644_4f6b_8265_2014af5cc045.slice/crio-477be7043c5b638ff021bffaf049cc250d2dae6c4e9c30e7a232dcae8c7c3174 WatchSource:0}: Error finding container 477be7043c5b638ff021bffaf049cc250d2dae6c4e9c30e7a232dcae8c7c3174: Status 404 returned error can't find the container with id 477be7043c5b638ff021bffaf049cc250d2dae6c4e9c30e7a232dcae8c7c3174 Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.595412 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:05 crc kubenswrapper[4789]: E0202 21:22:05.595750 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:06.095736617 +0000 UTC m=+146.390761636 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.676332 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fv5wb"] Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.691614 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-jf9jd"] Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.701040 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:05 crc kubenswrapper[4789]: E0202 21:22:05.709389 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:06.209356739 +0000 UTC m=+146.504381758 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.767681 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-dd7g5" podStartSLOduration=123.767659507 podStartE2EDuration="2m3.767659507s" podCreationTimestamp="2026-02-02 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:05.760233704 +0000 UTC m=+146.055258723" watchObservedRunningTime="2026-02-02 21:22:05.767659507 +0000 UTC m=+146.062684526" Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.802284 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:05 crc kubenswrapper[4789]: E0202 21:22:05.802685 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:06.302670535 +0000 UTC m=+146.597695554 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.903873 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:05 crc kubenswrapper[4789]: E0202 21:22:05.904163 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:06.404151847 +0000 UTC m=+146.699176866 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.958192 4789 patch_prober.go:28] interesting pod/router-default-5444994796-jwp46 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 21:22:05 crc kubenswrapper[4789]: [-]has-synced failed: reason withheld Feb 02 21:22:05 crc kubenswrapper[4789]: [+]process-running ok Feb 02 21:22:05 crc kubenswrapper[4789]: healthz check failed Feb 02 21:22:05 crc kubenswrapper[4789]: I0202 21:22:05.958237 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwp46" podUID="b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.004550 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:06 crc kubenswrapper[4789]: E0202 21:22:06.004716 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:06.504691522 +0000 UTC m=+146.799716541 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.004813 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:06 crc kubenswrapper[4789]: E0202 21:22:06.005125 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:06.505111724 +0000 UTC m=+146.800136743 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.052221 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85qfd" event={"ID":"2ab2e98f-4cb6-47c6-acbf-b2b58621c78f","Type":"ContainerStarted","Data":"8dc7c228e1fd46b288c3fba0a363afaeb58be2ba9b574b581df43e7e046dd58b"} Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.052262 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85qfd" event={"ID":"2ab2e98f-4cb6-47c6-acbf-b2b58621c78f","Type":"ContainerStarted","Data":"3b19b9ac1e222ddb00b8f6ab231dd3109115bf8f38d933ad8c25d9f888310175"} Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.053449 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zn96z" event={"ID":"62652ba8-968d-4e22-8e4a-00497c30cacc","Type":"ContainerStarted","Data":"e2fd892bdf528dd1b58d4f447d5e9aa5d7fda845071e080ecf0e8c2477a471b2"} Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.055234 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7sn8m" event={"ID":"6559dcc4-e08f-4c1b-89b4-164673cd2ed0","Type":"ContainerStarted","Data":"83d4500eef1a1480016f0a774d8f505793fe5d64c66c533f6875f2d3f0ee8b35"} Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.055259 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7sn8m" event={"ID":"6559dcc4-e08f-4c1b-89b4-164673cd2ed0","Type":"ContainerStarted","Data":"b635fe6fefad0679163d7c23ad1928b33aeecf8c80d7d0c4c0a1d765f5bd38e6"} Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.056509 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6n95r" event={"ID":"ff15c8fe-e1d6-4adb-85e3-decb591896c2","Type":"ContainerStarted","Data":"be94e6dcd990b03d47450382ed78a32ac5136ee2d0092c64895a1b892ad9b0be"} Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.057862 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6t7gf" event={"ID":"d9743bb9-e748-40c7-a15d-c33fad88c2f2","Type":"ContainerStarted","Data":"6bc649f60e87e7f4725d4b16aaed4b9baed307fc88571864a14304a5a0f7585b"} Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.057986 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6t7gf" event={"ID":"d9743bb9-e748-40c7-a15d-c33fad88c2f2","Type":"ContainerStarted","Data":"ed0d0599b81e9675f84e717a26221d0eac58394d44ad73473ab50a61f8cafc67"} Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.059158 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-26j4t" 
event={"ID":"c9720cea-0f21-43a7-91b1-31c95167f4a4","Type":"ContainerStarted","Data":"0f0142f94fbbaa58edf66a90c7e13c60fcb701da962449ea58b69b8996f0e7b0"} Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.060626 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nqts5" event={"ID":"c931683a-9657-49c3-87e4-f76d8f2bf95a","Type":"ContainerStarted","Data":"48d611d2e1681330243931de91c3f613c2da4660adf15743c8907b9e104e2cf1"} Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.060661 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nqts5" event={"ID":"c931683a-9657-49c3-87e4-f76d8f2bf95a","Type":"ContainerStarted","Data":"2ecfebe836d510a22010637db94f2165eff654b14eba6cd8d1f154d8dea4c2e4"} Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.062940 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c9zn5" event={"ID":"74de48da-c435-4a4a-8042-c8fc935059b7","Type":"ContainerStarted","Data":"e2f70cbaf698a34239122be62a8a337b69be29266bad7de73e8f9399d0b53c9a"} Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.065382 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4qf8w" event={"ID":"6a6d8d36-0b11-496c-b07a-145358594fa2","Type":"ContainerStarted","Data":"40ea11b3217be17743f60b86e2bd2c81c0e31d817fa721e910b3ef5633b5e4ea"} Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.075707 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-w594l" event={"ID":"f06e3896-8ef9-4988-974f-446fb0bb3faf","Type":"ContainerStarted","Data":"2727d24ad0ce292bb5a46277e09641d57989071a5a81043971927f108da7ba34"} Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.077786 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hc84b" event={"ID":"7b59cb33-d5dc-4e90-b6fe-fe3ad948c346","Type":"ContainerStarted","Data":"456bcb2cf42fdd77e64e31a83520d1e6823297324f89a9102ceacc57dabd6a4c"} Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.077827 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hc84b" event={"ID":"7b59cb33-d5dc-4e90-b6fe-fe3ad948c346","Type":"ContainerStarted","Data":"4f266c29d7d87584f655f99e95d66b04a293677b9b17522814d6490b7a3fbb46"} Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.079049 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hc84b" Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.079691 4789 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-hc84b container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:5443/healthz\": dial tcp 10.217.0.24:5443: connect: connection refused" start-of-body= Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.079762 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hc84b" podUID="7b59cb33-d5dc-4e90-b6fe-fe3ad948c346" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.24:5443/healthz\": dial tcp 10.217.0.24:5443: connect: connection 
refused" Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.082373 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nqts5" podStartSLOduration=123.082360349 podStartE2EDuration="2m3.082360349s" podCreationTimestamp="2026-02-02 21:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:06.082321968 +0000 UTC m=+146.377346987" watchObservedRunningTime="2026-02-02 21:22:06.082360349 +0000 UTC m=+146.377385368" Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.090286 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lssjz" event={"ID":"5ad609ed-908e-45b1-90e9-0068a4c1d700","Type":"ContainerStarted","Data":"13370e69615cfbaa38c25f6e10f18046d23a2c7d010b775fe0d194f1279e18e1"} Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.094604 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-4lbfr" event={"ID":"b523ddb6-b299-4bd7-9a33-75c025fb1805","Type":"ContainerStarted","Data":"44b999707de754efb412b030f403a497cf44f5de0d1ce015e48bf9e2b06782f3"} Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.101730 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6df8n" event={"ID":"81508e9e-bf9e-4d3e-b505-c9ca5ae81d79","Type":"ContainerStarted","Data":"f762f35ebf523a335e8fd608b1e2eea131b650bf9e754db1f15f2d1b91acf0eb"} Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.101780 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6df8n" event={"ID":"81508e9e-bf9e-4d3e-b505-c9ca5ae81d79","Type":"ContainerStarted","Data":"53f7240361c1ecd0759a9c34221f4c9e36bac65ae5365c16732eb4dda6ec95c8"} Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.102858 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-pkpwd" event={"ID":"dc42f967-8fe9-4a89-8e82-5272f070ed73","Type":"ContainerStarted","Data":"9210bbcda97f62628adafcd4ef91ea3b8e3212f797c8f25c29b6d153c89e0221"} Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.108361 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hc84b" podStartSLOduration=123.108345017 podStartE2EDuration="2m3.108345017s" podCreationTimestamp="2026-02-02 21:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:06.107984217 +0000 UTC m=+146.403009236" watchObservedRunningTime="2026-02-02 21:22:06.108345017 +0000 UTC m=+146.403370036" Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.109359 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:06 crc kubenswrapper[4789]: E0202 21:22:06.111064 4789 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:06.611047585 +0000 UTC m=+146.906072604 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.115368 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25gsr" event={"ID":"8044779f-6644-4f6b-8265-2014af5cc045","Type":"ContainerStarted","Data":"477be7043c5b638ff021bffaf049cc250d2dae6c4e9c30e7a232dcae8c7c3174"} Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.123309 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501115-zcgzz" event={"ID":"9ee2bc38-213d-4181-8e23-0f579b87c986","Type":"ContainerStarted","Data":"6d89acfaef1b3506730c39c3731b4a07faef3b7d4dd42d0a85038266e3378c3e"} Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.123377 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501115-zcgzz" event={"ID":"9ee2bc38-213d-4181-8e23-0f579b87c986","Type":"ContainerStarted","Data":"8bafb44d8cfcbdc99217281c579a2e408aae4ddb1386258a29d1b3f4f994cfb9"} Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.133550 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-4qf8w" podStartSLOduration=124.133534012 podStartE2EDuration="2m4.133534012s" podCreationTimestamp="2026-02-02 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:06.13171329 +0000 UTC m=+146.426738309" watchObservedRunningTime="2026-02-02 21:22:06.133534012 +0000 UTC m=+146.428559031" Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.143949 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-rlwl7" event={"ID":"0ab37823-7471-4bc8-b5e5-28110c1e5f4e","Type":"ContainerStarted","Data":"993b8784843cd796dd560700517aff1c3d4a6b0b869e69e922c7f74853e280c1"} Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.144028 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-rlwl7" event={"ID":"0ab37823-7471-4bc8-b5e5-28110c1e5f4e","Type":"ContainerStarted","Data":"4b71c93157154b4037579019f9593b5014197b6969ef894d4b0c58afe9c46a0e"} Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.146387 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p" event={"ID":"a2edcffa-d93c-4125-863d-05812a4ff79a","Type":"ContainerStarted","Data":"ec8a9168c6f25216a41a23bc5e03444a15486e81cfdec6bf31a3133c82ba0e72"} Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.147125 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p" Feb 02 21:22:06 crc 
kubenswrapper[4789]: I0202 21:22:06.153072 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-4lbfr" podStartSLOduration=124.153059684 podStartE2EDuration="2m4.153059684s" podCreationTimestamp="2026-02-02 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:06.147026971 +0000 UTC m=+146.442051990" watchObservedRunningTime="2026-02-02 21:22:06.153059684 +0000 UTC m=+146.448084703"
Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.156175 4789 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-zfv5p container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" start-of-body=
Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.156250 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p" podUID="a2edcffa-d93c-4125-863d-05812a4ff79a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused"
Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.167057 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jf9jd" event={"ID":"9af55cc7-0e27-43ce-8db1-ce73a35d361e","Type":"ContainerStarted","Data":"2d1b2ac2646ce45745a5382d5eaa3f4433f21a9037f9f81aa247048180985064"}
Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.195303 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nsqw4" event={"ID":"925e8b6f-3848-4ab5-ab00-55405db2334c","Type":"ContainerStarted","Data":"98807361a2e67eeda8b974a6d5190bc3ea5c696b834316210a24b48fd5f9b2e2"}
Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.204420 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-cfghm" event={"ID":"7259afcb-55a9-490c-a823-94e217d939e0","Type":"ContainerStarted","Data":"3b296aa8f78b6899b59d9db54002f62dd3c3e427c787baa5b593393f18e7f76d"}
Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.207663 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-pkpwd" podStartSLOduration=124.207649076 podStartE2EDuration="2m4.207649076s" podCreationTimestamp="2026-02-02 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:06.192188581 +0000 UTC m=+146.487213600" watchObservedRunningTime="2026-02-02 21:22:06.207649076 +0000 UTC m=+146.502674095"
Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.209736 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zjp5b" event={"ID":"15911fdb-68b4-453a-a196-d4806f11ab2f","Type":"ContainerStarted","Data":"11f73d4e5e497c472041574ade08ffd2259beb7e754e1eb4f81bf706ffc9556e"}
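The pod_startup_latency_tracker lines are arithmetic over the timestamps they print: when no image pull is recorded (firstStartedPulling and lastFinishedPulling are both the zero time, as in every entry here), podStartSLOduration reduces to watchObservedRunningTime minus podCreationTimestamp. A small self-contained check against the authentication-operator entry above; this is a sketch of the arithmetic only, not the kubelet's tracker code.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the authentication-operator tracker entry;
	// the layout matches Go's default time.Time formatting used in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2026-02-02 21:20:02 +0000 UTC")
	observed, _ := time.Parse(layout, "2026-02-02 21:22:06.153059684 +0000 UTC")

	// No pull window to subtract: both pull timestamps are the zero time,
	// so the SLO duration and the end-to-end duration coincide.
	slo := observed.Sub(created)
	fmt.Printf("podStartSLOduration=%.9f\n", slo.Seconds()) // 124.153059684
	fmt.Printf("podStartE2EDuration=%q\n", slo.String())    // "2m4.153059684s"
}
```

The ~2m durations for operator pods here reflect pods created at 21:20:02-21:20:03 whose containers were only observed running at 21:22:06, consistent with the rest of this boot sequence.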
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:06 crc kubenswrapper[4789]: E0202 21:22:06.215358 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:06.715333628 +0000 UTC m=+147.010358717 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.221352 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29501115-zcgzz" podStartSLOduration=124.22133909 podStartE2EDuration="2m4.22133909s" podCreationTimestamp="2026-02-02 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:06.207145302 +0000 UTC m=+146.502170321" watchObservedRunningTime="2026-02-02 21:22:06.22133909 +0000 UTC m=+146.516364109" Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.221498 4789 generic.go:334] "Generic (PLEG): container finished" podID="0d7fbad6-8c3b-4d05-8ca4-ef9f3f39ec4f" containerID="55b106f561be5d2dc4534492d32068aa944e7f5d5b92b3d06962c69fbed83ad6" exitCode=0 Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.221607 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-25rjx" event={"ID":"0d7fbad6-8c3b-4d05-8ca4-ef9f3f39ec4f","Type":"ContainerDied","Data":"55b106f561be5d2dc4534492d32068aa944e7f5d5b92b3d06962c69fbed83ad6"} Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.222353 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-rlwl7" podStartSLOduration=6.222346939 podStartE2EDuration="6.222346939s" podCreationTimestamp="2026-02-02 21:22:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:06.219895389 +0000 UTC m=+146.514920408" watchObservedRunningTime="2026-02-02 21:22:06.222346939 +0000 UTC m=+146.517371958" Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.223205 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dzvhk" event={"ID":"b2ad8f41-6fd7-4cc5-b8a1-38a886817c6c","Type":"ContainerStarted","Data":"44612fbcaa09d4a3c2472596a0e4b6976b1d4de33a76bf3d174bbaa5e23cd3a7"} Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.224402 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-wgzrz" event={"ID":"4da03a7e-6764-44c5-bdca-52dc85f316fc","Type":"ContainerStarted","Data":"cdf50d491f95a6f77f0ff292d67dc7276943b7b30774c8614f050ad0792de457"} Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.225666 
4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fv5wb" event={"ID":"b4eaafb5-bf66-460f-86df-9b3825837d05","Type":"ContainerStarted","Data":"cb8ce52b7837192cd195da6543314d11052c7cc72a66e5398b708d757747570c"}
Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.235446 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84vfz" event={"ID":"abf25fb1-39e2-4b26-9d3b-1cebdcbc7f98","Type":"ContainerStarted","Data":"322282e86a5f3996a34ff0ac483d33d21090e585fecf34f90642514a6a6dfe12"}
Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.246771 4789 patch_prober.go:28] interesting pod/downloads-7954f5f757-dd7g5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.246826 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dd7g5" podUID="899bce18-bfcc-42b8-ab5e-149d16e8eddb" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.246776 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bp7g7" event={"ID":"f7efbc68-70b1-4521-9be4-e67317fe757e","Type":"ContainerStarted","Data":"f8801031826d660c738e65c55bae495da828f00667278c9b0c2b5f2b0e84c300"}
Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.246886 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bp7g7" event={"ID":"f7efbc68-70b1-4521-9be4-e67317fe757e","Type":"ContainerStarted","Data":"d498c19f3d30dc4ecf2d2b7a1e3d0c25e789a084793297d73808ada5a6d79244"}
Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.247752 4789 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-hcxsv container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.247779 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hcxsv" podUID="48272be3-d48f-45b1-99a9-28ed3ba310ed" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.248091 4789 patch_prober.go:28] interesting pod/console-operator-58897d9998-8787r container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body=
Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.248133 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8787r" podUID="5be28070-1b99-4b27-8777-7f7935ba0b6e" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused"
Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.251524 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-cfghm" podStartSLOduration=6.251507189 podStartE2EDuration="6.251507189s" podCreationTimestamp="2026-02-02 21:22:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:06.251496829 +0000 UTC m=+146.546521848" watchObservedRunningTime="2026-02-02 21:22:06.251507189 +0000 UTC m=+146.546532208"
Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.273192 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lbmnl"
Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.310946 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nsqw4" podStartSLOduration=123.31093131 podStartE2EDuration="2m3.31093131s" podCreationTimestamp="2026-02-02 21:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:06.271569057 +0000 UTC m=+146.566594076" watchObservedRunningTime="2026-02-02 21:22:06.31093131 +0000 UTC m=+146.605956329"
Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.313396 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p" podStartSLOduration=124.313389671 podStartE2EDuration="2m4.313389671s" podCreationTimestamp="2026-02-02 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:06.310292692 +0000 UTC m=+146.605317711" watchObservedRunningTime="2026-02-02 21:22:06.313389671 +0000 UTC m=+146.608414690"
Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.314059 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:06 crc kubenswrapper[4789]: E0202 21:22:06.314070 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:06.81403975 +0000 UTC m=+147.109064769 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.314491 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:06 crc kubenswrapper[4789]: E0202 21:22:06.317967 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:06.817949162 +0000 UTC m=+147.112974181 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.346419 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gv5lf"
Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.346516 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gv5lf"
Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.370541 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gv5lf"
Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.372020 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84vfz" podStartSLOduration=123.372004569 podStartE2EDuration="2m3.372004569s" podCreationTimestamp="2026-02-02 21:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:06.370007981 +0000 UTC m=+146.665033000" watchObservedRunningTime="2026-02-02 21:22:06.372004569 +0000 UTC m=+146.667029588"
Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.425984 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
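The alternating "operationExecutor.* started" / "Operation for ... failed. No retries permitted until ..." pairs come from the kubelet parking each failed volume operation with a time before which it may not run again; the volume reconciler keeps re-queuing on its short loop, so the same pair repeats while the backoff window is open. A rough Go sketch of that gate follows; the names and the constant 500ms window are taken from the log, while everything else (including the note that the real kubelet can grow the window for repeat failures) is an illustrative assumption, not the nestedpendingoperations source.

```go
package main

import (
	"fmt"
	"time"
)

// operation loosely mirrors one entry in a pending-operations table:
// a failed volume operation plus the earliest time a retry is allowed.
type operation struct {
	lastErrorTime       time.Time
	durationBeforeRetry time.Duration
}

// markFailed records a failure; this sketch keeps the constant 500ms
// window seen in the log rather than growing it exponentially.
func (op *operation) markFailed(now time.Time) {
	op.lastErrorTime = now
	if op.durationBeforeRetry == 0 {
		op.durationBeforeRetry = 500 * time.Millisecond
	}
}

// isExpired is the gate that yields "No retries permitted until ...".
func (op *operation) isExpired(now time.Time) error {
	next := op.lastErrorTime.Add(op.durationBeforeRetry)
	if now.Before(next) {
		return fmt.Errorf("No retries permitted until %s (durationBeforeRetry %s)",
			next.Format("2006-01-02 15:04:05.999999999 -0700 MST"), op.durationBeforeRetry)
	}
	return nil
}

func main() {
	var op operation
	op.markFailed(time.Now()) // MountDevice failed: driver not registered yet

	// A retry attempted inside the window is rejected, like the log lines.
	if err := op.isExpired(time.Now().Add(100 * time.Millisecond)); err != nil {
		fmt.Println(err)
	}
	// Past the window, the operation may run again.
	if err := op.isExpired(time.Now().Add(600 * time.Millisecond)); err == nil {
		fmt.Println("retry permitted")
	}
}
```

That is why the mount and unmount messages for pvc-657094db-... recur roughly twice a second in this stretch: each attempt fails instantly at the driver lookup, re-arms the 500ms window, and is re-queued.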
Feb 02 21:22:06 crc kubenswrapper[4789]: E0202 21:22:06.427057 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:06.927039723 +0000 UTC m=+147.222064742 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.432751 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zjp5b" podStartSLOduration=123.432729507 podStartE2EDuration="2m3.432729507s" podCreationTimestamp="2026-02-02 21:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:06.388787852 +0000 UTC m=+146.683812871" watchObservedRunningTime="2026-02-02 21:22:06.432729507 +0000 UTC m=+146.727754526"
Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.433808 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-wgzrz" podStartSLOduration=124.433802208 podStartE2EDuration="2m4.433802208s" podCreationTimestamp="2026-02-02 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:06.4286516 +0000 UTC m=+146.723676619" watchObservedRunningTime="2026-02-02 21:22:06.433802208 +0000 UTC m=+146.728827227"
Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.462230 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-4qf8w"
Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.462382 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-4qf8w"
Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.469672 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bp7g7" podStartSLOduration=123.46965576 podStartE2EDuration="2m3.46965576s" podCreationTimestamp="2026-02-02 21:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:06.449635634 +0000 UTC m=+146.744660653" watchObservedRunningTime="2026-02-02 21:22:06.46965576 +0000 UTC m=+146.764680779"
Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.527343 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:06 crc kubenswrapper[4789]: E0202 21:22:06.527742 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed.
No retries permitted until 2026-02-02 21:22:07.027728252 +0000 UTC m=+147.322753261 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.628827 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:06 crc kubenswrapper[4789]: E0202 21:22:06.629184 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:07.129126432 +0000 UTC m=+147.424151471 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.629348 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:06 crc kubenswrapper[4789]: E0202 21:22:06.629689 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:07.129675028 +0000 UTC m=+147.424700047 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.730016 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:06 crc kubenswrapper[4789]: E0202 21:22:06.730217 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:07.230188862 +0000 UTC m=+147.525213891 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.730330 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:06 crc kubenswrapper[4789]: E0202 21:22:06.730729 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:07.230717167 +0000 UTC m=+147.525742186 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.831305 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:06 crc kubenswrapper[4789]: E0202 21:22:06.831498 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:07.331456378 +0000 UTC m=+147.626481397 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.831693 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:06 crc kubenswrapper[4789]: E0202 21:22:06.831997 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:07.331984923 +0000 UTC m=+147.627009952 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.932251 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:06 crc kubenswrapper[4789]: E0202 21:22:06.932606 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:07.43259063 +0000 UTC m=+147.727615659 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.959038 4789 patch_prober.go:28] interesting pod/router-default-5444994796-jwp46 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 21:22:06 crc kubenswrapper[4789]: [-]has-synced failed: reason withheld
Feb 02 21:22:06 crc kubenswrapper[4789]: [+]process-running ok
Feb 02 21:22:06 crc kubenswrapper[4789]: healthz check failed
Feb 02 21:22:06 crc kubenswrapper[4789]: I0202 21:22:06.959117 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwp46" podUID="b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.033885 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:07 crc kubenswrapper[4789]: E0202 21:22:07.034230 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:07.534214856 +0000 UTC m=+147.829239875 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.135365 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:07 crc kubenswrapper[4789]: E0202 21:22:07.135839 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:07.635808331 +0000 UTC m=+147.930833350 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.236610 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:07 crc kubenswrapper[4789]: E0202 21:22:07.237004 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:07.736987745 +0000 UTC m=+148.032012754 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.253503 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6df8n" event={"ID":"81508e9e-bf9e-4d3e-b505-c9ca5ae81d79","Type":"ContainerStarted","Data":"696105199f616d7e7c257f5cd3673f5013f31393155f3f4f5d1f3430ae4040a7"}
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.254896 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zn96z" event={"ID":"62652ba8-968d-4e22-8e4a-00497c30cacc","Type":"ContainerStarted","Data":"18169c83c56e2a5c6e0af9cafc4fe3a99cd4caffac19ceaba50ca69929d9c367"}
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.256455 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25gsr" event={"ID":"8044779f-6644-4f6b-8265-2014af5cc045","Type":"ContainerStarted","Data":"dced074c40da86cbdc0ac9cee0b7c67ca1563d231b3177f55c7e6902a0b52c8f"}
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.256502 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25gsr" event={"ID":"8044779f-6644-4f6b-8265-2014af5cc045","Type":"ContainerStarted","Data":"fb3457c61a3e962ec744985f400400d9f2f18fba6a771dad1dcc2d63d58eb64f"}
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.258615 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-25rjx" event={"ID":"0d7fbad6-8c3b-4d05-8ca4-ef9f3f39ec4f","Type":"ContainerStarted","Data":"50dbdf79b4a9001ea58998370c6affba997a505e2a068f310144835e5f36f808"}
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.259165 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-25rjx"
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.260889 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85qfd" event={"ID":"2ab2e98f-4cb6-47c6-acbf-b2b58621c78f","Type":"ContainerStarted","Data":"c9429c6d830847d58c229a2678e2517562bfc666b0b8bf7c572a5cae6d30c3f5"}
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.263113 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lssjz" event={"ID":"5ad609ed-908e-45b1-90e9-0068a4c1d700","Type":"ContainerStarted","Data":"fd6c5d8c756789d70dd3cbb4fed19a16eb5b140c13d322f373daa4b31d9c002b"}
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.267517 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-26j4t" event={"ID":"c9720cea-0f21-43a7-91b1-31c95167f4a4","Type":"ContainerStarted","Data":"c67a23395b16c5b4337c0ad6bdc599ea235f6b7d894a597422152655b5db076a"}
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.269462 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-w594l" event={"ID":"f06e3896-8ef9-4988-974f-446fb0bb3faf","Type":"ContainerStarted","Data":"a2f3844fdb3071a6435bd0252921833608a345af1acc7deb5f4c82494772be8a"}
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.271717 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6n95r" event={"ID":"ff15c8fe-e1d6-4adb-85e3-decb591896c2","Type":"ContainerStarted","Data":"1b060fef4cc74412165f8ad35eff104f2605152d54ff10048b948ff7308365d8"}
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.273407 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fv5wb" event={"ID":"b4eaafb5-bf66-460f-86df-9b3825837d05","Type":"ContainerStarted","Data":"967264c5d8e6357d2a26744e84c46c095836f520a3fbdf66c143748a52cf16cb"}
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.275209 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jf9jd" event={"ID":"9af55cc7-0e27-43ce-8db1-ce73a35d361e","Type":"ContainerStarted","Data":"a879d6f9180414cb5c3eca04c59a3e6923c380a3d2dd21db9f2a79ea7b7fc987"}
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.275243 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jf9jd" event={"ID":"9af55cc7-0e27-43ce-8db1-ce73a35d361e","Type":"ContainerStarted","Data":"0910ae67892ec077b3cda5b80e198a9b7b9df88013105ea33f12c14a03b7aac4"}
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.275303 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-jf9jd"
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.277530 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c9zn5" event={"ID":"74de48da-c435-4a4a-8042-c8fc935059b7","Type":"ContainerStarted","Data":"95ba21a04870ad67a3bad5d4be62069062e4d5bb1573b1382d3e05ff4726de50"}
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.277572 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c9zn5" event={"ID":"74de48da-c435-4a4a-8042-c8fc935059b7","Type":"ContainerStarted","Data":"b833a8b6728b2952a60e5249d26e5ed107e7bae3fc2837abd675e3208b5185ae"}
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.278013 4789 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-zfv5p container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" start-of-body=
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.278066 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p" podUID="a2edcffa-d93c-4125-863d-05812a4ff79a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused"
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.278121 4789 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-hc84b container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:5443/healthz\": dial tcp 10.217.0.24:5443: connect: connection refused" start-of-body=
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.278145 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hc84b" podUID="7b59cb33-d5dc-4e90-b6fe-fe3ad948c346" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.24:5443/healthz\": dial tcp 10.217.0.24:5443: connect: connection refused"
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.278195 4789 patch_prober.go:28] interesting pod/downloads-7954f5f757-dd7g5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.278213 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dd7g5" podUID="899bce18-bfcc-42b8-ab5e-149d16e8eddb" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.280328 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6df8n" podStartSLOduration=125.280316732 podStartE2EDuration="2m5.280316732s" podCreationTimestamp="2026-02-02 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:07.279747646 +0000 UTC m=+147.574772665" watchObservedRunningTime="2026-02-02 21:22:07.280316732 +0000 UTC m=+147.575341751"
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.286902 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gv5lf"
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.299201 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zn96z" podStartSLOduration=124.299180775 podStartE2EDuration="2m4.299180775s" podCreationTimestamp="2026-02-02 21:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:07.297982431 +0000 UTC m=+147.593007450" watchObservedRunningTime="2026-02-02 21:22:07.299180775 +0000 UTC m=+147.594205794"
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.317795 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25gsr" podStartSLOduration=124.317776031 podStartE2EDuration="2m4.317776031s" podCreationTimestamp="2026-02-02 21:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:07.317552324 +0000 UTC m=+147.612577343" watchObservedRunningTime="2026-02-02 21:22:07.317776031 +0000 UTC m=+147.612801050"
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.338289 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:07 crc kubenswrapper[4789]: E0202 21:22:07.338418 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:07.838396474 +0000 UTC m=+148.133421503 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.341929 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:07 crc kubenswrapper[4789]: E0202 21:22:07.370174 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:07.870154879 +0000 UTC m=+148.165179898 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.386567 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fv5wb" podStartSLOduration=125.386541241 podStartE2EDuration="2m5.386541241s" podCreationTimestamp="2026-02-02 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:07.353267173 +0000 UTC m=+147.648292192" watchObservedRunningTime="2026-02-02 21:22:07.386541241 +0000 UTC m=+147.681566260"
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.396871 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85qfd" podStartSLOduration=124.396852498 podStartE2EDuration="2m4.396852498s" podCreationTimestamp="2026-02-02 21:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:07.396551989 +0000 UTC m=+147.691577008" watchObservedRunningTime="2026-02-02 21:22:07.396852498 +0000 UTC m=+147.691877517"
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.398827 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-7sn8m" podStartSLOduration=124.398813804 podStartE2EDuration="2m4.398813804s" podCreationTimestamp="2026-02-02 21:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:07.385880252 +0000 UTC m=+147.680905271" watchObservedRunningTime="2026-02-02 21:22:07.398813804 +0000 UTC m=+147.693838823"
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.424116 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-6n95r" podStartSLOduration=124.424095472 podStartE2EDuration="2m4.424095472s" podCreationTimestamp="2026-02-02 21:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:07.419076468 +0000 UTC m=+147.714101487" watchObservedRunningTime="2026-02-02 21:22:07.424095472 +0000 UTC m=+147.719120491"
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.443969 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:07 crc kubenswrapper[4789]: E0202 21:22:07.444395 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:07.944376356 +0000 UTC m=+148.239401385 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.448367 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-26j4t" podStartSLOduration=124.44834999 podStartE2EDuration="2m4.44834999s" podCreationTimestamp="2026-02-02 21:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:07.447813195 +0000 UTC m=+147.742838224" watchObservedRunningTime="2026-02-02 21:22:07.44834999 +0000 UTC m=+147.743375009"
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.471779 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-w594l" podStartSLOduration=124.471757454 podStartE2EDuration="2m4.471757454s" podCreationTimestamp="2026-02-02 21:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:07.468769528 +0000 UTC m=+147.763794547" watchObservedRunningTime="2026-02-02 21:22:07.471757454 +0000 UTC m=+147.766782473"
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.500884 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-25rjx" podStartSLOduration=125.500865492 podStartE2EDuration="2m5.500865492s" podCreationTimestamp="2026-02-02 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:07.498208056 +0000 UTC m=+147.793233075" watchObservedRunningTime="2026-02-02 21:22:07.500865492 +0000 UTC m=+147.795890511"
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.518383 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lssjz" podStartSLOduration=124.518366266 podStartE2EDuration="2m4.518366266s" podCreationTimestamp="2026-02-02 21:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:07.515520324 +0000 UTC m=+147.810545353" watchObservedRunningTime="2026-02-02 21:22:07.518366266 +0000 UTC m=+147.813391285"
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.545679 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:07 crc kubenswrapper[4789]: E0202 21:22:07.546046 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:08.046017743 +0000 UTC m=+148.341042762 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.574838 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c9zn5" podStartSLOduration=124.574786371 podStartE2EDuration="2m4.574786371s" podCreationTimestamp="2026-02-02 21:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:07.557284647 +0000 UTC m=+147.852309686" watchObservedRunningTime="2026-02-02 21:22:07.574786371 +0000 UTC m=+147.869811390"
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.594383 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-jf9jd" podStartSLOduration=7.594368175 podStartE2EDuration="7.594368175s" podCreationTimestamp="2026-02-02 21:22:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:07.574965416 +0000 UTC m=+147.869990435" watchObservedRunningTime="2026-02-02 21:22:07.594368175 +0000 UTC m=+147.889393194"
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.595842 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6t7gf" podStartSLOduration=125.595835197 podStartE2EDuration="2m5.595835197s" podCreationTimestamp="2026-02-02 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:07.594281762 +0000 UTC m=+147.889306791" watchObservedRunningTime="2026-02-02 21:22:07.595835197 +0000 UTC m=+147.890860206"
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.646777 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:07 crc kubenswrapper[4789]: E0202 21:22:07.646946 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:08.146922228 +0000 UTC m=+148.441947247 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.647126 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:07 crc kubenswrapper[4789]: E0202 21:22:07.647424 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:08.147407612 +0000 UTC m=+148.442432631 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.748253 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:07 crc kubenswrapper[4789]: E0202 21:22:07.748397 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:08.248379009 +0000 UTC m=+148.543404028 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.748497 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:07 crc kubenswrapper[4789]: E0202 21:22:07.748748 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:08.24873991 +0000 UTC m=+148.543764929 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.849973 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:07 crc kubenswrapper[4789]: E0202 21:22:07.850118 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:08.350098648 +0000 UTC m=+148.645123677 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.850319 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:07 crc kubenswrapper[4789]: E0202 21:22:07.850671 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:08.350661064 +0000 UTC m=+148.645686083 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.951524 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:07 crc kubenswrapper[4789]: E0202 21:22:07.951876 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:08.451849928 +0000 UTC m=+148.746874947 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.952035 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:07 crc kubenswrapper[4789]: E0202 21:22:07.952311 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:08.452303261 +0000 UTC m=+148.747328280 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.954568 4789 patch_prober.go:28] interesting pod/router-default-5444994796-jwp46 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 21:22:07 crc kubenswrapper[4789]: [-]has-synced failed: reason withheld
Feb 02 21:22:07 crc kubenswrapper[4789]: [+]process-running ok
Feb 02 21:22:07 crc kubenswrapper[4789]: healthz check failed
Feb 02 21:22:07 crc kubenswrapper[4789]: I0202 21:22:07.954641 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwp46" podUID="b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 21:22:08 crc kubenswrapper[4789]: I0202 21:22:08.053103 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:08 crc kubenswrapper[4789]: E0202 21:22:08.053269 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:08.553245577 +0000 UTC m=+148.848270596 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:08 crc kubenswrapper[4789]: I0202 21:22:08.053389 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:08 crc kubenswrapper[4789]: E0202 21:22:08.053707 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:08.55369883 +0000 UTC m=+148.848723849 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:08 crc kubenswrapper[4789]: I0202 21:22:08.154413 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:08 crc kubenswrapper[4789]: E0202 21:22:08.154656 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:08.654622036 +0000 UTC m=+148.949647055 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:08 crc kubenswrapper[4789]: I0202 21:22:08.154784 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:08 crc kubenswrapper[4789]: E0202 21:22:08.155147 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:08.655130391 +0000 UTC m=+148.950155500 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:08 crc kubenswrapper[4789]: I0202 21:22:08.255998 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:08 crc kubenswrapper[4789]: E0202 21:22:08.256146 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:08.756126989 +0000 UTC m=+149.051152028 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:08 crc kubenswrapper[4789]: I0202 21:22:08.256292 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:08 crc kubenswrapper[4789]: E0202 21:22:08.256607 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:08.756595273 +0000 UTC m=+149.051620292 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:08 crc kubenswrapper[4789]: I0202 21:22:08.281919 4789 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-hc84b container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:5443/healthz\": dial tcp 10.217.0.24:5443: connect: connection refused" start-of-body=
Feb 02 21:22:08 crc kubenswrapper[4789]: I0202 21:22:08.282229 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hc84b" podUID="7b59cb33-d5dc-4e90-b6fe-fe3ad948c346" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.24:5443/healthz\": dial tcp 10.217.0.24:5443: connect: connection refused"
Feb 02 21:22:08 crc kubenswrapper[4789]: I0202 21:22:08.282068 4789 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-zfv5p container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" start-of-body=
Feb 02 21:22:08 crc kubenswrapper[4789]: I0202 21:22:08.282305 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p" podUID="a2edcffa-d93c-4125-863d-05812a4ff79a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused"
Feb 02 21:22:08 crc kubenswrapper[4789]: I0202 21:22:08.282438 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c9zn5"
Feb 02 21:22:08 crc kubenswrapper[4789]: I0202 21:22:08.335145 4789 patch_prober.go:28] interesting pod/apiserver-76f77b778f-4qf8w container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 02 21:22:08 crc kubenswrapper[4789]: [+]log ok
Feb 02 21:22:08 crc kubenswrapper[4789]: [+]etcd ok
Feb 02 21:22:08 crc kubenswrapper[4789]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 02 21:22:08 crc kubenswrapper[4789]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 02 21:22:08 crc kubenswrapper[4789]: [+]poststarthook/max-in-flight-filter ok
Feb 02 21:22:08 crc kubenswrapper[4789]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 02 21:22:08 crc kubenswrapper[4789]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Feb 02 21:22:08 crc kubenswrapper[4789]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Feb 02 21:22:08 crc kubenswrapper[4789]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Feb 02 21:22:08 crc kubenswrapper[4789]: [+]poststarthook/project.openshift.io-projectcache ok
Feb 02 21:22:08 crc kubenswrapper[4789]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Feb 02 21:22:08 crc kubenswrapper[4789]: [+]poststarthook/openshift.io-startinformers ok
Feb 02 21:22:08 crc kubenswrapper[4789]: [+]poststarthook/openshift.io-restmapperupdater ok
Feb 02 21:22:08 crc kubenswrapper[4789]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Feb 02 21:22:08 crc kubenswrapper[4789]: livez check failed
Feb 02 21:22:08 crc kubenswrapper[4789]: I0202 21:22:08.335215 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-4qf8w" podUID="6a6d8d36-0b11-496c-b07a-145358594fa2" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 21:22:08 crc kubenswrapper[4789]: I0202 21:22:08.357280 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:08 crc kubenswrapper[4789]: E0202 21:22:08.357496 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:08.857464727 +0000 UTC m=+149.152489756 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:08 crc kubenswrapper[4789]: I0202 21:22:08.358067 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:08 crc kubenswrapper[4789]: E0202 21:22:08.361174 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:08.861161823 +0000 UTC m=+149.156186852 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:08 crc kubenswrapper[4789]: I0202 21:22:08.458955 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:08 crc kubenswrapper[4789]: E0202 21:22:08.459153 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:08.959123144 +0000 UTC m=+149.254148173 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:08 crc kubenswrapper[4789]: I0202 21:22:08.459189 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:08 crc kubenswrapper[4789]: E0202 21:22:08.459532 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:08.959520286 +0000 UTC m=+149.254545325 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:08 crc kubenswrapper[4789]: I0202 21:22:08.560050 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:08 crc kubenswrapper[4789]: E0202 21:22:08.560209 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:09.060186273 +0000 UTC m=+149.355211302 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:08 crc kubenswrapper[4789]: I0202 21:22:08.560388 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:08 crc kubenswrapper[4789]: E0202 21:22:08.560659 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:09.060651686 +0000 UTC m=+149.355676695 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:08 crc kubenswrapper[4789]: I0202 21:22:08.661166 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:08 crc kubenswrapper[4789]: E0202 21:22:08.661382 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:09.161352616 +0000 UTC m=+149.456377635 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:08 crc kubenswrapper[4789]: I0202 21:22:08.661466 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:08 crc kubenswrapper[4789]: E0202 21:22:08.661761 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:09.161750537 +0000 UTC m=+149.456775556 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:08 crc kubenswrapper[4789]: I0202 21:22:08.762824 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:08 crc kubenswrapper[4789]: E0202 21:22:08.762986 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:09.262963192 +0000 UTC m=+149.557988211 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:08 crc kubenswrapper[4789]: I0202 21:22:08.763401 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:08 crc kubenswrapper[4789]: E0202 21:22:08.763724 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:09.263716393 +0000 UTC m=+149.558741412 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:08 crc kubenswrapper[4789]: I0202 21:22:08.864228 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:08 crc kubenswrapper[4789]: E0202 21:22:08.864435 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:09.364404753 +0000 UTC m=+149.659429772 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:08 crc kubenswrapper[4789]: I0202 21:22:08.864638 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:08 crc kubenswrapper[4789]: E0202 21:22:08.865079 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:09.365059291 +0000 UTC m=+149.660084350 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:08 crc kubenswrapper[4789]: I0202 21:22:08.955723 4789 patch_prober.go:28] interesting pod/router-default-5444994796-jwp46 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 21:22:08 crc kubenswrapper[4789]: [-]has-synced failed: reason withheld Feb 02 21:22:08 crc kubenswrapper[4789]: [+]process-running ok Feb 02 21:22:08 crc kubenswrapper[4789]: healthz check failed Feb 02 21:22:08 crc kubenswrapper[4789]: I0202 21:22:08.955778 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwp46" podUID="b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 21:22:08 crc kubenswrapper[4789]: I0202 21:22:08.965904 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:08 crc kubenswrapper[4789]: E0202 21:22:08.966115 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:09.46608172 +0000 UTC m=+149.761106729 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:08 crc kubenswrapper[4789]: I0202 21:22:08.966218 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:08 crc kubenswrapper[4789]: E0202 21:22:08.966510 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:09.466495652 +0000 UTC m=+149.761520671 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:09 crc kubenswrapper[4789]: I0202 21:22:09.067255 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
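
[Annotation] The pair of errors repeating above share one root cause: the kubevirt.io.hostpath-provisioner CSI driver has not yet registered with the kubelet's plugin manager, so every MountVolume.MountDevice and UnmountVolume.TearDownAt attempt for pvc-657094db-... fails immediately, and nestedpendingoperations.go:348 arms a fresh "No retries permitted until" deadline 500ms out. The reconciler re-attempts on its next pass, which is why the same two records recur roughly every 100ms. The csi-hostpathplugin-dzvhk ContainerStarted event a little further down is presumably the driver pod coming up; once it registers, these operations can succeed. A minimal Go sketch of such a retry gate (illustrative only; retryGate and its fields are invented names, not kubelet's actual types):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retryGate models the "No retries permitted until ..." behaviour seen in
    // the log: after a failure, the operation may not be retried until a
    // deadline durationBeforeRetry in the future.
    type retryGate struct {
        notBefore time.Time
        delay     time.Duration
    }

    func (g *retryGate) try(op func() error) error {
        if time.Now().Before(g.notBefore) {
            return fmt.Errorf("no retries permitted until %s (durationBeforeRetry %s)",
                g.notBefore.Format(time.RFC3339Nano), g.delay)
        }
        if err := op(); err != nil {
            g.delay = 500 * time.Millisecond // initial backoff step seen in the log
            g.notBefore = time.Now().Add(g.delay)
            return err
        }
        return nil
    }

    func main() {
        g := &retryGate{}
        mountDevice := func() error {
            // Stand-in for attacher.MountDevice failing because the CSI driver
            // has not registered over the kubelet plugin socket yet.
            return errors.New("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")
        }
        for i := 0; i < 5; i++ {
            fmt.Printf("attempt %d: %v\n", i, g.try(mountDevice))
            time.Sleep(100 * time.Millisecond) // roughly the reconciler pass interval
        }
    }

Run standalone, this prints the driver-not-found error on the first pass and the "no retries permitted until ..." gate error on the following passes, mirroring the alternating I/E records above.
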
Feb 02 21:22:09 crc kubenswrapper[4789]: E0202 21:22:09.067447 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:09.567422278 +0000 UTC m=+149.862447297 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:09 crc kubenswrapper[4789]: I0202 21:22:09.067613 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:09 crc kubenswrapper[4789]: E0202 21:22:09.068112 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:09.568096878 +0000 UTC m=+149.863121907 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:09 crc kubenswrapper[4789]: I0202 21:22:09.168879 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:09 crc kubenswrapper[4789]: E0202 21:22:09.169225 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:09.669204059 +0000 UTC m=+149.964229078 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:09 crc kubenswrapper[4789]: I0202 21:22:09.169286 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:09 crc kubenswrapper[4789]: E0202 21:22:09.169618 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:09.6696069 +0000 UTC m=+149.964631929 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:09 crc kubenswrapper[4789]: I0202 21:22:09.270120 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:09 crc kubenswrapper[4789]: E0202 21:22:09.270245 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:09.770229378 +0000 UTC m=+150.065254397 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:09 crc kubenswrapper[4789]: I0202 21:22:09.270606 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:09 crc kubenswrapper[4789]: I0202 21:22:09.270635 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:22:09 crc kubenswrapper[4789]: E0202 21:22:09.270878 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:09.770867806 +0000 UTC m=+150.065892825 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:09 crc kubenswrapper[4789]: I0202 21:22:09.287608 4789 generic.go:334] "Generic (PLEG): container finished" podID="9ee2bc38-213d-4181-8e23-0f579b87c986" containerID="6d89acfaef1b3506730c39c3731b4a07faef3b7d4dd42d0a85038266e3378c3e" exitCode=0 Feb 02 21:22:09 crc kubenswrapper[4789]: I0202 21:22:09.287689 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501115-zcgzz" event={"ID":"9ee2bc38-213d-4181-8e23-0f579b87c986","Type":"ContainerDied","Data":"6d89acfaef1b3506730c39c3731b4a07faef3b7d4dd42d0a85038266e3378c3e"} Feb 02 21:22:09 crc kubenswrapper[4789]: I0202 21:22:09.289827 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dzvhk" event={"ID":"b2ad8f41-6fd7-4cc5-b8a1-38a886817c6c","Type":"ContainerStarted","Data":"03ce8d9c334aa3d0f72f7461ca3cefc28baf11f1f462318eaf58b6079853867c"} Feb 02 21:22:09 crc kubenswrapper[4789]: I0202 21:22:09.294019 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:22:09 crc kubenswrapper[4789]: I0202 21:22:09.297624 4789 
patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-25rjx container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Feb 02 21:22:09 crc kubenswrapper[4789]: I0202 21:22:09.297676 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-25rjx" podUID="0d7fbad6-8c3b-4d05-8ca4-ef9f3f39ec4f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Feb 02 21:22:09 crc kubenswrapper[4789]: I0202 21:22:09.371852 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:09 crc kubenswrapper[4789]: E0202 21:22:09.372045 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:09.872018398 +0000 UTC m=+150.167043417 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:09 crc kubenswrapper[4789]: I0202 21:22:09.372084 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:22:09 crc kubenswrapper[4789]: I0202 21:22:09.372126 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:09 crc kubenswrapper[4789]: I0202 21:22:09.372225 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:22:09 crc kubenswrapper[4789]: I0202 21:22:09.372303 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:22:09 crc kubenswrapper[4789]: E0202 21:22:09.372621 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:09.872604225 +0000 UTC m=+150.167629244 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:09 crc kubenswrapper[4789]: I0202 21:22:09.377390 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:22:09 crc kubenswrapper[4789]: I0202 21:22:09.377981 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:22:09 crc kubenswrapper[4789]: I0202 21:22:09.380103 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:22:09 crc kubenswrapper[4789]: I0202 21:22:09.446534 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 21:22:09 crc kubenswrapper[4789]: I0202 21:22:09.462374 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:22:09 crc kubenswrapper[4789]: I0202 21:22:09.472786 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:09 crc kubenswrapper[4789]: E0202 21:22:09.473092 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-02 21:22:09.973077128 +0000 UTC m=+150.268102147 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:09 crc kubenswrapper[4789]: I0202 21:22:09.488626 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 21:22:09 crc kubenswrapper[4789]: I0202 21:22:09.574082 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:09 crc kubenswrapper[4789]: E0202 21:22:09.574600 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:10.074570171 +0000 UTC m=+150.369595190 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:09 crc kubenswrapper[4789]: I0202 21:22:09.675030 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:09 crc kubenswrapper[4789]: E0202 21:22:09.675331 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:10.175316782 +0000 UTC m=+150.470341801 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:09 crc kubenswrapper[4789]: I0202 21:22:09.776203 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:09 crc kubenswrapper[4789]: E0202 21:22:09.776507 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:10.276495875 +0000 UTC m=+150.571520904 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:09 crc kubenswrapper[4789]: I0202 21:22:09.877019 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:09 crc kubenswrapper[4789]: E0202 21:22:09.877539 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:10.377512453 +0000 UTC m=+150.672537472 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:09 crc kubenswrapper[4789]: I0202 21:22:09.955627 4789 patch_prober.go:28] interesting pod/router-default-5444994796-jwp46 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 21:22:09 crc kubenswrapper[4789]: [-]has-synced failed: reason withheld
Feb 02 21:22:09 crc kubenswrapper[4789]: [+]process-running ok
Feb 02 21:22:09 crc kubenswrapper[4789]: healthz check failed
Feb 02 21:22:09 crc kubenswrapper[4789]: I0202 21:22:09.955686 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwp46" podUID="b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 21:22:09 crc kubenswrapper[4789]: I0202 21:22:09.978902 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:09 crc kubenswrapper[4789]: E0202 21:22:09.979274 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:10.479259533 +0000 UTC m=+150.774284552 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.080329 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
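
[Annotation] The router startup probe failure above uses the Kubernetes aggregated-healthz convention: each named sub-check contributes a [+]ok or [-]failed line, any failing check makes the endpoint return HTTP 500, and the kubelet's prober logs the status code plus the start of the body. The openshift-config-operator readiness failure a few records earlier is the other common failure mode: nothing is listening yet, so the probe fails with "connection refused" instead of a 500. A small self-contained Go handler in the same style (illustrative; the check names are copied from the log, the handler code is invented, not the router's actual implementation):

    package main

    import (
        "fmt"
        "net/http"
    )

    // namedCheck is one sub-check contributing a [+]/[-] line to /healthz.
    type namedCheck struct {
        name string
        run  func() error
    }

    func healthz(checks []namedCheck) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            body, failed := "", false
            for _, c := range checks {
                if err := c.run(); err != nil {
                    body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
                    failed = true
                } else {
                    body += fmt.Sprintf("[+]%s ok\n", c.name)
                }
            }
            if failed {
                // Any failing sub-check turns the whole probe into a 500,
                // which is exactly what the kubelet logged for the router.
                w.WriteHeader(http.StatusInternalServerError)
                body += "healthz check failed\n"
            }
            fmt.Fprint(w, body)
        }
    }

    func main() {
        checks := []namedCheck{
            {"backend-http", func() error { return fmt.Errorf("backends not loaded") }},
            {"has-synced", func() error { return fmt.Errorf("initial sync pending") }},
            {"process-running", func() error { return nil }},
        }
        http.HandleFunc("/healthz", healthz(checks))
        _ = http.ListenAndServe(":8080", nil) // probe: GET /healthz -> 500 until checks pass
    }
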
Feb 02 21:22:10 crc kubenswrapper[4789]: E0202 21:22:10.080427 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:10.580411706 +0000 UTC m=+150.875436725 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.080666 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:10 crc kubenswrapper[4789]: E0202 21:22:10.080922 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:10.58091438 +0000 UTC m=+150.875939399 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.130285 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ltv7s"]
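
[Annotation] The reconciler_common.go records around community-operators-ltv7s trace the kubelet volume manager's loop: for a newly added pod, each volume is first checked with VerifyControllerAttachedVolume and then mounted (MountVolume / SetUp), as the records just below show, while volumes that belong only to a deleted pod (here UID 8f668bae-...) are queued for UnmountVolume. A toy Go sketch of that desired-state vs. actual-state comparison (illustrative; the real reconciler also tracks attach/detach, device mounts, and per-operation backoff):

    package main

    import "fmt"

    // reconcile compares the desired set of volumes (from pods assigned to
    // the node) with the actual set (what is currently mounted), mirroring
    // the MountVolume/UnmountVolume decisions logged by reconciler_common.go.
    func reconcile(desired, actual map[string]bool) {
        for vol := range desired {
            if !actual[vol] {
                fmt.Printf("operationExecutor.MountVolume started for volume %q\n", vol)
                actual[vol] = true // assume the mount succeeds
            }
        }
        for vol := range actual {
            if !desired[vol] {
                fmt.Printf("operationExecutor.UnmountVolume started for volume %q\n", vol)
                delete(actual, vol)
            }
        }
    }

    func main() {
        // Hypothetical state loosely based on the surrounding records: the
        // new marketplace pod wants its volumes mounted, while the PVC of
        // the deleted pod should be torn down.
        desired := map[string]bool{
            "kube-api-access-956jm": true,
            "catalog-content":       true,
            "utilities":             true,
        }
        actual := map[string]bool{
            "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8": true,
        }
        reconcile(desired, actual)
    }
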
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.131434 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ltv7s"
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.132840 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.149977 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ltv7s"]
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.182545 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.182726 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-956jm\" (UniqueName: \"kubernetes.io/projected/82a7bf20-8db7-4d0f-91d4-a85ae5da91f5-kube-api-access-956jm\") pod \"community-operators-ltv7s\" (UID: \"82a7bf20-8db7-4d0f-91d4-a85ae5da91f5\") " pod="openshift-marketplace/community-operators-ltv7s"
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.182758 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82a7bf20-8db7-4d0f-91d4-a85ae5da91f5-catalog-content\") pod \"community-operators-ltv7s\" (UID: \"82a7bf20-8db7-4d0f-91d4-a85ae5da91f5\") " pod="openshift-marketplace/community-operators-ltv7s"
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.182782 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82a7bf20-8db7-4d0f-91d4-a85ae5da91f5-utilities\") pod \"community-operators-ltv7s\" (UID: \"82a7bf20-8db7-4d0f-91d4-a85ae5da91f5\") " pod="openshift-marketplace/community-operators-ltv7s"
Feb 02 21:22:10 crc kubenswrapper[4789]: E0202 21:22:10.182945 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:10.682929457 +0000 UTC m=+150.977954476 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.284201 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-956jm\" (UniqueName: \"kubernetes.io/projected/82a7bf20-8db7-4d0f-91d4-a85ae5da91f5-kube-api-access-956jm\") pod \"community-operators-ltv7s\" (UID: \"82a7bf20-8db7-4d0f-91d4-a85ae5da91f5\") " pod="openshift-marketplace/community-operators-ltv7s" Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.284532 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82a7bf20-8db7-4d0f-91d4-a85ae5da91f5-catalog-content\") pod \"community-operators-ltv7s\" (UID: \"82a7bf20-8db7-4d0f-91d4-a85ae5da91f5\") " pod="openshift-marketplace/community-operators-ltv7s" Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.284560 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82a7bf20-8db7-4d0f-91d4-a85ae5da91f5-utilities\") pod \"community-operators-ltv7s\" (UID: \"82a7bf20-8db7-4d0f-91d4-a85ae5da91f5\") " pod="openshift-marketplace/community-operators-ltv7s" Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.284616 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:10 crc kubenswrapper[4789]: E0202 21:22:10.284916 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:10.784904054 +0000 UTC m=+151.079929073 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.285025 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82a7bf20-8db7-4d0f-91d4-a85ae5da91f5-catalog-content\") pod \"community-operators-ltv7s\" (UID: \"82a7bf20-8db7-4d0f-91d4-a85ae5da91f5\") " pod="openshift-marketplace/community-operators-ltv7s" Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.285323 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82a7bf20-8db7-4d0f-91d4-a85ae5da91f5-utilities\") pod \"community-operators-ltv7s\" (UID: \"82a7bf20-8db7-4d0f-91d4-a85ae5da91f5\") " pod="openshift-marketplace/community-operators-ltv7s" Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.294491 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"eb2450cec7526343b88d67b7167444a171f4a22605ad1b471ea44c8c78745610"} Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.296257 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a0829b0c7a38822f532ef78cd18966e1bbf8171921ff74e7a881d3bb7503e58c"} Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.297135 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"86bafdd80c500e059a53000b9f5891a22684650c314e57d5c9d19b749df70d5f"} Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.299998 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-956jm\" (UniqueName: \"kubernetes.io/projected/82a7bf20-8db7-4d0f-91d4-a85ae5da91f5-kube-api-access-956jm\") pod \"community-operators-ltv7s\" (UID: \"82a7bf20-8db7-4d0f-91d4-a85ae5da91f5\") " pod="openshift-marketplace/community-operators-ltv7s" Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.332397 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f288q"] Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.333277 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f288q" Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.334872 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.346037 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f288q"] Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.385129 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.385381 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/750c480c-359c-47cc-9cc1-72c36bc5c783-catalog-content\") pod \"certified-operators-f288q\" (UID: \"750c480c-359c-47cc-9cc1-72c36bc5c783\") " pod="openshift-marketplace/certified-operators-f288q" Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.385411 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp5n7\" (UniqueName: \"kubernetes.io/projected/750c480c-359c-47cc-9cc1-72c36bc5c783-kube-api-access-cp5n7\") pod \"certified-operators-f288q\" (UID: \"750c480c-359c-47cc-9cc1-72c36bc5c783\") " pod="openshift-marketplace/certified-operators-f288q" Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.385459 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/750c480c-359c-47cc-9cc1-72c36bc5c783-utilities\") pod \"certified-operators-f288q\" (UID: \"750c480c-359c-47cc-9cc1-72c36bc5c783\") " pod="openshift-marketplace/certified-operators-f288q" Feb 02 21:22:10 crc kubenswrapper[4789]: E0202 21:22:10.385573 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:10.885557772 +0000 UTC m=+151.180582791 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.442300 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ltv7s" Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.486558 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp5n7\" (UniqueName: \"kubernetes.io/projected/750c480c-359c-47cc-9cc1-72c36bc5c783-kube-api-access-cp5n7\") pod \"certified-operators-f288q\" (UID: \"750c480c-359c-47cc-9cc1-72c36bc5c783\") " pod="openshift-marketplace/certified-operators-f288q" Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.486658 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/750c480c-359c-47cc-9cc1-72c36bc5c783-utilities\") pod \"certified-operators-f288q\" (UID: \"750c480c-359c-47cc-9cc1-72c36bc5c783\") " pod="openshift-marketplace/certified-operators-f288q" Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.486724 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.486747 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/750c480c-359c-47cc-9cc1-72c36bc5c783-catalog-content\") pod \"certified-operators-f288q\" (UID: \"750c480c-359c-47cc-9cc1-72c36bc5c783\") " pod="openshift-marketplace/certified-operators-f288q" Feb 02 21:22:10 crc kubenswrapper[4789]: E0202 21:22:10.487338 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:10.987288641 +0000 UTC m=+151.282313660 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.487556 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/750c480c-359c-47cc-9cc1-72c36bc5c783-catalog-content\") pod \"certified-operators-f288q\" (UID: \"750c480c-359c-47cc-9cc1-72c36bc5c783\") " pod="openshift-marketplace/certified-operators-f288q" Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.487682 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/750c480c-359c-47cc-9cc1-72c36bc5c783-utilities\") pod \"certified-operators-f288q\" (UID: \"750c480c-359c-47cc-9cc1-72c36bc5c783\") " pod="openshift-marketplace/certified-operators-f288q" Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.503740 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp5n7\" (UniqueName: \"kubernetes.io/projected/750c480c-359c-47cc-9cc1-72c36bc5c783-kube-api-access-cp5n7\") pod \"certified-operators-f288q\" (UID: \"750c480c-359c-47cc-9cc1-72c36bc5c783\") " pod="openshift-marketplace/certified-operators-f288q" Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.507923 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501115-zcgzz" Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.556167 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-plkrb"] Feb 02 21:22:10 crc kubenswrapper[4789]: E0202 21:22:10.556779 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ee2bc38-213d-4181-8e23-0f579b87c986" containerName="collect-profiles" Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.556792 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ee2bc38-213d-4181-8e23-0f579b87c986" containerName="collect-profiles" Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.556936 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ee2bc38-213d-4181-8e23-0f579b87c986" containerName="collect-profiles" Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.557943 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-plkrb" Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.572938 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-plkrb"] Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.587251 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.587297 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vmsd\" (UniqueName: \"kubernetes.io/projected/9ee2bc38-213d-4181-8e23-0f579b87c986-kube-api-access-8vmsd\") pod \"9ee2bc38-213d-4181-8e23-0f579b87c986\" (UID: \"9ee2bc38-213d-4181-8e23-0f579b87c986\") " Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.587353 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ee2bc38-213d-4181-8e23-0f579b87c986-config-volume\") pod \"9ee2bc38-213d-4181-8e23-0f579b87c986\" (UID: \"9ee2bc38-213d-4181-8e23-0f579b87c986\") " Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.587373 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ee2bc38-213d-4181-8e23-0f579b87c986-secret-volume\") pod \"9ee2bc38-213d-4181-8e23-0f579b87c986\" (UID: \"9ee2bc38-213d-4181-8e23-0f579b87c986\") " Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.587526 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a0bac2b-aef3-4313-9184-16e08bc0e572-utilities\") pod \"community-operators-plkrb\" (UID: \"6a0bac2b-aef3-4313-9184-16e08bc0e572\") " pod="openshift-marketplace/community-operators-plkrb" Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.587598 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a0bac2b-aef3-4313-9184-16e08bc0e572-catalog-content\") pod \"community-operators-plkrb\" (UID: \"6a0bac2b-aef3-4313-9184-16e08bc0e572\") " pod="openshift-marketplace/community-operators-plkrb" Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.587626 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk4bk\" (UniqueName: \"kubernetes.io/projected/6a0bac2b-aef3-4313-9184-16e08bc0e572-kube-api-access-xk4bk\") pod \"community-operators-plkrb\" (UID: \"6a0bac2b-aef3-4313-9184-16e08bc0e572\") " pod="openshift-marketplace/community-operators-plkrb" Feb 02 21:22:10 crc kubenswrapper[4789]: E0202 21:22:10.587718 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:11.087701462 +0000 UTC m=+151.382726481 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.591263 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ee2bc38-213d-4181-8e23-0f579b87c986-config-volume" (OuterVolumeSpecName: "config-volume") pod "9ee2bc38-213d-4181-8e23-0f579b87c986" (UID: "9ee2bc38-213d-4181-8e23-0f579b87c986"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.592624 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ee2bc38-213d-4181-8e23-0f579b87c986-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9ee2bc38-213d-4181-8e23-0f579b87c986" (UID: "9ee2bc38-213d-4181-8e23-0f579b87c986"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.598797 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ee2bc38-213d-4181-8e23-0f579b87c986-kube-api-access-8vmsd" (OuterVolumeSpecName: "kube-api-access-8vmsd") pod "9ee2bc38-213d-4181-8e23-0f579b87c986" (UID: "9ee2bc38-213d-4181-8e23-0f579b87c986"). InnerVolumeSpecName "kube-api-access-8vmsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.648486 4789 util.go:30] "No sandbox for pod can be found. 
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.674199 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ltv7s"]
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.689238 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a0bac2b-aef3-4313-9184-16e08bc0e572-utilities\") pod \"community-operators-plkrb\" (UID: \"6a0bac2b-aef3-4313-9184-16e08bc0e572\") " pod="openshift-marketplace/community-operators-plkrb"
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.689330 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a0bac2b-aef3-4313-9184-16e08bc0e572-catalog-content\") pod \"community-operators-plkrb\" (UID: \"6a0bac2b-aef3-4313-9184-16e08bc0e572\") " pod="openshift-marketplace/community-operators-plkrb"
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.689374 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk4bk\" (UniqueName: \"kubernetes.io/projected/6a0bac2b-aef3-4313-9184-16e08bc0e572-kube-api-access-xk4bk\") pod \"community-operators-plkrb\" (UID: \"6a0bac2b-aef3-4313-9184-16e08bc0e572\") " pod="openshift-marketplace/community-operators-plkrb"
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.689421 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.690212 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vmsd\" (UniqueName: \"kubernetes.io/projected/9ee2bc38-213d-4181-8e23-0f579b87c986-kube-api-access-8vmsd\") on node \"crc\" DevicePath \"\""
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.690512 4789 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ee2bc38-213d-4181-8e23-0f579b87c986-config-volume\") on node \"crc\" DevicePath \"\""
Feb 02 21:22:10 crc kubenswrapper[4789]: E0202 21:22:10.690511 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:11.190490522 +0000 UTC m=+151.485515601 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.690555 4789 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ee2bc38-213d-4181-8e23-0f579b87c986-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.690506 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a0bac2b-aef3-4313-9184-16e08bc0e572-utilities\") pod \"community-operators-plkrb\" (UID: \"6a0bac2b-aef3-4313-9184-16e08bc0e572\") " pod="openshift-marketplace/community-operators-plkrb"
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.690706 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a0bac2b-aef3-4313-9184-16e08bc0e572-catalog-content\") pod \"community-operators-plkrb\" (UID: \"6a0bac2b-aef3-4313-9184-16e08bc0e572\") " pod="openshift-marketplace/community-operators-plkrb"
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.707244 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk4bk\" (UniqueName: \"kubernetes.io/projected/6a0bac2b-aef3-4313-9184-16e08bc0e572-kube-api-access-xk4bk\") pod \"community-operators-plkrb\" (UID: \"6a0bac2b-aef3-4313-9184-16e08bc0e572\") " pod="openshift-marketplace/community-operators-plkrb"
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.734986 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pnd42"]
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.735884 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pnd42"
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.743831 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pnd42"]
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.772798 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.773561 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.778092 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.778157 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.782900 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.794046 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:10 crc kubenswrapper[4789]: E0202 21:22:10.794479 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:11.294463846 +0000 UTC m=+151.589488865 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.805082 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/275ff536-c274-436e-89f2-a2c138f9857a-catalog-content\") pod \"certified-operators-pnd42\" (UID: \"275ff536-c274-436e-89f2-a2c138f9857a\") " pod="openshift-marketplace/certified-operators-pnd42"
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.817048 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7vkf\" (UniqueName: \"kubernetes.io/projected/275ff536-c274-436e-89f2-a2c138f9857a-kube-api-access-g7vkf\") pod \"certified-operators-pnd42\" (UID: \"275ff536-c274-436e-89f2-a2c138f9857a\") " pod="openshift-marketplace/certified-operators-pnd42"
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.817190 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/275ff536-c274-436e-89f2-a2c138f9857a-utilities\") pod \"certified-operators-pnd42\" (UID: \"275ff536-c274-436e-89f2-a2c138f9857a\") " pod="openshift-marketplace/certified-operators-pnd42"
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.817241 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:10 crc kubenswrapper[4789]: E0202 21:22:10.817536 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:11.31752346 +0000 UTC m=+151.612548479 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.892249 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-plkrb"
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.910891 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f288q"]
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.918939 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.919119 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ae5cfe8-6155-4ae2-bd9b-0b2bcfac74fa-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3ae5cfe8-6155-4ae2-bd9b-0b2bcfac74fa\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.919181 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/275ff536-c274-436e-89f2-a2c138f9857a-catalog-content\") pod \"certified-operators-pnd42\" (UID: \"275ff536-c274-436e-89f2-a2c138f9857a\") " pod="openshift-marketplace/certified-operators-pnd42"
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.919233 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7vkf\" (UniqueName: \"kubernetes.io/projected/275ff536-c274-436e-89f2-a2c138f9857a-kube-api-access-g7vkf\") pod \"certified-operators-pnd42\" (UID: \"275ff536-c274-436e-89f2-a2c138f9857a\") " pod="openshift-marketplace/certified-operators-pnd42"
Feb 02 21:22:10 crc kubenswrapper[4789]: E0202 21:22:10.919310 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:11.41928167 +0000 UTC m=+151.714306689 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.919477 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/275ff536-c274-436e-89f2-a2c138f9857a-utilities\") pod \"certified-operators-pnd42\" (UID: \"275ff536-c274-436e-89f2-a2c138f9857a\") " pod="openshift-marketplace/certified-operators-pnd42"
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.919512 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ae5cfe8-6155-4ae2-bd9b-0b2bcfac74fa-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3ae5cfe8-6155-4ae2-bd9b-0b2bcfac74fa\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.920064 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/275ff536-c274-436e-89f2-a2c138f9857a-utilities\") pod \"certified-operators-pnd42\" (UID: \"275ff536-c274-436e-89f2-a2c138f9857a\") " pod="openshift-marketplace/certified-operators-pnd42"
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.920263 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/275ff536-c274-436e-89f2-a2c138f9857a-catalog-content\") pod \"certified-operators-pnd42\" (UID: \"275ff536-c274-436e-89f2-a2c138f9857a\") " pod="openshift-marketplace/certified-operators-pnd42"
Feb 02 21:22:10 crc kubenswrapper[4789]: W0202 21:22:10.920287 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod750c480c_359c_47cc_9cc1_72c36bc5c783.slice/crio-8b5df91df219f7b29539d2e1ff442eb9b9a5aaefb0d16ccec37b5113b8f4eb46 WatchSource:0}: Error finding container 8b5df91df219f7b29539d2e1ff442eb9b9a5aaefb0d16ccec37b5113b8f4eb46: Status 404 returned error can't find the container with id 8b5df91df219f7b29539d2e1ff442eb9b9a5aaefb0d16ccec37b5113b8f4eb46
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.948204 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7vkf\" (UniqueName: \"kubernetes.io/projected/275ff536-c274-436e-89f2-a2c138f9857a-kube-api-access-g7vkf\") pod \"certified-operators-pnd42\" (UID: \"275ff536-c274-436e-89f2-a2c138f9857a\") " pod="openshift-marketplace/certified-operators-pnd42"
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.954298 4789 patch_prober.go:28] interesting pod/router-default-5444994796-jwp46 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 21:22:10 crc kubenswrapper[4789]: [-]has-synced failed: reason withheld
Feb 02 21:22:10 crc kubenswrapper[4789]: [+]process-running ok
Feb 02 21:22:10 crc kubenswrapper[4789]: healthz check failed
Feb 02 21:22:10 crc kubenswrapper[4789]: I0202 21:22:10.954510 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwp46" podUID="b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 21:22:11 crc kubenswrapper[4789]: I0202 21:22:11.020380 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ae5cfe8-6155-4ae2-bd9b-0b2bcfac74fa-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3ae5cfe8-6155-4ae2-bd9b-0b2bcfac74fa\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 21:22:11 crc kubenswrapper[4789]: I0202 21:22:11.020537 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ae5cfe8-6155-4ae2-bd9b-0b2bcfac74fa-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3ae5cfe8-6155-4ae2-bd9b-0b2bcfac74fa\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 21:22:11 crc kubenswrapper[4789]: I0202 21:22:11.020417 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ae5cfe8-6155-4ae2-bd9b-0b2bcfac74fa-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3ae5cfe8-6155-4ae2-bd9b-0b2bcfac74fa\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 21:22:11 crc kubenswrapper[4789]: I0202 21:22:11.020634 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:11 crc kubenswrapper[4789]: E0202 21:22:11.020873 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:11.520862894 +0000 UTC m=+151.815887913 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:11 crc kubenswrapper[4789]: I0202 21:22:11.037334 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ae5cfe8-6155-4ae2-bd9b-0b2bcfac74fa-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3ae5cfe8-6155-4ae2-bd9b-0b2bcfac74fa\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 21:22:11 crc kubenswrapper[4789]: I0202 21:22:11.075840 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pnd42"
Feb 02 21:22:11 crc kubenswrapper[4789]: I0202 21:22:11.099319 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-plkrb"]
Feb 02 21:22:11 crc kubenswrapper[4789]: I0202 21:22:11.109544 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 21:22:11 crc kubenswrapper[4789]: I0202 21:22:11.121663 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:11 crc kubenswrapper[4789]: E0202 21:22:11.121994 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:11.621975346 +0000 UTC m=+151.917000365 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:11 crc kubenswrapper[4789]: I0202 21:22:11.223531 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:11 crc kubenswrapper[4789]: E0202 21:22:11.223827 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:11.723816258 +0000 UTC m=+152.018841277 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:11 crc kubenswrapper[4789]: I0202 21:22:11.324290 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:11 crc kubenswrapper[4789]: E0202 21:22:11.324565 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:11.824548239 +0000 UTC m=+152.119573258 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:11 crc kubenswrapper[4789]: I0202 21:22:11.359419 4789 generic.go:334] "Generic (PLEG): container finished" podID="82a7bf20-8db7-4d0f-91d4-a85ae5da91f5" containerID="cc7ce86839b24c0822b5912b8064ac65ca86e299b2dec7656e0916bb4c8d7c05" exitCode=0
Feb 02 21:22:11 crc kubenswrapper[4789]: I0202 21:22:11.359766 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ltv7s" event={"ID":"82a7bf20-8db7-4d0f-91d4-a85ae5da91f5","Type":"ContainerDied","Data":"cc7ce86839b24c0822b5912b8064ac65ca86e299b2dec7656e0916bb4c8d7c05"}
Feb 02 21:22:11 crc kubenswrapper[4789]: I0202 21:22:11.359794 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ltv7s" event={"ID":"82a7bf20-8db7-4d0f-91d4-a85ae5da91f5","Type":"ContainerStarted","Data":"f4244784b7875e587ec8991b6b1afb274bd5e6687bb0654f26fa6cacdeb20339"}
Feb 02 21:22:11 crc kubenswrapper[4789]: I0202 21:22:11.362164 4789 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 02 21:22:11 crc kubenswrapper[4789]: I0202 21:22:11.368893 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501115-zcgzz" event={"ID":"9ee2bc38-213d-4181-8e23-0f579b87c986","Type":"ContainerDied","Data":"8bafb44d8cfcbdc99217281c579a2e408aae4ddb1386258a29d1b3f4f994cfb9"}
Feb 02 21:22:11 crc kubenswrapper[4789]: I0202 21:22:11.368930 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bafb44d8cfcbdc99217281c579a2e408aae4ddb1386258a29d1b3f4f994cfb9"
Feb 02 21:22:11 crc kubenswrapper[4789]: I0202 21:22:11.369050 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501115-zcgzz"
Feb 02 21:22:11 crc kubenswrapper[4789]: I0202 21:22:11.376304 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pnd42"]
Feb 02 21:22:11 crc kubenswrapper[4789]: I0202 21:22:11.407696 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-plkrb" event={"ID":"6a0bac2b-aef3-4313-9184-16e08bc0e572","Type":"ContainerStarted","Data":"4dd4d33818a41df74b4ded52630d616776f69de22da4cd8d4bd328272ad08ea6"}
Feb 02 21:22:11 crc kubenswrapper[4789]: I0202 21:22:11.409557 4789 generic.go:334] "Generic (PLEG): container finished" podID="750c480c-359c-47cc-9cc1-72c36bc5c783" containerID="70c86c8637f3df270c79ff053b0ebd691278cf835acb12e8e71ff091b0fbb12a" exitCode=0
Feb 02 21:22:11 crc kubenswrapper[4789]: I0202 21:22:11.409620 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f288q" event={"ID":"750c480c-359c-47cc-9cc1-72c36bc5c783","Type":"ContainerDied","Data":"70c86c8637f3df270c79ff053b0ebd691278cf835acb12e8e71ff091b0fbb12a"}
Feb 02 21:22:11 crc kubenswrapper[4789]: I0202 21:22:11.409638 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f288q" event={"ID":"750c480c-359c-47cc-9cc1-72c36bc5c783","Type":"ContainerStarted","Data":"8b5df91df219f7b29539d2e1ff442eb9b9a5aaefb0d16ccec37b5113b8f4eb46"}
Feb 02 21:22:11 crc kubenswrapper[4789]: I0202 21:22:11.411362 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"5be97ddc87a043e12d9727b0dadcba953b99713a828e0107e7cb5fe7d0bcf867"}
Feb 02 21:22:11 crc kubenswrapper[4789]: I0202 21:22:11.440741 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"90f04a302e84eba3f12a34c964a96a14686e040395aa8ada2769d9b5e0658276"}
Feb 02 21:22:11 crc kubenswrapper[4789]: I0202 21:22:11.440798 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 21:22:11 crc kubenswrapper[4789]: I0202 21:22:11.448536 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:11 crc kubenswrapper[4789]: E0202 21:22:11.450476 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:11.950460634 +0000 UTC m=+152.245485653 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:11 crc kubenswrapper[4789]: I0202 21:22:11.487116 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"556b4043fc09ef982c2c91266f3d2b0f701fe13208f036a5fe79b3100083caff"}
Feb 02 21:22:11 crc kubenswrapper[4789]: I0202 21:22:11.538851 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-4qf8w"
Feb 02 21:22:11 crc kubenswrapper[4789]: I0202 21:22:11.549478 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-4qf8w"
Feb 02 21:22:11 crc kubenswrapper[4789]: I0202 21:22:11.566619 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:11 crc kubenswrapper[4789]: E0202 21:22:11.567831 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:12.067813213 +0000 UTC m=+152.362838232 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:11 crc kubenswrapper[4789]: E0202 21:22:11.600113 4789 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a0bac2b_aef3_4313_9184_16e08bc0e572.slice/crio-a4c04810899e44a97ca0f7eace2668519fa19114ff170a61b3f7620bc94541a8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ee2bc38_213d_4181_8e23_0f579b87c986.slice\": RecentStats: unable to find data in memory cache]"
Feb 02 21:22:11 crc kubenswrapper[4789]: I0202 21:22:11.670294 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:11 crc kubenswrapper[4789]: E0202 21:22:11.671103 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:12.171092077 +0000 UTC m=+152.466117096 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:11 crc kubenswrapper[4789]: I0202 21:22:11.737985 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 02 21:22:11 crc kubenswrapper[4789]: W0202 21:22:11.746419 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3ae5cfe8_6155_4ae2_bd9b_0b2bcfac74fa.slice/crio-0b45a290a55b28f94af90d1a13911faf0d820afc4eb2cb8772affae882110ac5 WatchSource:0}: Error finding container 0b45a290a55b28f94af90d1a13911faf0d820afc4eb2cb8772affae882110ac5: Status 404 returned error can't find the container with id 0b45a290a55b28f94af90d1a13911faf0d820afc4eb2cb8772affae882110ac5
Feb 02 21:22:11 crc kubenswrapper[4789]: I0202 21:22:11.771700 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:11 crc kubenswrapper[4789]: E0202 21:22:11.771897 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:12.271873199 +0000 UTC m=+152.566898218 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:11 crc kubenswrapper[4789]: I0202 21:22:11.771995 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:11 crc kubenswrapper[4789]: E0202 21:22:11.772386 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:12.272365413 +0000 UTC m=+152.567390432 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:11 crc kubenswrapper[4789]: I0202 21:22:11.873114 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:11 crc kubenswrapper[4789]: E0202 21:22:11.873322 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:12.373290879 +0000 UTC m=+152.668315898 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:11 crc kubenswrapper[4789]: I0202 21:22:11.873408 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:11 crc kubenswrapper[4789]: E0202 21:22:11.873792 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:12.373781003 +0000 UTC m=+152.668806102 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:11 crc kubenswrapper[4789]: I0202 21:22:11.961597 4789 patch_prober.go:28] interesting pod/router-default-5444994796-jwp46 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 21:22:11 crc kubenswrapper[4789]: [-]has-synced failed: reason withheld
Feb 02 21:22:11 crc kubenswrapper[4789]: [+]process-running ok
Feb 02 21:22:11 crc kubenswrapper[4789]: healthz check failed
Feb 02 21:22:11 crc kubenswrapper[4789]: I0202 21:22:11.961657 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwp46" podUID="b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 21:22:11 crc kubenswrapper[4789]: I0202 21:22:11.974524 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:11 crc kubenswrapper[4789]: E0202 21:22:11.974719 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:12.474680298 +0000 UTC m=+152.769705317 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:11 crc kubenswrapper[4789]: I0202 21:22:11.974812 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:11 crc kubenswrapper[4789]: E0202 21:22:11.975095 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:12.47507817 +0000 UTC m=+152.770103189 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.076060 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:12 crc kubenswrapper[4789]: E0202 21:22:12.076235 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:12.576211802 +0000 UTC m=+152.871236821 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.076440 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:12 crc kubenswrapper[4789]: E0202 21:22:12.076746 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:12.576738167 +0000 UTC m=+152.871763186 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.118781 4789 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-25rjx container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 02 21:22:12 crc kubenswrapper[4789]: [+]log ok
Feb 02 21:22:12 crc kubenswrapper[4789]: [-]poststarthook/max-in-flight-filter failed: reason withheld
Feb 02 21:22:12 crc kubenswrapper[4789]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 02 21:22:12 crc kubenswrapper[4789]: healthz check failed
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.118850 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-25rjx" podUID="0d7fbad6-8c3b-4d05-8ca4-ef9f3f39ec4f" containerName="openshift-config-operator" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.126365 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-25rjx"
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.177242 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:12 crc kubenswrapper[4789]: E0202 21:22:12.177540 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:12.677523588 +0000 UTC m=+152.972548607 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.278439 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:12 crc kubenswrapper[4789]: E0202 21:22:12.279946 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:12.779928216 +0000 UTC m=+153.074953255 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.336590 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mv4vf"]
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.337483 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mv4vf"
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.341805 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.347072 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mv4vf"]
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.379132 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:12 crc kubenswrapper[4789]: E0202 21:22:12.379370 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:12.879341089 +0000 UTC m=+153.174366098 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.379770 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:12 crc kubenswrapper[4789]: E0202 21:22:12.380195 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:12.880179683 +0000 UTC m=+153.175204702 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.480431 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.480617 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9kns\" (UniqueName: \"kubernetes.io/projected/59a1480d-000f-481e-b287-78e39812c69b-kube-api-access-b9kns\") pod \"redhat-marketplace-mv4vf\" (UID: \"59a1480d-000f-481e-b287-78e39812c69b\") " pod="openshift-marketplace/redhat-marketplace-mv4vf"
Feb 02 21:22:12 crc kubenswrapper[4789]: E0202 21:22:12.480637 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:12.980612455 +0000 UTC m=+153.275637474 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.480664 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59a1480d-000f-481e-b287-78e39812c69b-catalog-content\") pod \"redhat-marketplace-mv4vf\" (UID: \"59a1480d-000f-481e-b287-78e39812c69b\") " pod="openshift-marketplace/redhat-marketplace-mv4vf"
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.480819 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59a1480d-000f-481e-b287-78e39812c69b-utilities\") pod \"redhat-marketplace-mv4vf\" (UID: \"59a1480d-000f-481e-b287-78e39812c69b\") " pod="openshift-marketplace/redhat-marketplace-mv4vf"
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.480875 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:12 crc kubenswrapper[4789]: E0202 21:22:12.481122 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:12.981111039 +0000 UTC m=+153.276136058 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.491506 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3ae5cfe8-6155-4ae2-bd9b-0b2bcfac74fa","Type":"ContainerStarted","Data":"3d256163b85b2981f9936420ae12a7bcaf62075e68c4b71f4672ede06ca55856"}
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.491671 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3ae5cfe8-6155-4ae2-bd9b-0b2bcfac74fa","Type":"ContainerStarted","Data":"0b45a290a55b28f94af90d1a13911faf0d820afc4eb2cb8772affae882110ac5"}
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.492933 4789 generic.go:334] "Generic (PLEG): container finished" podID="6a0bac2b-aef3-4313-9184-16e08bc0e572" containerID="a4c04810899e44a97ca0f7eace2668519fa19114ff170a61b3f7620bc94541a8" exitCode=0
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.493045 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-plkrb" event={"ID":"6a0bac2b-aef3-4313-9184-16e08bc0e572","Type":"ContainerDied","Data":"a4c04810899e44a97ca0f7eace2668519fa19114ff170a61b3f7620bc94541a8"}
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.496463 4789 generic.go:334] "Generic (PLEG): container finished" podID="275ff536-c274-436e-89f2-a2c138f9857a" containerID="233b02ad9b48ca4ea49cdc1f7dd4cecc28c3ecf9b574869823c5aca9ea12ba80" exitCode=0
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.497849 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pnd42" event={"ID":"275ff536-c274-436e-89f2-a2c138f9857a","Type":"ContainerDied","Data":"233b02ad9b48ca4ea49cdc1f7dd4cecc28c3ecf9b574869823c5aca9ea12ba80"}
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.497926 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pnd42" event={"ID":"275ff536-c274-436e-89f2-a2c138f9857a","Type":"ContainerStarted","Data":"9834d3b4ccd99791cc5938177aa698d33a9671e34c16c654c3e9e4791f3ef93e"}
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.582059 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:12 crc kubenswrapper[4789]: E0202 21:22:12.582257 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:13.08222604 +0000 UTC m=+153.377251069 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.582641 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9kns\" (UniqueName: \"kubernetes.io/projected/59a1480d-000f-481e-b287-78e39812c69b-kube-api-access-b9kns\") pod \"redhat-marketplace-mv4vf\" (UID: \"59a1480d-000f-481e-b287-78e39812c69b\") " pod="openshift-marketplace/redhat-marketplace-mv4vf" Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.583120 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59a1480d-000f-481e-b287-78e39812c69b-catalog-content\") pod \"redhat-marketplace-mv4vf\" (UID: \"59a1480d-000f-481e-b287-78e39812c69b\") " pod="openshift-marketplace/redhat-marketplace-mv4vf" Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.583805 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59a1480d-000f-481e-b287-78e39812c69b-utilities\") pod \"redhat-marketplace-mv4vf\" (UID: \"59a1480d-000f-481e-b287-78e39812c69b\") " pod="openshift-marketplace/redhat-marketplace-mv4vf" Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.584445 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.584327 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59a1480d-000f-481e-b287-78e39812c69b-utilities\") pod \"redhat-marketplace-mv4vf\" (UID: \"59a1480d-000f-481e-b287-78e39812c69b\") " pod="openshift-marketplace/redhat-marketplace-mv4vf" Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.583757 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59a1480d-000f-481e-b287-78e39812c69b-catalog-content\") pod \"redhat-marketplace-mv4vf\" (UID: \"59a1480d-000f-481e-b287-78e39812c69b\") " pod="openshift-marketplace/redhat-marketplace-mv4vf" Feb 02 21:22:12 crc kubenswrapper[4789]: E0202 21:22:12.584845 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:13.084831576 +0000 UTC m=+153.379856605 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.601359 4789 patch_prober.go:28] interesting pod/downloads-7954f5f757-dd7g5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.601411 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dd7g5" podUID="899bce18-bfcc-42b8-ab5e-149d16e8eddb" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.601366 4789 patch_prober.go:28] interesting pod/downloads-7954f5f757-dd7g5 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.601625 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-dd7g5" podUID="899bce18-bfcc-42b8-ab5e-149d16e8eddb" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.616364 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9kns\" (UniqueName: \"kubernetes.io/projected/59a1480d-000f-481e-b287-78e39812c69b-kube-api-access-b9kns\") pod \"redhat-marketplace-mv4vf\" (UID: \"59a1480d-000f-481e-b287-78e39812c69b\") " pod="openshift-marketplace/redhat-marketplace-mv4vf" Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.623041 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p" Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.649955 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mv4vf" Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.686621 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:12 crc kubenswrapper[4789]: E0202 21:22:12.686947 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:13.186916685 +0000 UTC m=+153.481941714 (durationBeforeRetry 500ms). 
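[Annotation] The records above all trace back to one condition: the kubelet cannot find kubevirt.io.hostpath-provisioner among its registered CSI plugins, so each UnmountVolume.TearDown for the terminated pod (UID 8f668bae-...) and each MountVolume.MountDevice for image-registry-697d97f7c8-x2wtg fails at the CSI client lookup, and nestedpendingoperations re-queues the operation with a 500ms durationBeforeRetry. A minimal Python sketch (an assumed analysis helper, not part of the kubelet or the CI tooling) that pulls the retry deadlines out of lines in this format to confirm the cadence:

    import re

    # Matches the kubelet back-off message, e.g.
    # "No retries permitted until 2026-02-02 21:22:12.981111039 +0000 UTC ... (durationBeforeRetry 500ms)"
    RETRY_RE = re.compile(
        r"No retries permitted until (\S+ \S+) .*?\(durationBeforeRetry (\d+)ms\)"
    )

    def retry_deadlines(log_text):
        """Yield (deadline string, back-off in ms) for each re-queued volume operation."""
        for m in RETRY_RE.finditer(log_text):
            yield m.group(1), int(m.group(2))

Applied to this excerpt it should show the same volume being re-queued roughly every 100ms of reconciler wall-clock time, each attempt deferred by a further 500ms.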
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.687337    4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:12 crc kubenswrapper[4789]: E0202 21:22:12.690488    4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:13.190470587 +0000 UTC m=+153.485495616 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.731250    4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bxbt7"]
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.732406    4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bxbt7"
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.745222    4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bxbt7"]
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.773122    4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hcxsv"
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.806606    4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:12 crc kubenswrapper[4789]: E0202 21:22:12.806724    4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:13.306703414 +0000 UTC m=+153.601728443 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.807164    4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.807358    4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/564193d6-a25b-478d-8957-54183764c6d7-catalog-content\") pod \"redhat-marketplace-bxbt7\" (UID: \"564193d6-a25b-478d-8957-54183764c6d7\") " pod="openshift-marketplace/redhat-marketplace-bxbt7"
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.807415    4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/564193d6-a25b-478d-8957-54183764c6d7-utilities\") pod \"redhat-marketplace-bxbt7\" (UID: \"564193d6-a25b-478d-8957-54183764c6d7\") " pod="openshift-marketplace/redhat-marketplace-bxbt7"
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.807446    4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h9rt\" (UniqueName: \"kubernetes.io/projected/564193d6-a25b-478d-8957-54183764c6d7-kube-api-access-8h9rt\") pod \"redhat-marketplace-bxbt7\" (UID: \"564193d6-a25b-478d-8957-54183764c6d7\") " pod="openshift-marketplace/redhat-marketplace-bxbt7"
Feb 02 21:22:12 crc kubenswrapper[4789]: E0202 21:22:12.809265    4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:13.309240107 +0000 UTC m=+153.604265126 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.872459    4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-x568j"
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.872830    4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-x568j"
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.875241    4789 patch_prober.go:28] interesting pod/console-f9d7485db-x568j container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body=
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.875299    4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-x568j" podUID="a7cb5e21-a1f6-4e35-bf0d-e709b16d6994" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused"
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.899440    4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mv4vf"]
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.908670    4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:12 crc kubenswrapper[4789]: E0202 21:22:12.908868    4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:13.408842165 +0000 UTC m=+153.703867184 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.908996    4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/564193d6-a25b-478d-8957-54183764c6d7-catalog-content\") pod \"redhat-marketplace-bxbt7\" (UID: \"564193d6-a25b-478d-8957-54183764c6d7\") " pod="openshift-marketplace/redhat-marketplace-bxbt7"
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.909041    4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/564193d6-a25b-478d-8957-54183764c6d7-utilities\") pod \"redhat-marketplace-bxbt7\" (UID: \"564193d6-a25b-478d-8957-54183764c6d7\") " pod="openshift-marketplace/redhat-marketplace-bxbt7"
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.909071    4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h9rt\" (UniqueName: \"kubernetes.io/projected/564193d6-a25b-478d-8957-54183764c6d7-kube-api-access-8h9rt\") pod \"redhat-marketplace-bxbt7\" (UID: \"564193d6-a25b-478d-8957-54183764c6d7\") " pod="openshift-marketplace/redhat-marketplace-bxbt7"
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.909210    4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.909406    4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/564193d6-a25b-478d-8957-54183764c6d7-catalog-content\") pod \"redhat-marketplace-bxbt7\" (UID: \"564193d6-a25b-478d-8957-54183764c6d7\") " pod="openshift-marketplace/redhat-marketplace-bxbt7"
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.909437    4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/564193d6-a25b-478d-8957-54183764c6d7-utilities\") pod \"redhat-marketplace-bxbt7\" (UID: \"564193d6-a25b-478d-8957-54183764c6d7\") " pod="openshift-marketplace/redhat-marketplace-bxbt7"
Feb 02 21:22:12 crc kubenswrapper[4789]: E0202 21:22:12.909910    4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:13.409889885 +0000 UTC m=+153.704914924 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:12 crc kubenswrapper[4789]: W0202 21:22:12.921460    4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59a1480d_000f_481e_b287_78e39812c69b.slice/crio-3674216b6c8bcce1577500b22c8276193896fdc7edea55471197014ec86052f2 WatchSource:0}: Error finding container 3674216b6c8bcce1577500b22c8276193896fdc7edea55471197014ec86052f2: Status 404 returned error can't find the container with id 3674216b6c8bcce1577500b22c8276193896fdc7edea55471197014ec86052f2
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.931922    4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h9rt\" (UniqueName: \"kubernetes.io/projected/564193d6-a25b-478d-8957-54183764c6d7-kube-api-access-8h9rt\") pod \"redhat-marketplace-bxbt7\" (UID: \"564193d6-a25b-478d-8957-54183764c6d7\") " pod="openshift-marketplace/redhat-marketplace-bxbt7"
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.945015    4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-8787r"
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.952246    4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-jwp46"
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.958734    4789 patch_prober.go:28] interesting pod/router-default-5444994796-jwp46 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 21:22:12 crc kubenswrapper[4789]: [+]has-synced ok
Feb 02 21:22:12 crc kubenswrapper[4789]: [+]process-running ok
Feb 02 21:22:12 crc kubenswrapper[4789]: healthz check failed
Feb 02 21:22:12 crc kubenswrapper[4789]: I0202 21:22:12.958779    4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwp46" podUID="b1a5cd8a-fb52-47ab-8aff-64e621d9b8e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.010020    4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:13 crc kubenswrapper[4789]: E0202 21:22:13.010419    4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:13.510383419 +0000 UTC m=+153.805408448 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.062168    4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bxbt7"
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.112524    4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:13 crc kubenswrapper[4789]: E0202 21:22:13.112946    4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:13.612927621 +0000 UTC m=+153.907952640 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.229495    4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:13 crc kubenswrapper[4789]: E0202 21:22:13.231899    4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:13.731873516 +0000 UTC m=+154.026898535 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.233139    4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:13 crc kubenswrapper[4789]: E0202 21:22:13.233547    4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:13.733525794 +0000 UTC m=+154.028550913 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.268295    4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hc84b"
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.333956    4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:13 crc kubenswrapper[4789]: E0202 21:22:13.334282    4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:13.834258274 +0000 UTC m=+154.129283293 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.334822    4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cjz6p"]
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.336125    4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cjz6p"
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.338666    4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.345472    4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cjz6p"]
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.397318    4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7sn8m"
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.405391    4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-7sn8m"
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.437266    4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81c40f7e-bff1-432e-95e4-dfeeba942abc-catalog-content\") pod \"redhat-operators-cjz6p\" (UID: \"81c40f7e-bff1-432e-95e4-dfeeba942abc\") " pod="openshift-marketplace/redhat-operators-cjz6p"
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.437340    4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xngdn\" (UniqueName: \"kubernetes.io/projected/81c40f7e-bff1-432e-95e4-dfeeba942abc-kube-api-access-xngdn\") pod \"redhat-operators-cjz6p\" (UID: \"81c40f7e-bff1-432e-95e4-dfeeba942abc\") " pod="openshift-marketplace/redhat-operators-cjz6p"
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.437371    4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81c40f7e-bff1-432e-95e4-dfeeba942abc-utilities\") pod \"redhat-operators-cjz6p\" (UID: \"81c40f7e-bff1-432e-95e4-dfeeba942abc\") " pod="openshift-marketplace/redhat-operators-cjz6p"
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.437398    4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:13 crc kubenswrapper[4789]: E0202 21:22:13.437671    4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:13.937660391 +0000 UTC m=+154.232685410 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.512150    4789 generic.go:334] "Generic (PLEG): container finished" podID="59a1480d-000f-481e-b287-78e39812c69b" containerID="ae5c42fd421d97a919cfd4f5b8b12d5a58d19b41d5cc7fe4134f05a7d4819734" exitCode=0
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.512219    4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mv4vf" event={"ID":"59a1480d-000f-481e-b287-78e39812c69b","Type":"ContainerDied","Data":"ae5c42fd421d97a919cfd4f5b8b12d5a58d19b41d5cc7fe4134f05a7d4819734"}
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.512248    4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mv4vf" event={"ID":"59a1480d-000f-481e-b287-78e39812c69b","Type":"ContainerStarted","Data":"3674216b6c8bcce1577500b22c8276193896fdc7edea55471197014ec86052f2"}
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.530922    4789 generic.go:334] "Generic (PLEG): container finished" podID="3ae5cfe8-6155-4ae2-bd9b-0b2bcfac74fa" containerID="3d256163b85b2981f9936420ae12a7bcaf62075e68c4b71f4672ede06ca55856" exitCode=0
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.530986    4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3ae5cfe8-6155-4ae2-bd9b-0b2bcfac74fa","Type":"ContainerDied","Data":"3d256163b85b2981f9936420ae12a7bcaf62075e68c4b71f4672ede06ca55856"}
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.540016    4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.540267    4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81c40f7e-bff1-432e-95e4-dfeeba942abc-catalog-content\") pod \"redhat-operators-cjz6p\" (UID: \"81c40f7e-bff1-432e-95e4-dfeeba942abc\") " pod="openshift-marketplace/redhat-operators-cjz6p"
Feb 02 21:22:13 crc kubenswrapper[4789]: E0202 21:22:13.540318    4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:14.040283836 +0000 UTC m=+154.335308855 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.540355    4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xngdn\" (UniqueName: \"kubernetes.io/projected/81c40f7e-bff1-432e-95e4-dfeeba942abc-kube-api-access-xngdn\") pod \"redhat-operators-cjz6p\" (UID: \"81c40f7e-bff1-432e-95e4-dfeeba942abc\") " pod="openshift-marketplace/redhat-operators-cjz6p"
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.540414    4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81c40f7e-bff1-432e-95e4-dfeeba942abc-utilities\") pod \"redhat-operators-cjz6p\" (UID: \"81c40f7e-bff1-432e-95e4-dfeeba942abc\") " pod="openshift-marketplace/redhat-operators-cjz6p"
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.540487    4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.540662    4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81c40f7e-bff1-432e-95e4-dfeeba942abc-catalog-content\") pod \"redhat-operators-cjz6p\" (UID: \"81c40f7e-bff1-432e-95e4-dfeeba942abc\") " pod="openshift-marketplace/redhat-operators-cjz6p"
Feb 02 21:22:13 crc kubenswrapper[4789]: E0202 21:22:13.540806    4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:14.040798311 +0000 UTC m=+154.335823330 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.540812    4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81c40f7e-bff1-432e-95e4-dfeeba942abc-utilities\") pod \"redhat-operators-cjz6p\" (UID: \"81c40f7e-bff1-432e-95e4-dfeeba942abc\") " pod="openshift-marketplace/redhat-operators-cjz6p"
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.572178    4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xngdn\" (UniqueName: \"kubernetes.io/projected/81c40f7e-bff1-432e-95e4-dfeeba942abc-kube-api-access-xngdn\") pod \"redhat-operators-cjz6p\" (UID: \"81c40f7e-bff1-432e-95e4-dfeeba942abc\") " pod="openshift-marketplace/redhat-operators-cjz6p"
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.591371    4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bxbt7"]
Feb 02 21:22:13 crc kubenswrapper[4789]: W0202 21:22:13.604298    4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod564193d6_a25b_478d_8957_54183764c6d7.slice/crio-792c31f09013898876009a97025591e1596b900b21c8ed72ab7c3a53b3034156 WatchSource:0}: Error finding container 792c31f09013898876009a97025591e1596b900b21c8ed72ab7c3a53b3034156: Status 404 returned error can't find the container with id 792c31f09013898876009a97025591e1596b900b21c8ed72ab7c3a53b3034156
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.641038    4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:13 crc kubenswrapper[4789]: E0202 21:22:13.641162    4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:14.14114362 +0000 UTC m=+154.436168639 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.641351    4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:13 crc kubenswrapper[4789]: E0202 21:22:13.641979    4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:14.141970774 +0000 UTC m=+154.436995793 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.707690    4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cjz6p"
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.731376    4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s96wn"]
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.732566    4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s96wn"
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.742648    4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:13 crc kubenswrapper[4789]: E0202 21:22:13.743081    4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:14.243065385 +0000 UTC m=+154.538090404 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
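[Annotation] The driver name in these messages is the one a CSI node plugin announces to the kubelet over its plugin-registration mechanism; until the hostpath-provisioner plugin on this node (re)registers, every CSI client lookup fails the same way no matter how often the reconciler retries. On a live cluster the registered names can be checked with kubectl get csinode crc -o jsonpath='{.spec.drivers[*].name}'. A small sketch, same assumptions as the helper above, that groups the failures by operation and volume to show this is one PVC churning rather than many:

    import re
    from collections import Counter

    # Counts (operation, volume) pairs for the kubelet volume errors above.
    OP_RE = re.compile(r'Error: (\w+\.\w+) failed for volume "(pvc-[0-9a-f-]+)"')

    def failing_ops(log_text):
        return Counter(OP_RE.findall(log_text))

For this excerpt the counter should contain only UnmountVolume.TearDown and MountVolume.MountDevice entries, both for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8.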
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.744818    4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s96wn"]
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.844388    4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5-utilities\") pod \"redhat-operators-s96wn\" (UID: \"59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5\") " pod="openshift-marketplace/redhat-operators-s96wn"
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.844440    4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjh2r\" (UniqueName: \"kubernetes.io/projected/59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5-kube-api-access-cjh2r\") pod \"redhat-operators-s96wn\" (UID: \"59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5\") " pod="openshift-marketplace/redhat-operators-s96wn"
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.844538    4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5-catalog-content\") pod \"redhat-operators-s96wn\" (UID: \"59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5\") " pod="openshift-marketplace/redhat-operators-s96wn"
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.844629    4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:13 crc kubenswrapper[4789]: E0202 21:22:13.844994    4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:14.34497751 +0000 UTC m=+154.640002529 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.945505    4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cjz6p"]
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.945657    4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:13 crc kubenswrapper[4789]: E0202 21:22:13.945857    4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:14.445838704 +0000 UTC m=+154.740863723 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.945994    4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5-utilities\") pod \"redhat-operators-s96wn\" (UID: \"59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5\") " pod="openshift-marketplace/redhat-operators-s96wn"
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.946038    4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjh2r\" (UniqueName: \"kubernetes.io/projected/59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5-kube-api-access-cjh2r\") pod \"redhat-operators-s96wn\" (UID: \"59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5\") " pod="openshift-marketplace/redhat-operators-s96wn"
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.946113    4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5-catalog-content\") pod \"redhat-operators-s96wn\" (UID: \"59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5\") " pod="openshift-marketplace/redhat-operators-s96wn"
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.946182    4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.946686    4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5-utilities\") pod \"redhat-operators-s96wn\" (UID: \"59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5\") " pod="openshift-marketplace/redhat-operators-s96wn"
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.946817    4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5-catalog-content\") pod \"redhat-operators-s96wn\" (UID: \"59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5\") " pod="openshift-marketplace/redhat-operators-s96wn"
Feb 02 21:22:13 crc kubenswrapper[4789]: E0202 21:22:13.947089    4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:14.447078759 +0000 UTC m=+154.742103778 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.956822    4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-jwp46"
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.960804    4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-jwp46"
Feb 02 21:22:13 crc kubenswrapper[4789]: I0202 21:22:13.967340    4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjh2r\" (UniqueName: \"kubernetes.io/projected/59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5-kube-api-access-cjh2r\") pod \"redhat-operators-s96wn\" (UID: \"59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5\") " pod="openshift-marketplace/redhat-operators-s96wn"
Feb 02 21:22:14 crc kubenswrapper[4789]: I0202 21:22:14.047629    4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:14 crc kubenswrapper[4789]: E0202 21:22:14.048273    4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:14.548249572 +0000 UTC m=+154.843274591 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:14 crc kubenswrapper[4789]: I0202 21:22:14.050667    4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:14 crc kubenswrapper[4789]: E0202 21:22:14.051296    4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:14.5512853 +0000 UTC m=+154.846310319 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:14 crc kubenswrapper[4789]: I0202 21:22:14.051986    4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s96wn"
Feb 02 21:22:14 crc kubenswrapper[4789]: I0202 21:22:14.152249    4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:14 crc kubenswrapper[4789]: E0202 21:22:14.152659    4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:14.652645048 +0000 UTC m=+154.947670067 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:14 crc kubenswrapper[4789]: I0202 21:22:14.253825    4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:14 crc kubenswrapper[4789]: I0202 21:22:14.254006    4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s96wn"]
Feb 02 21:22:14 crc kubenswrapper[4789]: E0202 21:22:14.254868    4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:14.754854241 +0000 UTC m=+155.049879260 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:14 crc kubenswrapper[4789]: I0202 21:22:14.355223    4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:14 crc kubenswrapper[4789]: E0202 21:22:14.355544    4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:14.85552948 +0000 UTC m=+155.150554499 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:14 crc kubenswrapper[4789]: I0202 21:22:14.456885    4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:14 crc kubenswrapper[4789]: E0202 21:22:14.457229    4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:14.957212798 +0000 UTC m=+155.252237817 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:14 crc kubenswrapper[4789]: I0202 21:22:14.542732    4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cjz6p" event={"ID":"81c40f7e-bff1-432e-95e4-dfeeba942abc","Type":"ContainerStarted","Data":"aa82d38cd7d5694d19bc10855518627433f21b7a691f629a04542c46888cb279"}
Feb 02 21:22:14 crc kubenswrapper[4789]: I0202 21:22:14.546455    4789 generic.go:334] "Generic (PLEG): container finished" podID="564193d6-a25b-478d-8957-54183764c6d7" containerID="fb33dd91e892706094be7c996e3f139340d946e243f38261286fda06b1d389e7" exitCode=0
Feb 02 21:22:14 crc kubenswrapper[4789]: I0202 21:22:14.546534    4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bxbt7" event={"ID":"564193d6-a25b-478d-8957-54183764c6d7","Type":"ContainerDied","Data":"fb33dd91e892706094be7c996e3f139340d946e243f38261286fda06b1d389e7"}
Feb 02 21:22:14 crc kubenswrapper[4789]: I0202 21:22:14.546622    4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bxbt7" event={"ID":"564193d6-a25b-478d-8957-54183764c6d7","Type":"ContainerStarted","Data":"792c31f09013898876009a97025591e1596b900b21c8ed72ab7c3a53b3034156"}
Feb 02 21:22:14 crc kubenswrapper[4789]: I0202 21:22:14.550515    4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s96wn" event={"ID":"59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5","Type":"ContainerStarted","Data":"a4ae250cef40aa3a3b4063417e9d167076a5a4657b66d5c9b599c8b2bd146455"}
Feb 02 21:22:14 crc kubenswrapper[4789]: I0202 21:22:14.559224    4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:14 crc kubenswrapper[4789]: E0202 21:22:14.559489    4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:15.059475432 +0000 UTC m=+155.354500451 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:14 crc kubenswrapper[4789]: I0202 21:22:14.661007    4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:14 crc kubenswrapper[4789]: E0202 21:22:14.661502    4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:15.16148495 +0000 UTC m=+155.456509969 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:14 crc kubenswrapper[4789]: I0202 21:22:14.762012    4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:14 crc kubenswrapper[4789]: E0202 21:22:14.762237    4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:15.26220305 +0000 UTC m=+155.557228069 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:14 crc kubenswrapper[4789]: I0202 21:22:14.762269    4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg"
Feb 02 21:22:14 crc kubenswrapper[4789]: E0202 21:22:14.762604    4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:15.262592881 +0000 UTC m=+155.557617900 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 21:22:14 crc kubenswrapper[4789]: I0202 21:22:14.767065    4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 21:22:14 crc kubenswrapper[4789]: I0202 21:22:14.862968    4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ae5cfe8-6155-4ae2-bd9b-0b2bcfac74fa-kubelet-dir\") pod \"3ae5cfe8-6155-4ae2-bd9b-0b2bcfac74fa\" (UID: \"3ae5cfe8-6155-4ae2-bd9b-0b2bcfac74fa\") "
Feb 02 21:22:14 crc kubenswrapper[4789]: I0202 21:22:14.863716    4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ae5cfe8-6155-4ae2-bd9b-0b2bcfac74fa-kube-api-access\") pod \"3ae5cfe8-6155-4ae2-bd9b-0b2bcfac74fa\" (UID: \"3ae5cfe8-6155-4ae2-bd9b-0b2bcfac74fa\") "
Feb 02 21:22:14 crc kubenswrapper[4789]: I0202 21:22:14.863849    4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 21:22:14 crc kubenswrapper[4789]: I0202 21:22:14.863073    4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ae5cfe8-6155-4ae2-bd9b-0b2bcfac74fa-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3ae5cfe8-6155-4ae2-bd9b-0b2bcfac74fa" (UID: "3ae5cfe8-6155-4ae2-bd9b-0b2bcfac74fa"). InnerVolumeSpecName "kubelet-dir".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:22:14 crc kubenswrapper[4789]: E0202 21:22:14.864145 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:15.364133315 +0000 UTC m=+155.659158334 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:14 crc kubenswrapper[4789]: I0202 21:22:14.884980 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ae5cfe8-6155-4ae2-bd9b-0b2bcfac74fa-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3ae5cfe8-6155-4ae2-bd9b-0b2bcfac74fa" (UID: "3ae5cfe8-6155-4ae2-bd9b-0b2bcfac74fa"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:22:14 crc kubenswrapper[4789]: I0202 21:22:14.965480 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:14 crc kubenswrapper[4789]: I0202 21:22:14.965616 4789 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ae5cfe8-6155-4ae2-bd9b-0b2bcfac74fa-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 21:22:14 crc kubenswrapper[4789]: I0202 21:22:14.965631 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ae5cfe8-6155-4ae2-bd9b-0b2bcfac74fa-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 21:22:14 crc kubenswrapper[4789]: E0202 21:22:14.965928 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:15.465912365 +0000 UTC m=+155.760937384 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:15 crc kubenswrapper[4789]: I0202 21:22:15.066546 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:15 crc kubenswrapper[4789]: E0202 21:22:15.066732 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:15.566701947 +0000 UTC m=+155.861726966 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:15 crc kubenswrapper[4789]: I0202 21:22:15.067025 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:15 crc kubenswrapper[4789]: E0202 21:22:15.067341 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:15.567333105 +0000 UTC m=+155.862358124 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:15 crc kubenswrapper[4789]: I0202 21:22:15.168277 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:15 crc kubenswrapper[4789]: E0202 21:22:15.168376 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:15.668361264 +0000 UTC m=+155.963386283 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:15 crc kubenswrapper[4789]: I0202 21:22:15.168613 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:15 crc kubenswrapper[4789]: E0202 21:22:15.168897 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:15.668890429 +0000 UTC m=+155.963915438 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:15 crc kubenswrapper[4789]: I0202 21:22:15.270186 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:15 crc kubenswrapper[4789]: E0202 21:22:15.270354 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:15.77033015 +0000 UTC m=+156.065355169 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:15 crc kubenswrapper[4789]: I0202 21:22:15.270455 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:15 crc kubenswrapper[4789]: E0202 21:22:15.270807 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:15.770790493 +0000 UTC m=+156.065815512 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:15 crc kubenswrapper[4789]: I0202 21:22:15.371138 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:15 crc kubenswrapper[4789]: E0202 21:22:15.371353 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:15.871324067 +0000 UTC m=+156.166349086 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:15 crc kubenswrapper[4789]: I0202 21:22:15.371726 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:15 crc kubenswrapper[4789]: E0202 21:22:15.372029 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:15.872016537 +0000 UTC m=+156.167041556 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:15 crc kubenswrapper[4789]: I0202 21:22:15.475085 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:15 crc kubenswrapper[4789]: E0202 21:22:15.475288 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:15.97526742 +0000 UTC m=+156.270292439 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:15 crc kubenswrapper[4789]: I0202 21:22:15.475368 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:15 crc kubenswrapper[4789]: E0202 21:22:15.475763 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:15.975754164 +0000 UTC m=+156.270779183 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:15 crc kubenswrapper[4789]: I0202 21:22:15.556676 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 21:22:15 crc kubenswrapper[4789]: I0202 21:22:15.562386 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3ae5cfe8-6155-4ae2-bd9b-0b2bcfac74fa","Type":"ContainerDied","Data":"0b45a290a55b28f94af90d1a13911faf0d820afc4eb2cb8772affae882110ac5"} Feb 02 21:22:15 crc kubenswrapper[4789]: I0202 21:22:15.562529 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b45a290a55b28f94af90d1a13911faf0d820afc4eb2cb8772affae882110ac5" Feb 02 21:22:15 crc kubenswrapper[4789]: I0202 21:22:15.576947 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:15 crc kubenswrapper[4789]: E0202 21:22:15.580566 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:16.080547402 +0000 UTC m=+156.375572421 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:15 crc kubenswrapper[4789]: I0202 21:22:15.681899 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:15 crc kubenswrapper[4789]: E0202 21:22:15.682604 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:16.18259291 +0000 UTC m=+156.477617929 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:15 crc kubenswrapper[4789]: I0202 21:22:15.783100 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:15 crc kubenswrapper[4789]: E0202 21:22:15.783384 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:16.283371241 +0000 UTC m=+156.578396260 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:15 crc kubenswrapper[4789]: I0202 21:22:15.884893 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:15 crc kubenswrapper[4789]: E0202 21:22:15.885222 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:16.385210823 +0000 UTC m=+156.680235842 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:15 crc kubenswrapper[4789]: I0202 21:22:15.986161 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:15 crc kubenswrapper[4789]: E0202 21:22:15.986315 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:16.486293354 +0000 UTC m=+156.781318373 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:15 crc kubenswrapper[4789]: I0202 21:22:15.986378 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:15 crc kubenswrapper[4789]: E0202 21:22:15.986826 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:16.486810318 +0000 UTC m=+156.781835337 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:16 crc kubenswrapper[4789]: I0202 21:22:16.087790 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:16 crc kubenswrapper[4789]: E0202 21:22:16.087977 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:16.587952681 +0000 UTC m=+156.882977700 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:16 crc kubenswrapper[4789]: I0202 21:22:16.088033 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:16 crc kubenswrapper[4789]: E0202 21:22:16.088396 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:16.588384073 +0000 UTC m=+156.883409092 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:16 crc kubenswrapper[4789]: I0202 21:22:16.188562 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:16 crc kubenswrapper[4789]: E0202 21:22:16.188657 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:16.6886407 +0000 UTC m=+156.983665719 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:16 crc kubenswrapper[4789]: I0202 21:22:16.188868 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:16 crc kubenswrapper[4789]: E0202 21:22:16.189084 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:16.689077243 +0000 UTC m=+156.984102262 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:16 crc kubenswrapper[4789]: I0202 21:22:16.290357 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:16 crc kubenswrapper[4789]: E0202 21:22:16.290483 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:16.790459402 +0000 UTC m=+157.085484421 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:16 crc kubenswrapper[4789]: I0202 21:22:16.290550 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:16 crc kubenswrapper[4789]: E0202 21:22:16.291152 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:16.791140171 +0000 UTC m=+157.086165190 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:16 crc kubenswrapper[4789]: I0202 21:22:16.391182 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:16 crc kubenswrapper[4789]: E0202 21:22:16.391373 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:16.891341967 +0000 UTC m=+157.186366986 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:16 crc kubenswrapper[4789]: I0202 21:22:16.391837 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:16 crc kubenswrapper[4789]: E0202 21:22:16.392208 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:16.892198031 +0000 UTC m=+157.187223130 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:16 crc kubenswrapper[4789]: I0202 21:22:16.492487 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:16 crc kubenswrapper[4789]: E0202 21:22:16.492695 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:16.992653644 +0000 UTC m=+157.287678663 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:16 crc kubenswrapper[4789]: I0202 21:22:16.492849 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:16 crc kubenswrapper[4789]: E0202 21:22:16.493252 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:16.993244561 +0000 UTC m=+157.288269580 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:16 crc kubenswrapper[4789]: I0202 21:22:16.579821 4789 generic.go:334] "Generic (PLEG): container finished" podID="59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5" containerID="3d40b3a942f7b7975ce73830f98892a351c35d284d4e253c75cb08052db9c78d" exitCode=0 Feb 02 21:22:16 crc kubenswrapper[4789]: I0202 21:22:16.579905 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s96wn" event={"ID":"59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5","Type":"ContainerDied","Data":"3d40b3a942f7b7975ce73830f98892a351c35d284d4e253c75cb08052db9c78d"} Feb 02 21:22:16 crc kubenswrapper[4789]: I0202 21:22:16.582238 4789 generic.go:334] "Generic (PLEG): container finished" podID="81c40f7e-bff1-432e-95e4-dfeeba942abc" containerID="6bd97d95fbba2c3f0134efa321d5fe74ff2fa932015c2419f1c36d699153f8b7" exitCode=0 Feb 02 21:22:16 crc kubenswrapper[4789]: I0202 21:22:16.582295 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cjz6p" event={"ID":"81c40f7e-bff1-432e-95e4-dfeeba942abc","Type":"ContainerDied","Data":"6bd97d95fbba2c3f0134efa321d5fe74ff2fa932015c2419f1c36d699153f8b7"} Feb 02 21:22:16 crc kubenswrapper[4789]: I0202 21:22:16.585034 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dzvhk" event={"ID":"b2ad8f41-6fd7-4cc5-b8a1-38a886817c6c","Type":"ContainerStarted","Data":"f685ff3d5ccda04970cc3e58aa1ac9bf892fcc050f773edce920780a93af554d"} Feb 02 21:22:16 crc kubenswrapper[4789]: I0202 21:22:16.593956 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:16 crc kubenswrapper[4789]: E0202 21:22:16.594361 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:17.094344412 +0000 UTC m=+157.389369431 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:16 crc kubenswrapper[4789]: I0202 21:22:16.695549 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:16 crc kubenswrapper[4789]: E0202 21:22:16.695946 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:17.195928787 +0000 UTC m=+157.490953806 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:16 crc kubenswrapper[4789]: I0202 21:22:16.796677 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:16 crc kubenswrapper[4789]: E0202 21:22:16.796933 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:17.296855883 +0000 UTC m=+157.591880912 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:16 crc kubenswrapper[4789]: I0202 21:22:16.796994 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:16 crc kubenswrapper[4789]: E0202 21:22:16.797327 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:17.297310166 +0000 UTC m=+157.592335205 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:16 crc kubenswrapper[4789]: I0202 21:22:16.906629 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:16 crc kubenswrapper[4789]: E0202 21:22:16.906913 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:17.406881461 +0000 UTC m=+157.701906500 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:16 crc kubenswrapper[4789]: I0202 21:22:16.907161 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:16 crc kubenswrapper[4789]: E0202 21:22:16.907763 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:17.407745666 +0000 UTC m=+157.702770695 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:17 crc kubenswrapper[4789]: I0202 21:22:17.007883 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:17 crc kubenswrapper[4789]: E0202 21:22:17.008093 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:17.508056954 +0000 UTC m=+157.803081973 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:17 crc kubenswrapper[4789]: I0202 21:22:17.008148 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:17 crc kubenswrapper[4789]: E0202 21:22:17.008458 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:17.508447685 +0000 UTC m=+157.803472704 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:17 crc kubenswrapper[4789]: I0202 21:22:17.109438 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:17 crc kubenswrapper[4789]: E0202 21:22:17.109820 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:17.609798613 +0000 UTC m=+157.904823632 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:17 crc kubenswrapper[4789]: I0202 21:22:17.135288 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 02 21:22:17 crc kubenswrapper[4789]: E0202 21:22:17.135706 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ae5cfe8-6155-4ae2-bd9b-0b2bcfac74fa" containerName="pruner" Feb 02 21:22:17 crc kubenswrapper[4789]: I0202 21:22:17.135730 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ae5cfe8-6155-4ae2-bd9b-0b2bcfac74fa" containerName="pruner" Feb 02 21:22:17 crc kubenswrapper[4789]: I0202 21:22:17.135863 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ae5cfe8-6155-4ae2-bd9b-0b2bcfac74fa" containerName="pruner" Feb 02 21:22:17 crc kubenswrapper[4789]: I0202 21:22:17.136516 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 21:22:17 crc kubenswrapper[4789]: I0202 21:22:17.137355 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 02 21:22:17 crc kubenswrapper[4789]: I0202 21:22:17.144464 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 02 21:22:17 crc kubenswrapper[4789]: I0202 21:22:17.144708 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 02 21:22:17 crc kubenswrapper[4789]: I0202 21:22:17.211509 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8daa12e-5397-4806-b316-ae2bcf9bdff4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c8daa12e-5397-4806-b316-ae2bcf9bdff4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 21:22:17 crc kubenswrapper[4789]: I0202 21:22:17.211635 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:17 crc kubenswrapper[4789]: I0202 21:22:17.211769 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8daa12e-5397-4806-b316-ae2bcf9bdff4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c8daa12e-5397-4806-b316-ae2bcf9bdff4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 21:22:17 crc kubenswrapper[4789]: E0202 21:22:17.212030 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:17.712010686 +0000 UTC m=+158.007035715 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:17 crc kubenswrapper[4789]: I0202 21:22:17.313293 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:17 crc kubenswrapper[4789]: I0202 21:22:17.313484 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8daa12e-5397-4806-b316-ae2bcf9bdff4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c8daa12e-5397-4806-b316-ae2bcf9bdff4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 21:22:17 crc kubenswrapper[4789]: I0202 21:22:17.313526 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8daa12e-5397-4806-b316-ae2bcf9bdff4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c8daa12e-5397-4806-b316-ae2bcf9bdff4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 21:22:17 crc kubenswrapper[4789]: I0202 21:22:17.313630 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8daa12e-5397-4806-b316-ae2bcf9bdff4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c8daa12e-5397-4806-b316-ae2bcf9bdff4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 21:22:17 crc kubenswrapper[4789]: E0202 21:22:17.313693 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:17.813679404 +0000 UTC m=+158.108704413 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:17 crc kubenswrapper[4789]: I0202 21:22:17.332819 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8daa12e-5397-4806-b316-ae2bcf9bdff4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c8daa12e-5397-4806-b316-ae2bcf9bdff4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 21:22:17 crc kubenswrapper[4789]: I0202 21:22:17.415218 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:17 crc kubenswrapper[4789]: E0202 21:22:17.415573 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:17.915557967 +0000 UTC m=+158.210582986 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:17 crc kubenswrapper[4789]: I0202 21:22:17.454042 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 21:22:17 crc kubenswrapper[4789]: I0202 21:22:17.516661 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:17 crc kubenswrapper[4789]: E0202 21:22:17.517097 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:18.017036949 +0000 UTC m=+158.312061978 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:17 crc kubenswrapper[4789]: I0202 21:22:17.594650 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dzvhk" event={"ID":"b2ad8f41-6fd7-4cc5-b8a1-38a886817c6c","Type":"ContainerStarted","Data":"ee3561a7c1713d78f465c2e7fb921e5191b1e1746f5970ae51f88e3f4457b430"} Feb 02 21:22:17 crc kubenswrapper[4789]: I0202 21:22:17.621258 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:17 crc kubenswrapper[4789]: E0202 21:22:17.621633 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:18.12161598 +0000 UTC m=+158.416640999 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:17 crc kubenswrapper[4789]: I0202 21:22:17.717790 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 02 21:22:17 crc kubenswrapper[4789]: I0202 21:22:17.723702 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:17 crc kubenswrapper[4789]: E0202 21:22:17.723859 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:18.223834474 +0000 UTC m=+158.518859493 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:17 crc kubenswrapper[4789]: I0202 21:22:17.724291 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:17 crc kubenswrapper[4789]: E0202 21:22:17.724988 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:18.224969846 +0000 UTC m=+158.519994855 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:17 crc kubenswrapper[4789]: I0202 21:22:17.829080 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:17 crc kubenswrapper[4789]: E0202 21:22:17.829242 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:18.329207748 +0000 UTC m=+158.624232767 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:17 crc kubenswrapper[4789]: I0202 21:22:17.829383 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:17 crc kubenswrapper[4789]: E0202 21:22:17.829746 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:18.329731273 +0000 UTC m=+158.624756292 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:17 crc kubenswrapper[4789]: I0202 21:22:17.839234 4789 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 02 21:22:17 crc kubenswrapper[4789]: I0202 21:22:17.930757 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:17 crc kubenswrapper[4789]: E0202 21:22:17.930943 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:18.430921906 +0000 UTC m=+158.725946925 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:17 crc kubenswrapper[4789]: I0202 21:22:17.931009 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:17 crc kubenswrapper[4789]: E0202 21:22:17.931362 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:18.431346229 +0000 UTC m=+158.726371248 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:18 crc kubenswrapper[4789]: I0202 21:22:18.032042 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:18 crc kubenswrapper[4789]: E0202 21:22:18.032371 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:18.532354377 +0000 UTC m=+158.827379396 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:18 crc kubenswrapper[4789]: I0202 21:22:18.133265 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:18 crc kubenswrapper[4789]: E0202 21:22:18.133643 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:18.633626763 +0000 UTC m=+158.928651782 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:18 crc kubenswrapper[4789]: I0202 21:22:18.242108 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:18 crc kubenswrapper[4789]: E0202 21:22:18.242776 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:18.742760885 +0000 UTC m=+159.037785904 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:18 crc kubenswrapper[4789]: I0202 21:22:18.344128 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:18 crc kubenswrapper[4789]: E0202 21:22:18.344470 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:18.844459274 +0000 UTC m=+159.139484293 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:18 crc kubenswrapper[4789]: I0202 21:22:18.445118 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:18 crc kubenswrapper[4789]: E0202 21:22:18.445288 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:18.945260456 +0000 UTC m=+159.240285475 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:18 crc kubenswrapper[4789]: I0202 21:22:18.445336 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:18 crc kubenswrapper[4789]: E0202 21:22:18.445655 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:18.945647707 +0000 UTC m=+159.240672726 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:18 crc kubenswrapper[4789]: I0202 21:22:18.466711 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-jf9jd" Feb 02 21:22:18 crc kubenswrapper[4789]: I0202 21:22:18.546511 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:18 crc kubenswrapper[4789]: E0202 21:22:18.547300 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:19.047261413 +0000 UTC m=+159.342286452 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:18 crc kubenswrapper[4789]: I0202 21:22:18.603476 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dzvhk" event={"ID":"b2ad8f41-6fd7-4cc5-b8a1-38a886817c6c","Type":"ContainerStarted","Data":"d338cf4ad97a9e9dbc4d5044d51d5e62b6d117749006759a839fcea89cfb00d1"} Feb 02 21:22:18 crc kubenswrapper[4789]: I0202 21:22:18.613521 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c8daa12e-5397-4806-b316-ae2bcf9bdff4","Type":"ContainerStarted","Data":"08301b69ea55ec97f49eba82c428d46a32a5a9de5fb176012b50ea078b848480"} Feb 02 21:22:18 crc kubenswrapper[4789]: I0202 21:22:18.613560 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c8daa12e-5397-4806-b316-ae2bcf9bdff4","Type":"ContainerStarted","Data":"ec43cad0628bf46068d077dc975225ec4d6124d514680394ddee75aa9ed1cc15"} Feb 02 21:22:18 crc kubenswrapper[4789]: I0202 21:22:18.637669 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-dzvhk" podStartSLOduration=18.637639465 podStartE2EDuration="18.637639465s" podCreationTimestamp="2026-02-02 21:22:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:18.6239074 +0000 UTC m=+158.918932419" watchObservedRunningTime="2026-02-02 21:22:18.637639465 +0000 UTC m=+158.932664494" Feb 02 21:22:18 crc kubenswrapper[4789]: I0202 21:22:18.648882 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:18 crc kubenswrapper[4789]: E0202 21:22:18.649267 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:19.14925401 +0000 UTC m=+159.444279029 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:18 crc kubenswrapper[4789]: I0202 21:22:18.657522 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=1.657507677 podStartE2EDuration="1.657507677s" podCreationTimestamp="2026-02-02 21:22:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:18.657009063 +0000 UTC m=+158.952034082" watchObservedRunningTime="2026-02-02 21:22:18.657507677 +0000 UTC m=+158.952532696" Feb 02 21:22:18 crc kubenswrapper[4789]: I0202 21:22:18.749740 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:18 crc kubenswrapper[4789]: E0202 21:22:18.750056 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 21:22:19.250030101 +0000 UTC m=+159.545055120 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:18 crc kubenswrapper[4789]: I0202 21:22:18.836956 4789 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-02T21:22:17.839263067Z","Handler":null,"Name":""} Feb 02 21:22:18 crc kubenswrapper[4789]: I0202 21:22:18.851142 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:18 crc kubenswrapper[4789]: E0202 21:22:18.851424 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 21:22:19.351413261 +0000 UTC m=+159.646438280 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x2wtg" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 21:22:18 crc kubenswrapper[4789]: I0202 21:22:18.855590 4789 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 02 21:22:18 crc kubenswrapper[4789]: I0202 21:22:18.855622 4789 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 02 21:22:18 crc kubenswrapper[4789]: I0202 21:22:18.952304 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 21:22:18 crc kubenswrapper[4789]: I0202 21:22:18.965447 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 02 21:22:19 crc kubenswrapper[4789]: I0202 21:22:19.054358 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:19 crc kubenswrapper[4789]: I0202 21:22:19.061324 4789 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 02 21:22:19 crc kubenswrapper[4789]: I0202 21:22:19.061364 4789 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:19 crc kubenswrapper[4789]: I0202 21:22:19.088107 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x2wtg\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:19 crc kubenswrapper[4789]: I0202 21:22:19.142840 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:19 crc kubenswrapper[4789]: I0202 21:22:19.621938 4789 generic.go:334] "Generic (PLEG): container finished" podID="c8daa12e-5397-4806-b316-ae2bcf9bdff4" containerID="08301b69ea55ec97f49eba82c428d46a32a5a9de5fb176012b50ea078b848480" exitCode=0 Feb 02 21:22:19 crc kubenswrapper[4789]: I0202 21:22:19.622187 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c8daa12e-5397-4806-b316-ae2bcf9bdff4","Type":"ContainerDied","Data":"08301b69ea55ec97f49eba82c428d46a32a5a9de5fb176012b50ea078b848480"} Feb 02 21:22:19 crc kubenswrapper[4789]: I0202 21:22:19.651482 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-x2wtg"] Feb 02 21:22:19 crc kubenswrapper[4789]: W0202 21:22:19.667257 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9b60922_75eb_4c97_85d5_12c146fe6cb1.slice/crio-1d48e75fd44d0e14cf9b1fbf9b6851cd3b2329e46d78740c0d4fdcc05ce9dd5c WatchSource:0}: Error finding container 1d48e75fd44d0e14cf9b1fbf9b6851cd3b2329e46d78740c0d4fdcc05ce9dd5c: Status 404 returned error can't find the container with id 1d48e75fd44d0e14cf9b1fbf9b6851cd3b2329e46d78740c0d4fdcc05ce9dd5c Feb 02 21:22:20 crc kubenswrapper[4789]: I0202 21:22:20.431784 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 02 21:22:20 crc kubenswrapper[4789]: I0202 21:22:20.633209 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" event={"ID":"a9b60922-75eb-4c97-85d5-12c146fe6cb1","Type":"ContainerStarted","Data":"d44c7e357159ffb561b9fe6df14c08d45dd33a1b4bc58f7e708e7e9c13287d1d"} Feb 02 21:22:20 crc kubenswrapper[4789]: I0202 21:22:20.633466 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" event={"ID":"a9b60922-75eb-4c97-85d5-12c146fe6cb1","Type":"ContainerStarted","Data":"1d48e75fd44d0e14cf9b1fbf9b6851cd3b2329e46d78740c0d4fdcc05ce9dd5c"} Feb 02 21:22:20 crc kubenswrapper[4789]: I0202 21:22:20.633524 4789 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:20 crc kubenswrapper[4789]: I0202 21:22:20.653816 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" podStartSLOduration=138.653800007 podStartE2EDuration="2m18.653800007s" podCreationTimestamp="2026-02-02 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:20.652042166 +0000 UTC m=+160.947067185" watchObservedRunningTime="2026-02-02 21:22:20.653800007 +0000 UTC m=+160.948825026" Feb 02 21:22:20 crc kubenswrapper[4789]: I0202 21:22:20.974798 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 21:22:21 crc kubenswrapper[4789]: I0202 21:22:21.078516 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8daa12e-5397-4806-b316-ae2bcf9bdff4-kube-api-access\") pod \"c8daa12e-5397-4806-b316-ae2bcf9bdff4\" (UID: \"c8daa12e-5397-4806-b316-ae2bcf9bdff4\") " Feb 02 21:22:21 crc kubenswrapper[4789]: I0202 21:22:21.078587 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8daa12e-5397-4806-b316-ae2bcf9bdff4-kubelet-dir\") pod \"c8daa12e-5397-4806-b316-ae2bcf9bdff4\" (UID: \"c8daa12e-5397-4806-b316-ae2bcf9bdff4\") " Feb 02 21:22:21 crc kubenswrapper[4789]: I0202 21:22:21.078711 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8daa12e-5397-4806-b316-ae2bcf9bdff4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c8daa12e-5397-4806-b316-ae2bcf9bdff4" (UID: "c8daa12e-5397-4806-b316-ae2bcf9bdff4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:22:21 crc kubenswrapper[4789]: I0202 21:22:21.078907 4789 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8daa12e-5397-4806-b316-ae2bcf9bdff4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 21:22:21 crc kubenswrapper[4789]: I0202 21:22:21.083862 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8daa12e-5397-4806-b316-ae2bcf9bdff4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c8daa12e-5397-4806-b316-ae2bcf9bdff4" (UID: "c8daa12e-5397-4806-b316-ae2bcf9bdff4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:22:21 crc kubenswrapper[4789]: I0202 21:22:21.180807 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8daa12e-5397-4806-b316-ae2bcf9bdff4-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 21:22:21 crc kubenswrapper[4789]: I0202 21:22:21.640868 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c8daa12e-5397-4806-b316-ae2bcf9bdff4","Type":"ContainerDied","Data":"ec43cad0628bf46068d077dc975225ec4d6124d514680394ddee75aa9ed1cc15"} Feb 02 21:22:21 crc kubenswrapper[4789]: I0202 21:22:21.640906 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 21:22:21 crc kubenswrapper[4789]: I0202 21:22:21.640932 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec43cad0628bf46068d077dc975225ec4d6124d514680394ddee75aa9ed1cc15" Feb 02 21:22:21 crc kubenswrapper[4789]: E0202 21:22:21.704048 4789 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podc8daa12e_5397_4806_b316_ae2bcf9bdff4.slice/crio-ec43cad0628bf46068d077dc975225ec4d6124d514680394ddee75aa9ed1cc15\": RecentStats: unable to find data in memory cache]" Feb 02 21:22:22 crc kubenswrapper[4789]: I0202 21:22:22.497318 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:22:22 crc kubenswrapper[4789]: I0202 21:22:22.601023 4789 patch_prober.go:28] interesting pod/downloads-7954f5f757-dd7g5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 02 21:22:22 crc kubenswrapper[4789]: I0202 21:22:22.601071 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dd7g5" podUID="899bce18-bfcc-42b8-ab5e-149d16e8eddb" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 02 21:22:22 crc kubenswrapper[4789]: I0202 21:22:22.601104 4789 patch_prober.go:28] interesting pod/downloads-7954f5f757-dd7g5 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 02 21:22:22 crc kubenswrapper[4789]: I0202 21:22:22.601194 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-dd7g5" podUID="899bce18-bfcc-42b8-ab5e-149d16e8eddb" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 02 21:22:22 crc kubenswrapper[4789]: I0202 21:22:22.841551 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 21:22:22 crc kubenswrapper[4789]: I0202 21:22:22.841652 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 21:22:22 crc kubenswrapper[4789]: I0202 21:22:22.894416 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-x568j" Feb 02 21:22:22 crc kubenswrapper[4789]: I0202 21:22:22.899163 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-x568j" Feb 02 21:22:25 crc kubenswrapper[4789]: I0202 21:22:25.031009 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/2dc26662-64d3-47f0-9e0d-d340760ca348-metrics-certs\") pod \"network-metrics-daemon-vjbpg\" (UID: \"2dc26662-64d3-47f0-9e0d-d340760ca348\") " pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:22:25 crc kubenswrapper[4789]: I0202 21:22:25.064219 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2dc26662-64d3-47f0-9e0d-d340760ca348-metrics-certs\") pod \"network-metrics-daemon-vjbpg\" (UID: \"2dc26662-64d3-47f0-9e0d-d340760ca348\") " pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:22:25 crc kubenswrapper[4789]: I0202 21:22:25.078938 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vjbpg" Feb 02 21:22:32 crc kubenswrapper[4789]: I0202 21:22:32.601916 4789 patch_prober.go:28] interesting pod/downloads-7954f5f757-dd7g5 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 02 21:22:32 crc kubenswrapper[4789]: I0202 21:22:32.602037 4789 patch_prober.go:28] interesting pod/downloads-7954f5f757-dd7g5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 02 21:22:32 crc kubenswrapper[4789]: I0202 21:22:32.602261 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dd7g5" podUID="899bce18-bfcc-42b8-ab5e-149d16e8eddb" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 02 21:22:32 crc kubenswrapper[4789]: I0202 21:22:32.602189 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-dd7g5" podUID="899bce18-bfcc-42b8-ab5e-149d16e8eddb" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 02 21:22:32 crc kubenswrapper[4789]: I0202 21:22:32.602356 4789 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-dd7g5" Feb 02 21:22:32 crc kubenswrapper[4789]: I0202 21:22:32.602830 4789 patch_prober.go:28] interesting pod/downloads-7954f5f757-dd7g5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 02 21:22:32 crc kubenswrapper[4789]: I0202 21:22:32.602851 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dd7g5" podUID="899bce18-bfcc-42b8-ab5e-149d16e8eddb" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 02 21:22:32 crc kubenswrapper[4789]: I0202 21:22:32.603150 4789 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"62c9a4d31f583a8c7de7ebbcfca30eefeeffcd9d943249f0571941624fcdae1a"} pod="openshift-console/downloads-7954f5f757-dd7g5" containerMessage="Container download-server failed liveness probe, will be restarted" Feb 02 21:22:32 crc kubenswrapper[4789]: I0202 21:22:32.603275 4789 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-dd7g5" podUID="899bce18-bfcc-42b8-ab5e-149d16e8eddb" containerName="download-server" containerID="cri-o://62c9a4d31f583a8c7de7ebbcfca30eefeeffcd9d943249f0571941624fcdae1a" gracePeriod=2 Feb 02 21:22:33 crc kubenswrapper[4789]: I0202 21:22:33.744707 4789 generic.go:334] "Generic (PLEG): container finished" podID="899bce18-bfcc-42b8-ab5e-149d16e8eddb" containerID="62c9a4d31f583a8c7de7ebbcfca30eefeeffcd9d943249f0571941624fcdae1a" exitCode=0 Feb 02 21:22:33 crc kubenswrapper[4789]: I0202 21:22:33.744788 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-dd7g5" event={"ID":"899bce18-bfcc-42b8-ab5e-149d16e8eddb","Type":"ContainerDied","Data":"62c9a4d31f583a8c7de7ebbcfca30eefeeffcd9d943249f0571941624fcdae1a"} Feb 02 21:22:39 crc kubenswrapper[4789]: I0202 21:22:39.151078 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:22:42 crc kubenswrapper[4789]: I0202 21:22:42.601845 4789 patch_prober.go:28] interesting pod/downloads-7954f5f757-dd7g5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 02 21:22:42 crc kubenswrapper[4789]: I0202 21:22:42.602295 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dd7g5" podUID="899bce18-bfcc-42b8-ab5e-149d16e8eddb" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 02 21:22:43 crc kubenswrapper[4789]: I0202 21:22:43.639124 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c9zn5" Feb 02 21:22:43 crc kubenswrapper[4789]: E0202 21:22:43.952019 4789 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 02 21:22:43 crc kubenswrapper[4789]: E0202 21:22:43.952877 4789 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xk4bk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-plkrb_openshift-marketplace(6a0bac2b-aef3-4313-9184-16e08bc0e572): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 21:22:43 crc kubenswrapper[4789]: E0202 21:22:43.954267 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-plkrb" podUID="6a0bac2b-aef3-4313-9184-16e08bc0e572" Feb 02 21:22:46 crc kubenswrapper[4789]: E0202 21:22:46.022471 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-plkrb" podUID="6a0bac2b-aef3-4313-9184-16e08bc0e572" Feb 02 21:22:46 crc kubenswrapper[4789]: E0202 21:22:46.091736 4789 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 02 21:22:46 crc kubenswrapper[4789]: E0202 21:22:46.091876 4789 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cp5n7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-f288q_openshift-marketplace(750c480c-359c-47cc-9cc1-72c36bc5c783): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 21:22:46 crc kubenswrapper[4789]: E0202 21:22:46.093079 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-f288q" podUID="750c480c-359c-47cc-9cc1-72c36bc5c783" Feb 02 21:22:49 crc kubenswrapper[4789]: E0202 21:22:49.212293 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-f288q" podUID="750c480c-359c-47cc-9cc1-72c36bc5c783" Feb 02 21:22:49 crc kubenswrapper[4789]: E0202 21:22:49.302458 4789 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 02 21:22:49 crc kubenswrapper[4789]: E0202 21:22:49.302626 4789 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cjh2r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-s96wn_openshift-marketplace(59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 21:22:49 crc kubenswrapper[4789]: E0202 21:22:49.303779 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-s96wn" podUID="59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5" Feb 02 21:22:49 crc kubenswrapper[4789]: I0202 21:22:49.466625 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 21:22:50 crc kubenswrapper[4789]: E0202 21:22:50.399055 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-s96wn" podUID="59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5" Feb 02 21:22:50 crc kubenswrapper[4789]: E0202 21:22:50.479026 4789 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 02 21:22:50 crc kubenswrapper[4789]: E0202 21:22:50.479406 4789 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
Feb 02 21:22:50 crc kubenswrapper[4789]: E0202 21:22:50.480822 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-pnd42" podUID="275ff536-c274-436e-89f2-a2c138f9857a"
Feb 02 21:22:50 crc kubenswrapper[4789]: E0202 21:22:50.487977 4789 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Feb 02 21:22:50 crc kubenswrapper[4789]: E0202 21:22:50.488070 4789 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b9kns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-mv4vf_openshift-marketplace(59a1480d-000f-481e-b287-78e39812c69b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 02 21:22:50 crc kubenswrapper[4789]: E0202 21:22:50.489835 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-mv4vf" podUID="59a1480d-000f-481e-b287-78e39812c69b"
Feb 02 21:22:50 crc kubenswrapper[4789]: E0202 21:22:50.505273 4789 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Feb 02 21:22:50 crc kubenswrapper[4789]: E0202 21:22:50.505621 4789 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8h9rt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-bxbt7_openshift-marketplace(564193d6-a25b-478d-8957-54183764c6d7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 02 21:22:50 crc kubenswrapper[4789]: E0202 21:22:50.507319 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-bxbt7" podUID="564193d6-a25b-478d-8957-54183764c6d7"
Feb 02 21:22:50 crc kubenswrapper[4789]: I0202 21:22:50.821546 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-vjbpg"]
Feb 02 21:22:50 crc kubenswrapper[4789]: W0202 21:22:50.825201 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2dc26662_64d3_47f0_9e0d_d340760ca348.slice/crio-774fbe85dc00f77d4a20787d4a67ad85cc152b8406466ccf11bbd691dc4312cc WatchSource:0}: Error finding container 774fbe85dc00f77d4a20787d4a67ad85cc152b8406466ccf11bbd691dc4312cc: Status 404 returned error can't find the container with id 774fbe85dc00f77d4a20787d4a67ad85cc152b8406466ccf11bbd691dc4312cc
Feb 02 21:22:50 crc kubenswrapper[4789]: I0202 21:22:50.850889 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vjbpg" event={"ID":"2dc26662-64d3-47f0-9e0d-d340760ca348","Type":"ContainerStarted","Data":"774fbe85dc00f77d4a20787d4a67ad85cc152b8406466ccf11bbd691dc4312cc"}
Feb 02 21:22:50 crc kubenswrapper[4789]: I0202 21:22:50.853341 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-dd7g5" event={"ID":"899bce18-bfcc-42b8-ab5e-149d16e8eddb","Type":"ContainerStarted","Data":"a6314982f723ac7953b4a339dc15b639b2cc11c31b7ea46157337a23ae1283d0"}
Feb 02 21:22:50 crc kubenswrapper[4789]: I0202 21:22:50.853953 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-dd7g5"
Feb 02 21:22:50 crc kubenswrapper[4789]: I0202 21:22:50.854327 4789 patch_prober.go:28] interesting pod/downloads-7954f5f757-dd7g5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
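[Editor's note: the &Container{...} dumps above are the kubelet's Go-struct rendering of the catalog pods' "extract-content" init container. A minimal sketch of the same spec expressed with the k8s.io/api/core/v1 types, field values copied from the logged dump; illustrative only, not the marketplace operator's actual source.]

// extract_content_sketch.go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// ptr is a small helper for the pointer-valued security fields.
func ptr[T any](v T) *T { return &v }

func main() {
	c := corev1.Container{
		Name:    "extract-content",
		Image:   "registry.redhat.io/redhat/redhat-operator-index:v4.18",
		Command: []string{"/utilities/copy-content"},
		Args: []string{
			"--catalog.from=/configs",
			"--catalog.to=/extracted-catalog/catalog",
			"--cache.from=/tmp/cache",
			"--cache.to=/extracted-catalog/cache",
		},
		VolumeMounts: []corev1.VolumeMount{
			{Name: "utilities", MountPath: "/utilities"},
			{Name: "catalog-content", MountPath: "/extracted-catalog"},
		},
		// PullAlways is why each sync retries the registry pull and the
		// failures surface as ErrImagePull / ImagePullBackOff above.
		ImagePullPolicy:          corev1.PullAlways,
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
		SecurityContext: &corev1.SecurityContext{
			Capabilities:             &corev1.Capabilities{Drop: []corev1.Capability{"ALL"}},
			RunAsUser:                ptr(int64(1000170000)),
			RunAsNonRoot:             ptr(true),
			AllowPrivilegeEscalation: ptr(false),
		},
	}
	fmt.Printf("%+v\n", c)
}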
Feb 02 21:22:50 crc kubenswrapper[4789]: I0202 21:22:50.854396 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dd7g5" podUID="899bce18-bfcc-42b8-ab5e-149d16e8eddb" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Feb 02 21:22:50 crc kubenswrapper[4789]: I0202 21:22:50.857946 4789 generic.go:334] "Generic (PLEG): container finished" podID="82a7bf20-8db7-4d0f-91d4-a85ae5da91f5" containerID="bbd9b82a8be7a356538c1ecaa75712512788d43e3d8b235627a7ec86676d073d" exitCode=0
Feb 02 21:22:50 crc kubenswrapper[4789]: I0202 21:22:50.858225 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ltv7s" event={"ID":"82a7bf20-8db7-4d0f-91d4-a85ae5da91f5","Type":"ContainerDied","Data":"bbd9b82a8be7a356538c1ecaa75712512788d43e3d8b235627a7ec86676d073d"}
Feb 02 21:22:50 crc kubenswrapper[4789]: I0202 21:22:50.861064 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cjz6p" event={"ID":"81c40f7e-bff1-432e-95e4-dfeeba942abc","Type":"ContainerStarted","Data":"c98eb539a295af5ef6314ed1f2b8c46ff441391f9a308f81c1104760af7dd3c9"}
Feb 02 21:22:50 crc kubenswrapper[4789]: E0202 21:22:50.862803 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-pnd42" podUID="275ff536-c274-436e-89f2-a2c138f9857a"
Feb 02 21:22:51 crc kubenswrapper[4789]: I0202 21:22:51.867338 4789 generic.go:334] "Generic (PLEG): container finished" podID="81c40f7e-bff1-432e-95e4-dfeeba942abc" containerID="c98eb539a295af5ef6314ed1f2b8c46ff441391f9a308f81c1104760af7dd3c9" exitCode=0
Feb 02 21:22:51 crc kubenswrapper[4789]: I0202 21:22:51.867594 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cjz6p" event={"ID":"81c40f7e-bff1-432e-95e4-dfeeba942abc","Type":"ContainerDied","Data":"c98eb539a295af5ef6314ed1f2b8c46ff441391f9a308f81c1104760af7dd3c9"}
Feb 02 21:22:51 crc kubenswrapper[4789]: I0202 21:22:51.871464 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vjbpg" event={"ID":"2dc26662-64d3-47f0-9e0d-d340760ca348","Type":"ContainerStarted","Data":"1c8429f14973479eb64fb1f58f41e21b6df9e3180b33e247505c8ba268681ebd"}
Feb 02 21:22:51 crc kubenswrapper[4789]: I0202 21:22:51.871489 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vjbpg" event={"ID":"2dc26662-64d3-47f0-9e0d-d340760ca348","Type":"ContainerStarted","Data":"f102252f40c00c406d17d6c2979d21965c600ac03cff34186ea44cee129df1b0"}
Feb 02 21:22:51 crc kubenswrapper[4789]: I0202 21:22:51.871897 4789 patch_prober.go:28] interesting pod/downloads-7954f5f757-dd7g5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Feb 02 21:22:51 crc kubenswrapper[4789]: I0202 21:22:51.871942 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dd7g5" podUID="899bce18-bfcc-42b8-ab5e-149d16e8eddb" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Feb 02 21:22:51 crc kubenswrapper[4789]: I0202 21:22:51.923107 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-vjbpg" podStartSLOduration=169.923092138 podStartE2EDuration="2m49.923092138s" podCreationTimestamp="2026-02-02 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:22:51.922148451 +0000 UTC m=+192.217173540" watchObservedRunningTime="2026-02-02 21:22:51.923092138 +0000 UTC m=+192.218117157"
Feb 02 21:22:52 crc kubenswrapper[4789]: I0202 21:22:52.600918 4789 patch_prober.go:28] interesting pod/downloads-7954f5f757-dd7g5 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Feb 02 21:22:52 crc kubenswrapper[4789]: I0202 21:22:52.601389 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-dd7g5" podUID="899bce18-bfcc-42b8-ab5e-149d16e8eddb" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Feb 02 21:22:52 crc kubenswrapper[4789]: I0202 21:22:52.601008 4789 patch_prober.go:28] interesting pod/downloads-7954f5f757-dd7g5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Feb 02 21:22:52 crc kubenswrapper[4789]: I0202 21:22:52.601501 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dd7g5" podUID="899bce18-bfcc-42b8-ab5e-149d16e8eddb" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Feb 02 21:22:52 crc kubenswrapper[4789]: I0202 21:22:52.842047 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 21:22:52 crc kubenswrapper[4789]: I0202 21:22:52.842114 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 21:22:53 crc kubenswrapper[4789]: I0202 21:22:53.882481 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ltv7s" event={"ID":"82a7bf20-8db7-4d0f-91d4-a85ae5da91f5","Type":"ContainerStarted","Data":"2a4cab794383b216ccbb9f3936ff4f7b6210e83ed15250e69065668dbfed4b6b"}
Feb 02 21:22:53 crc kubenswrapper[4789]: I0202 21:22:53.904934 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ltv7s" podStartSLOduration=2.225762937 podStartE2EDuration="43.904917781s" podCreationTimestamp="2026-02-02 21:22:10 +0000 UTC" firstStartedPulling="2026-02-02 21:22:11.361880753 +0000 UTC m=+151.656905772" lastFinishedPulling="2026-02-02 21:22:53.041035597 +0000 UTC m=+193.336060616" observedRunningTime="2026-02-02 21:22:53.900935937 +0000 UTC m=+194.195961016" watchObservedRunningTime="2026-02-02 21:22:53.904917781 +0000 UTC m=+194.199942800"
Feb 02 21:22:54 crc kubenswrapper[4789]: I0202 21:22:54.334702 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 02 21:22:54 crc kubenswrapper[4789]: E0202 21:22:54.335347 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8daa12e-5397-4806-b316-ae2bcf9bdff4" containerName="pruner"
Feb 02 21:22:54 crc kubenswrapper[4789]: I0202 21:22:54.335359 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8daa12e-5397-4806-b316-ae2bcf9bdff4" containerName="pruner"
Feb 02 21:22:54 crc kubenswrapper[4789]: I0202 21:22:54.335467 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8daa12e-5397-4806-b316-ae2bcf9bdff4" containerName="pruner"
Feb 02 21:22:54 crc kubenswrapper[4789]: I0202 21:22:54.335890 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 02 21:22:54 crc kubenswrapper[4789]: I0202 21:22:54.340131 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 02 21:22:54 crc kubenswrapper[4789]: I0202 21:22:54.340502 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 02 21:22:54 crc kubenswrapper[4789]: I0202 21:22:54.342229 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 02 21:22:54 crc kubenswrapper[4789]: I0202 21:22:54.467051 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af3abaf5-38f1-49f0-a236-1536193c8842-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"af3abaf5-38f1-49f0-a236-1536193c8842\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 02 21:22:54 crc kubenswrapper[4789]: I0202 21:22:54.467621 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af3abaf5-38f1-49f0-a236-1536193c8842-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"af3abaf5-38f1-49f0-a236-1536193c8842\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 02 21:22:54 crc kubenswrapper[4789]: I0202 21:22:54.569145 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af3abaf5-38f1-49f0-a236-1536193c8842-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"af3abaf5-38f1-49f0-a236-1536193c8842\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 02 21:22:54 crc kubenswrapper[4789]: I0202 21:22:54.569223 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af3abaf5-38f1-49f0-a236-1536193c8842-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"af3abaf5-38f1-49f0-a236-1536193c8842\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 02 21:22:54 crc kubenswrapper[4789]: I0202 21:22:54.569306 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af3abaf5-38f1-49f0-a236-1536193c8842-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"af3abaf5-38f1-49f0-a236-1536193c8842\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 02 21:22:54 crc kubenswrapper[4789]: I0202 21:22:54.589025 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af3abaf5-38f1-49f0-a236-1536193c8842-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"af3abaf5-38f1-49f0-a236-1536193c8842\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 02 21:22:54 crc kubenswrapper[4789]: I0202 21:22:54.669328 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 02 21:22:54 crc kubenswrapper[4789]: I0202 21:22:54.889313 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cjz6p" event={"ID":"81c40f7e-bff1-432e-95e4-dfeeba942abc","Type":"ContainerStarted","Data":"3d7b48a2741295411897bad63f9bfcec9147766f4a7ea8d8866401ec5c0eed52"}
Feb 02 21:22:54 crc kubenswrapper[4789]: I0202 21:22:54.907078 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cjz6p" podStartSLOduration=5.595875363 podStartE2EDuration="41.907063087s" podCreationTimestamp="2026-02-02 21:22:13 +0000 UTC" firstStartedPulling="2026-02-02 21:22:17.59556201 +0000 UTC m=+157.890587029" lastFinishedPulling="2026-02-02 21:22:53.906749684 +0000 UTC m=+194.201774753" observedRunningTime="2026-02-02 21:22:54.903517485 +0000 UTC m=+195.198542514" watchObservedRunningTime="2026-02-02 21:22:54.907063087 +0000 UTC m=+195.202088106"
Feb 02 21:22:55 crc kubenswrapper[4789]: I0202 21:22:55.190536 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 02 21:22:55 crc kubenswrapper[4789]: W0202 21:22:55.204051 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podaf3abaf5_38f1_49f0_a236_1536193c8842.slice/crio-1497b0a1178b6d9436359d67cfa12764ba30f4c958851b5dc0afa49171a83188 WatchSource:0}: Error finding container 1497b0a1178b6d9436359d67cfa12764ba30f4c958851b5dc0afa49171a83188: Status 404 returned error can't find the container with id 1497b0a1178b6d9436359d67cfa12764ba30f4c958851b5dc0afa49171a83188
Feb 02 21:22:55 crc kubenswrapper[4789]: I0202 21:22:55.895267 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"af3abaf5-38f1-49f0-a236-1536193c8842","Type":"ContainerStarted","Data":"20a35055ee41b38ee47c9093227c0a95443660deac4302b1353271d9ac61df91"}
Feb 02 21:22:55 crc kubenswrapper[4789]: I0202 21:22:55.895508 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"af3abaf5-38f1-49f0-a236-1536193c8842","Type":"ContainerStarted","Data":"1497b0a1178b6d9436359d67cfa12764ba30f4c958851b5dc0afa49171a83188"}
Feb 02 21:22:56 crc kubenswrapper[4789]: I0202 21:22:56.902383 4789 generic.go:334] "Generic (PLEG): container finished" podID="af3abaf5-38f1-49f0-a236-1536193c8842" containerID="20a35055ee41b38ee47c9093227c0a95443660deac4302b1353271d9ac61df91" exitCode=0
Feb 02 21:22:56 crc kubenswrapper[4789]: I0202 21:22:56.902424 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"af3abaf5-38f1-49f0-a236-1536193c8842","Type":"ContainerDied","Data":"20a35055ee41b38ee47c9093227c0a95443660deac4302b1353271d9ac61df91"}
pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"af3abaf5-38f1-49f0-a236-1536193c8842","Type":"ContainerDied","Data":"20a35055ee41b38ee47c9093227c0a95443660deac4302b1353271d9ac61df91"} Feb 02 21:22:58 crc kubenswrapper[4789]: I0202 21:22:58.191615 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 21:22:58 crc kubenswrapper[4789]: I0202 21:22:58.209486 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af3abaf5-38f1-49f0-a236-1536193c8842-kube-api-access\") pod \"af3abaf5-38f1-49f0-a236-1536193c8842\" (UID: \"af3abaf5-38f1-49f0-a236-1536193c8842\") " Feb 02 21:22:58 crc kubenswrapper[4789]: I0202 21:22:58.209558 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af3abaf5-38f1-49f0-a236-1536193c8842-kubelet-dir\") pod \"af3abaf5-38f1-49f0-a236-1536193c8842\" (UID: \"af3abaf5-38f1-49f0-a236-1536193c8842\") " Feb 02 21:22:58 crc kubenswrapper[4789]: I0202 21:22:58.209753 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af3abaf5-38f1-49f0-a236-1536193c8842-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "af3abaf5-38f1-49f0-a236-1536193c8842" (UID: "af3abaf5-38f1-49f0-a236-1536193c8842"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:22:58 crc kubenswrapper[4789]: I0202 21:22:58.214962 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af3abaf5-38f1-49f0-a236-1536193c8842-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "af3abaf5-38f1-49f0-a236-1536193c8842" (UID: "af3abaf5-38f1-49f0-a236-1536193c8842"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:22:58 crc kubenswrapper[4789]: I0202 21:22:58.310347 4789 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af3abaf5-38f1-49f0-a236-1536193c8842-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 21:22:58 crc kubenswrapper[4789]: I0202 21:22:58.310380 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af3abaf5-38f1-49f0-a236-1536193c8842-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 21:22:58 crc kubenswrapper[4789]: I0202 21:22:58.916134 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"af3abaf5-38f1-49f0-a236-1536193c8842","Type":"ContainerDied","Data":"1497b0a1178b6d9436359d67cfa12764ba30f4c958851b5dc0afa49171a83188"} Feb 02 21:22:58 crc kubenswrapper[4789]: I0202 21:22:58.916175 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1497b0a1178b6d9436359d67cfa12764ba30f4c958851b5dc0afa49171a83188" Feb 02 21:22:58 crc kubenswrapper[4789]: I0202 21:22:58.916177 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 21:23:00 crc kubenswrapper[4789]: I0202 21:23:00.443307 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ltv7s" Feb 02 21:23:00 crc kubenswrapper[4789]: I0202 21:23:00.443665 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ltv7s" Feb 02 21:23:00 crc kubenswrapper[4789]: I0202 21:23:00.718929 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ltv7s" Feb 02 21:23:00 crc kubenswrapper[4789]: I0202 21:23:00.926652 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-plkrb" event={"ID":"6a0bac2b-aef3-4313-9184-16e08bc0e572","Type":"ContainerStarted","Data":"590ed2a89e21dae5188718f16074675d5af68c13149ef52a83582cd7239858eb"} Feb 02 21:23:00 crc kubenswrapper[4789]: I0202 21:23:00.929881 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f288q" event={"ID":"750c480c-359c-47cc-9cc1-72c36bc5c783","Type":"ContainerStarted","Data":"6e1788092a8d54e865f11e8456b7f7d7a70047411a71b52076a35fa69eb69730"} Feb 02 21:23:00 crc kubenswrapper[4789]: I0202 21:23:00.978183 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ltv7s" Feb 02 21:23:01 crc kubenswrapper[4789]: I0202 21:23:01.935514 4789 generic.go:334] "Generic (PLEG): container finished" podID="6a0bac2b-aef3-4313-9184-16e08bc0e572" containerID="590ed2a89e21dae5188718f16074675d5af68c13149ef52a83582cd7239858eb" exitCode=0 Feb 02 21:23:01 crc kubenswrapper[4789]: I0202 21:23:01.935615 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-plkrb" event={"ID":"6a0bac2b-aef3-4313-9184-16e08bc0e572","Type":"ContainerDied","Data":"590ed2a89e21dae5188718f16074675d5af68c13149ef52a83582cd7239858eb"} Feb 02 21:23:01 crc kubenswrapper[4789]: I0202 21:23:01.946483 4789 generic.go:334] "Generic (PLEG): container finished" podID="750c480c-359c-47cc-9cc1-72c36bc5c783" containerID="6e1788092a8d54e865f11e8456b7f7d7a70047411a71b52076a35fa69eb69730" exitCode=0 Feb 02 21:23:01 crc kubenswrapper[4789]: I0202 21:23:01.946565 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f288q" event={"ID":"750c480c-359c-47cc-9cc1-72c36bc5c783","Type":"ContainerDied","Data":"6e1788092a8d54e865f11e8456b7f7d7a70047411a71b52076a35fa69eb69730"} Feb 02 21:23:02 crc kubenswrapper[4789]: I0202 21:23:02.321156 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 02 21:23:02 crc kubenswrapper[4789]: E0202 21:23:02.321388 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af3abaf5-38f1-49f0-a236-1536193c8842" containerName="pruner" Feb 02 21:23:02 crc kubenswrapper[4789]: I0202 21:23:02.321400 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="af3abaf5-38f1-49f0-a236-1536193c8842" containerName="pruner" Feb 02 21:23:02 crc kubenswrapper[4789]: I0202 21:23:02.321522 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="af3abaf5-38f1-49f0-a236-1536193c8842" containerName="pruner" Feb 02 21:23:02 crc kubenswrapper[4789]: I0202 21:23:02.322018 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 21:23:02 crc kubenswrapper[4789]: I0202 21:23:02.325487 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 02 21:23:02 crc kubenswrapper[4789]: I0202 21:23:02.325677 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 02 21:23:02 crc kubenswrapper[4789]: I0202 21:23:02.363183 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d8cad602-dbda-4b61-a2b8-dc9f65726c1c-kubelet-dir\") pod \"installer-9-crc\" (UID: \"d8cad602-dbda-4b61-a2b8-dc9f65726c1c\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 21:23:02 crc kubenswrapper[4789]: I0202 21:23:02.363298 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d8cad602-dbda-4b61-a2b8-dc9f65726c1c-var-lock\") pod \"installer-9-crc\" (UID: \"d8cad602-dbda-4b61-a2b8-dc9f65726c1c\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 21:23:02 crc kubenswrapper[4789]: I0202 21:23:02.363330 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d8cad602-dbda-4b61-a2b8-dc9f65726c1c-kube-api-access\") pod \"installer-9-crc\" (UID: \"d8cad602-dbda-4b61-a2b8-dc9f65726c1c\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 21:23:02 crc kubenswrapper[4789]: I0202 21:23:02.364759 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 02 21:23:02 crc kubenswrapper[4789]: I0202 21:23:02.464125 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d8cad602-dbda-4b61-a2b8-dc9f65726c1c-var-lock\") pod \"installer-9-crc\" (UID: \"d8cad602-dbda-4b61-a2b8-dc9f65726c1c\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 21:23:02 crc kubenswrapper[4789]: I0202 21:23:02.464172 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d8cad602-dbda-4b61-a2b8-dc9f65726c1c-kube-api-access\") pod \"installer-9-crc\" (UID: \"d8cad602-dbda-4b61-a2b8-dc9f65726c1c\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 21:23:02 crc kubenswrapper[4789]: I0202 21:23:02.464197 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d8cad602-dbda-4b61-a2b8-dc9f65726c1c-kubelet-dir\") pod \"installer-9-crc\" (UID: \"d8cad602-dbda-4b61-a2b8-dc9f65726c1c\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 21:23:02 crc kubenswrapper[4789]: I0202 21:23:02.464273 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d8cad602-dbda-4b61-a2b8-dc9f65726c1c-kubelet-dir\") pod \"installer-9-crc\" (UID: \"d8cad602-dbda-4b61-a2b8-dc9f65726c1c\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 21:23:02 crc kubenswrapper[4789]: I0202 21:23:02.464310 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d8cad602-dbda-4b61-a2b8-dc9f65726c1c-var-lock\") pod \"installer-9-crc\" (UID: 
\"d8cad602-dbda-4b61-a2b8-dc9f65726c1c\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 21:23:02 crc kubenswrapper[4789]: I0202 21:23:02.487556 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d8cad602-dbda-4b61-a2b8-dc9f65726c1c-kube-api-access\") pod \"installer-9-crc\" (UID: \"d8cad602-dbda-4b61-a2b8-dc9f65726c1c\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 21:23:02 crc kubenswrapper[4789]: I0202 21:23:02.607534 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-dd7g5" Feb 02 21:23:02 crc kubenswrapper[4789]: I0202 21:23:02.658101 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 21:23:02 crc kubenswrapper[4789]: I0202 21:23:02.848655 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 02 21:23:02 crc kubenswrapper[4789]: I0202 21:23:02.953736 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d8cad602-dbda-4b61-a2b8-dc9f65726c1c","Type":"ContainerStarted","Data":"6e5f680d98820fab94bbfc63288d9656f849fce78734f2ad096b085f29cbe053"} Feb 02 21:23:02 crc kubenswrapper[4789]: I0202 21:23:02.956874 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pnd42" event={"ID":"275ff536-c274-436e-89f2-a2c138f9857a","Type":"ContainerStarted","Data":"033af14f9a5770b1bf35134d24e39c204b25841b6c4bb538102e0d47b6d71e81"} Feb 02 21:23:03 crc kubenswrapper[4789]: I0202 21:23:03.708365 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cjz6p" Feb 02 21:23:03 crc kubenswrapper[4789]: I0202 21:23:03.708449 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cjz6p" Feb 02 21:23:03 crc kubenswrapper[4789]: I0202 21:23:03.832106 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cjz6p" Feb 02 21:23:03 crc kubenswrapper[4789]: I0202 21:23:03.963018 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d8cad602-dbda-4b61-a2b8-dc9f65726c1c","Type":"ContainerStarted","Data":"c38abfd9c53fe42bb6d24993fd09d681a39c2134fcbee4f9cd4b2ab7e218090b"} Feb 02 21:23:03 crc kubenswrapper[4789]: I0202 21:23:03.966557 4789 generic.go:334] "Generic (PLEG): container finished" podID="275ff536-c274-436e-89f2-a2c138f9857a" containerID="033af14f9a5770b1bf35134d24e39c204b25841b6c4bb538102e0d47b6d71e81" exitCode=0 Feb 02 21:23:03 crc kubenswrapper[4789]: I0202 21:23:03.967177 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pnd42" event={"ID":"275ff536-c274-436e-89f2-a2c138f9857a","Type":"ContainerDied","Data":"033af14f9a5770b1bf35134d24e39c204b25841b6c4bb538102e0d47b6d71e81"} Feb 02 21:23:04 crc kubenswrapper[4789]: I0202 21:23:04.028012 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cjz6p" Feb 02 21:23:04 crc kubenswrapper[4789]: I0202 21:23:04.991557 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.9915335020000002 podStartE2EDuration="2.991533502s" 
podCreationTimestamp="2026-02-02 21:23:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:23:04.986070817 +0000 UTC m=+205.281095866" watchObservedRunningTime="2026-02-02 21:23:04.991533502 +0000 UTC m=+205.286558521" Feb 02 21:23:06 crc kubenswrapper[4789]: I0202 21:23:06.992043 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f288q" event={"ID":"750c480c-359c-47cc-9cc1-72c36bc5c783","Type":"ContainerStarted","Data":"8308fd1a7bb9bcaa33faa9a92962b285bc117f943be24e2dbb1701244f146a4b"} Feb 02 21:23:07 crc kubenswrapper[4789]: I0202 21:23:07.019125 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f288q" podStartSLOduration=2.524645115 podStartE2EDuration="57.019101929s" podCreationTimestamp="2026-02-02 21:22:10 +0000 UTC" firstStartedPulling="2026-02-02 21:22:11.410813012 +0000 UTC m=+151.705838031" lastFinishedPulling="2026-02-02 21:23:05.905269826 +0000 UTC m=+206.200294845" observedRunningTime="2026-02-02 21:23:07.014388564 +0000 UTC m=+207.309413643" watchObservedRunningTime="2026-02-02 21:23:07.019101929 +0000 UTC m=+207.314126948" Feb 02 21:23:09 crc kubenswrapper[4789]: I0202 21:23:09.005098 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pnd42" event={"ID":"275ff536-c274-436e-89f2-a2c138f9857a","Type":"ContainerStarted","Data":"28bc3935b17d166d06443bd1c8d091f293a0176298f91733c7cd71fd05aaa175"} Feb 02 21:23:09 crc kubenswrapper[4789]: I0202 21:23:09.031914 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pnd42" podStartSLOduration=3.6963064660000002 podStartE2EDuration="59.031899126s" podCreationTimestamp="2026-02-02 21:22:10 +0000 UTC" firstStartedPulling="2026-02-02 21:22:12.499031315 +0000 UTC m=+152.794056334" lastFinishedPulling="2026-02-02 21:23:07.834623975 +0000 UTC m=+208.129648994" observedRunningTime="2026-02-02 21:23:09.029539204 +0000 UTC m=+209.324564233" watchObservedRunningTime="2026-02-02 21:23:09.031899126 +0000 UTC m=+209.326924145" Feb 02 21:23:10 crc kubenswrapper[4789]: I0202 21:23:10.649160 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-f288q" Feb 02 21:23:10 crc kubenswrapper[4789]: I0202 21:23:10.649319 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f288q" Feb 02 21:23:10 crc kubenswrapper[4789]: I0202 21:23:10.704960 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f288q" Feb 02 21:23:11 crc kubenswrapper[4789]: I0202 21:23:11.079942 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pnd42" Feb 02 21:23:11 crc kubenswrapper[4789]: I0202 21:23:11.080005 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pnd42" Feb 02 21:23:11 crc kubenswrapper[4789]: I0202 21:23:11.086828 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f288q" Feb 02 21:23:11 crc kubenswrapper[4789]: I0202 21:23:11.153552 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-pnd42" Feb 02 21:23:11 crc kubenswrapper[4789]: I0202 21:23:11.308175 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-zfv5p"] Feb 02 21:23:12 crc kubenswrapper[4789]: I0202 21:23:12.029151 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-plkrb" event={"ID":"6a0bac2b-aef3-4313-9184-16e08bc0e572","Type":"ContainerStarted","Data":"1a8e903989057f7a9a82739598c50632f493108b4e8ea3236f3bfc7def8f9b50"} Feb 02 21:23:12 crc kubenswrapper[4789]: I0202 21:23:12.031346 4789 generic.go:334] "Generic (PLEG): container finished" podID="564193d6-a25b-478d-8957-54183764c6d7" containerID="bb080bd01cbeb2ab4fe00b0b7a31819672e0040f03cf8a10a0e3db7194f1f325" exitCode=0 Feb 02 21:23:12 crc kubenswrapper[4789]: I0202 21:23:12.031386 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bxbt7" event={"ID":"564193d6-a25b-478d-8957-54183764c6d7","Type":"ContainerDied","Data":"bb080bd01cbeb2ab4fe00b0b7a31819672e0040f03cf8a10a0e3db7194f1f325"} Feb 02 21:23:12 crc kubenswrapper[4789]: I0202 21:23:12.035717 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s96wn" event={"ID":"59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5","Type":"ContainerStarted","Data":"2b699fd1c3ae25c688f54fd2e401e73695fe89f22e4f546a9160da11cc2233bb"} Feb 02 21:23:12 crc kubenswrapper[4789]: I0202 21:23:12.038467 4789 generic.go:334] "Generic (PLEG): container finished" podID="59a1480d-000f-481e-b287-78e39812c69b" containerID="68dd3d2a2137f4d53e49382bd30825efd92ef65ce16f0c02a3d69aeb6577eb4b" exitCode=0 Feb 02 21:23:12 crc kubenswrapper[4789]: I0202 21:23:12.038551 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mv4vf" event={"ID":"59a1480d-000f-481e-b287-78e39812c69b","Type":"ContainerDied","Data":"68dd3d2a2137f4d53e49382bd30825efd92ef65ce16f0c02a3d69aeb6577eb4b"} Feb 02 21:23:12 crc kubenswrapper[4789]: I0202 21:23:12.068675 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-plkrb" podStartSLOduration=3.057450906 podStartE2EDuration="1m2.068653016s" podCreationTimestamp="2026-02-02 21:22:10 +0000 UTC" firstStartedPulling="2026-02-02 21:22:12.494523785 +0000 UTC m=+152.789548804" lastFinishedPulling="2026-02-02 21:23:11.505725895 +0000 UTC m=+211.800750914" observedRunningTime="2026-02-02 21:23:12.048985535 +0000 UTC m=+212.344010564" watchObservedRunningTime="2026-02-02 21:23:12.068653016 +0000 UTC m=+212.363678035" Feb 02 21:23:13 crc kubenswrapper[4789]: I0202 21:23:13.046229 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bxbt7" event={"ID":"564193d6-a25b-478d-8957-54183764c6d7","Type":"ContainerStarted","Data":"a5d2145cdae073d825441cad9b35721d68b259b9cd3e53ab1927e197ef900782"} Feb 02 21:23:13 crc kubenswrapper[4789]: I0202 21:23:13.048243 4789 generic.go:334] "Generic (PLEG): container finished" podID="59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5" containerID="2b699fd1c3ae25c688f54fd2e401e73695fe89f22e4f546a9160da11cc2233bb" exitCode=0 Feb 02 21:23:13 crc kubenswrapper[4789]: I0202 21:23:13.048324 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s96wn" 
event={"ID":"59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5","Type":"ContainerDied","Data":"2b699fd1c3ae25c688f54fd2e401e73695fe89f22e4f546a9160da11cc2233bb"} Feb 02 21:23:13 crc kubenswrapper[4789]: I0202 21:23:13.050910 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mv4vf" event={"ID":"59a1480d-000f-481e-b287-78e39812c69b","Type":"ContainerStarted","Data":"65a1c1171f5c0fda293b842ce4780d9e22281e43ce2d206959577566ddc7dd1a"} Feb 02 21:23:13 crc kubenswrapper[4789]: I0202 21:23:13.062761 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bxbt7" Feb 02 21:23:13 crc kubenswrapper[4789]: I0202 21:23:13.062831 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bxbt7" Feb 02 21:23:13 crc kubenswrapper[4789]: I0202 21:23:13.077533 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bxbt7" podStartSLOduration=4.144492915 podStartE2EDuration="1m1.077514768s" podCreationTimestamp="2026-02-02 21:22:12 +0000 UTC" firstStartedPulling="2026-02-02 21:22:15.561926756 +0000 UTC m=+155.856951775" lastFinishedPulling="2026-02-02 21:23:12.494948609 +0000 UTC m=+212.789973628" observedRunningTime="2026-02-02 21:23:13.072705861 +0000 UTC m=+213.367730890" watchObservedRunningTime="2026-02-02 21:23:13.077514768 +0000 UTC m=+213.372539807" Feb 02 21:23:13 crc kubenswrapper[4789]: I0202 21:23:13.097972 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mv4vf" podStartSLOduration=3.171330205 podStartE2EDuration="1m1.097951039s" podCreationTimestamp="2026-02-02 21:22:12 +0000 UTC" firstStartedPulling="2026-02-02 21:22:14.558894776 +0000 UTC m=+154.853919795" lastFinishedPulling="2026-02-02 21:23:12.48551559 +0000 UTC m=+212.780540629" observedRunningTime="2026-02-02 21:23:13.092990288 +0000 UTC m=+213.388015317" watchObservedRunningTime="2026-02-02 21:23:13.097951039 +0000 UTC m=+213.392976068" Feb 02 21:23:13 crc kubenswrapper[4789]: I0202 21:23:13.124349 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pnd42" Feb 02 21:23:14 crc kubenswrapper[4789]: I0202 21:23:14.058559 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s96wn" event={"ID":"59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5","Type":"ContainerStarted","Data":"b69c4deea2e7d8dae7091127faadbfcee7b99486dcc60c3a7817a96bc2ec9691"} Feb 02 21:23:14 crc kubenswrapper[4789]: I0202 21:23:14.079560 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s96wn" podStartSLOduration=3.966638486 podStartE2EDuration="1m1.079539461s" podCreationTimestamp="2026-02-02 21:22:13 +0000 UTC" firstStartedPulling="2026-02-02 21:22:16.594468135 +0000 UTC m=+156.889493154" lastFinishedPulling="2026-02-02 21:23:13.70736911 +0000 UTC m=+214.002394129" observedRunningTime="2026-02-02 21:23:14.077162348 +0000 UTC m=+214.372187367" watchObservedRunningTime="2026-02-02 21:23:14.079539461 +0000 UTC m=+214.374564490" Feb 02 21:23:14 crc kubenswrapper[4789]: I0202 21:23:14.136319 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-bxbt7" podUID="564193d6-a25b-478d-8957-54183764c6d7" containerName="registry-server" probeResult="failure" output=< Feb 02 21:23:14 crc 
Feb 02 21:23:17 crc kubenswrapper[4789]: I0202 21:23:17.152497 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pnd42"]
Feb 02 21:23:17 crc kubenswrapper[4789]: I0202 21:23:17.153053 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pnd42" podUID="275ff536-c274-436e-89f2-a2c138f9857a" containerName="registry-server" containerID="cri-o://28bc3935b17d166d06443bd1c8d091f293a0176298f91733c7cd71fd05aaa175" gracePeriod=2
Feb 02 21:23:18 crc kubenswrapper[4789]: I0202 21:23:18.086436 4789 generic.go:334] "Generic (PLEG): container finished" podID="275ff536-c274-436e-89f2-a2c138f9857a" containerID="28bc3935b17d166d06443bd1c8d091f293a0176298f91733c7cd71fd05aaa175" exitCode=0
Feb 02 21:23:18 crc kubenswrapper[4789]: I0202 21:23:18.086565 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pnd42" event={"ID":"275ff536-c274-436e-89f2-a2c138f9857a","Type":"ContainerDied","Data":"28bc3935b17d166d06443bd1c8d091f293a0176298f91733c7cd71fd05aaa175"}
Feb 02 21:23:19 crc kubenswrapper[4789]: I0202 21:23:19.001926 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pnd42"
Feb 02 21:23:19 crc kubenswrapper[4789]: I0202 21:23:19.095725 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pnd42" event={"ID":"275ff536-c274-436e-89f2-a2c138f9857a","Type":"ContainerDied","Data":"9834d3b4ccd99791cc5938177aa698d33a9671e34c16c654c3e9e4791f3ef93e"}
Feb 02 21:23:19 crc kubenswrapper[4789]: I0202 21:23:19.095804 4789 scope.go:117] "RemoveContainer" containerID="28bc3935b17d166d06443bd1c8d091f293a0176298f91733c7cd71fd05aaa175"
Feb 02 21:23:19 crc kubenswrapper[4789]: I0202 21:23:19.095803 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pnd42"
Feb 02 21:23:19 crc kubenswrapper[4789]: I0202 21:23:19.107528 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/275ff536-c274-436e-89f2-a2c138f9857a-utilities\") pod \"275ff536-c274-436e-89f2-a2c138f9857a\" (UID: \"275ff536-c274-436e-89f2-a2c138f9857a\") "
Feb 02 21:23:19 crc kubenswrapper[4789]: I0202 21:23:19.107597 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7vkf\" (UniqueName: \"kubernetes.io/projected/275ff536-c274-436e-89f2-a2c138f9857a-kube-api-access-g7vkf\") pod \"275ff536-c274-436e-89f2-a2c138f9857a\" (UID: \"275ff536-c274-436e-89f2-a2c138f9857a\") "
Feb 02 21:23:19 crc kubenswrapper[4789]: I0202 21:23:19.107749 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/275ff536-c274-436e-89f2-a2c138f9857a-catalog-content\") pod \"275ff536-c274-436e-89f2-a2c138f9857a\" (UID: \"275ff536-c274-436e-89f2-a2c138f9857a\") "
Feb 02 21:23:19 crc kubenswrapper[4789]: I0202 21:23:19.108917 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/275ff536-c274-436e-89f2-a2c138f9857a-utilities" (OuterVolumeSpecName: "utilities") pod "275ff536-c274-436e-89f2-a2c138f9857a" (UID: "275ff536-c274-436e-89f2-a2c138f9857a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:23:19 crc kubenswrapper[4789]: I0202 21:23:19.113906 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/275ff536-c274-436e-89f2-a2c138f9857a-kube-api-access-g7vkf" (OuterVolumeSpecName: "kube-api-access-g7vkf") pod "275ff536-c274-436e-89f2-a2c138f9857a" (UID: "275ff536-c274-436e-89f2-a2c138f9857a"). InnerVolumeSpecName "kube-api-access-g7vkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:23:19 crc kubenswrapper[4789]: I0202 21:23:19.119916 4789 scope.go:117] "RemoveContainer" containerID="033af14f9a5770b1bf35134d24e39c204b25841b6c4bb538102e0d47b6d71e81" Feb 02 21:23:19 crc kubenswrapper[4789]: I0202 21:23:19.147705 4789 scope.go:117] "RemoveContainer" containerID="233b02ad9b48ca4ea49cdc1f7dd4cecc28c3ecf9b574869823c5aca9ea12ba80" Feb 02 21:23:19 crc kubenswrapper[4789]: I0202 21:23:19.166644 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/275ff536-c274-436e-89f2-a2c138f9857a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "275ff536-c274-436e-89f2-a2c138f9857a" (UID: "275ff536-c274-436e-89f2-a2c138f9857a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:23:19 crc kubenswrapper[4789]: I0202 21:23:19.209635 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/275ff536-c274-436e-89f2-a2c138f9857a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 21:23:19 crc kubenswrapper[4789]: I0202 21:23:19.209688 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/275ff536-c274-436e-89f2-a2c138f9857a-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 21:23:19 crc kubenswrapper[4789]: I0202 21:23:19.209712 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7vkf\" (UniqueName: \"kubernetes.io/projected/275ff536-c274-436e-89f2-a2c138f9857a-kube-api-access-g7vkf\") on node \"crc\" DevicePath \"\"" Feb 02 21:23:19 crc kubenswrapper[4789]: I0202 21:23:19.450863 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pnd42"] Feb 02 21:23:19 crc kubenswrapper[4789]: I0202 21:23:19.458561 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pnd42"] Feb 02 21:23:20 crc kubenswrapper[4789]: I0202 21:23:20.429911 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="275ff536-c274-436e-89f2-a2c138f9857a" path="/var/lib/kubelet/pods/275ff536-c274-436e-89f2-a2c138f9857a/volumes" Feb 02 21:23:20 crc kubenswrapper[4789]: I0202 21:23:20.892605 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-plkrb" Feb 02 21:23:20 crc kubenswrapper[4789]: I0202 21:23:20.893518 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-plkrb" Feb 02 21:23:20 crc kubenswrapper[4789]: I0202 21:23:20.944026 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-plkrb" Feb 02 21:23:21 crc kubenswrapper[4789]: I0202 21:23:21.164914 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-plkrb" Feb 02 21:23:22 crc 
Feb 02 21:23:22 crc kubenswrapper[4789]: I0202 21:23:22.650335 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mv4vf"
Feb 02 21:23:22 crc kubenswrapper[4789]: I0202 21:23:22.650403 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mv4vf"
Feb 02 21:23:22 crc kubenswrapper[4789]: I0202 21:23:22.706298 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mv4vf"
Feb 02 21:23:22 crc kubenswrapper[4789]: I0202 21:23:22.842433 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 21:23:22 crc kubenswrapper[4789]: I0202 21:23:22.842513 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 21:23:22 crc kubenswrapper[4789]: I0202 21:23:22.842576 4789 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn"
Feb 02 21:23:22 crc kubenswrapper[4789]: I0202 21:23:22.843436 4789 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885"} pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 21:23:22 crc kubenswrapper[4789]: I0202 21:23:22.843545 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" containerID="cri-o://b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885" gracePeriod=600
Feb 02 21:23:23 crc kubenswrapper[4789]: I0202 21:23:23.126817 4789 generic.go:334] "Generic (PLEG): container finished" podID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerID="b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885" exitCode=0
Feb 02 21:23:23 crc kubenswrapper[4789]: I0202 21:23:23.126917 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" event={"ID":"bdf018b4-1451-4d37-be6e-05802b67c73e","Type":"ContainerDied","Data":"b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885"}
Feb 02 21:23:23 crc kubenswrapper[4789]: I0202 21:23:23.134267 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bxbt7"
Feb 02 21:23:23 crc kubenswrapper[4789]: I0202 21:23:23.214482 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mv4vf"
Feb 02 21:23:23 crc kubenswrapper[4789]: I0202 21:23:23.215760 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bxbt7"
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bxbt7" Feb 02 21:23:24 crc kubenswrapper[4789]: I0202 21:23:24.052727 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s96wn" Feb 02 21:23:24 crc kubenswrapper[4789]: I0202 21:23:24.053162 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s96wn" Feb 02 21:23:24 crc kubenswrapper[4789]: I0202 21:23:24.111974 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s96wn" Feb 02 21:23:24 crc kubenswrapper[4789]: I0202 21:23:24.135136 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" event={"ID":"bdf018b4-1451-4d37-be6e-05802b67c73e","Type":"ContainerStarted","Data":"6a12ca5c9003220282cb93388ad24eb2dec9d907ff7cf49d91e52f983ba6b208"} Feb 02 21:23:24 crc kubenswrapper[4789]: I0202 21:23:24.135212 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-plkrb" podUID="6a0bac2b-aef3-4313-9184-16e08bc0e572" containerName="registry-server" containerID="cri-o://1a8e903989057f7a9a82739598c50632f493108b4e8ea3236f3bfc7def8f9b50" gracePeriod=2 Feb 02 21:23:24 crc kubenswrapper[4789]: I0202 21:23:24.199340 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s96wn" Feb 02 21:23:24 crc kubenswrapper[4789]: I0202 21:23:24.947668 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bxbt7"] Feb 02 21:23:25 crc kubenswrapper[4789]: I0202 21:23:25.150647 4789 generic.go:334] "Generic (PLEG): container finished" podID="6a0bac2b-aef3-4313-9184-16e08bc0e572" containerID="1a8e903989057f7a9a82739598c50632f493108b4e8ea3236f3bfc7def8f9b50" exitCode=0 Feb 02 21:23:25 crc kubenswrapper[4789]: I0202 21:23:25.150725 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-plkrb" event={"ID":"6a0bac2b-aef3-4313-9184-16e08bc0e572","Type":"ContainerDied","Data":"1a8e903989057f7a9a82739598c50632f493108b4e8ea3236f3bfc7def8f9b50"} Feb 02 21:23:25 crc kubenswrapper[4789]: I0202 21:23:25.151372 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bxbt7" podUID="564193d6-a25b-478d-8957-54183764c6d7" containerName="registry-server" containerID="cri-o://a5d2145cdae073d825441cad9b35721d68b259b9cd3e53ab1927e197ef900782" gracePeriod=2 Feb 02 21:23:25 crc kubenswrapper[4789]: I0202 21:23:25.282943 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-plkrb" Feb 02 21:23:25 crc kubenswrapper[4789]: I0202 21:23:25.394276 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xk4bk\" (UniqueName: \"kubernetes.io/projected/6a0bac2b-aef3-4313-9184-16e08bc0e572-kube-api-access-xk4bk\") pod \"6a0bac2b-aef3-4313-9184-16e08bc0e572\" (UID: \"6a0bac2b-aef3-4313-9184-16e08bc0e572\") " Feb 02 21:23:25 crc kubenswrapper[4789]: I0202 21:23:25.394521 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a0bac2b-aef3-4313-9184-16e08bc0e572-catalog-content\") pod \"6a0bac2b-aef3-4313-9184-16e08bc0e572\" (UID: \"6a0bac2b-aef3-4313-9184-16e08bc0e572\") " Feb 02 21:23:25 crc kubenswrapper[4789]: I0202 21:23:25.394666 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a0bac2b-aef3-4313-9184-16e08bc0e572-utilities\") pod \"6a0bac2b-aef3-4313-9184-16e08bc0e572\" (UID: \"6a0bac2b-aef3-4313-9184-16e08bc0e572\") " Feb 02 21:23:25 crc kubenswrapper[4789]: I0202 21:23:25.396228 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a0bac2b-aef3-4313-9184-16e08bc0e572-utilities" (OuterVolumeSpecName: "utilities") pod "6a0bac2b-aef3-4313-9184-16e08bc0e572" (UID: "6a0bac2b-aef3-4313-9184-16e08bc0e572"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:23:25 crc kubenswrapper[4789]: I0202 21:23:25.401053 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a0bac2b-aef3-4313-9184-16e08bc0e572-kube-api-access-xk4bk" (OuterVolumeSpecName: "kube-api-access-xk4bk") pod "6a0bac2b-aef3-4313-9184-16e08bc0e572" (UID: "6a0bac2b-aef3-4313-9184-16e08bc0e572"). InnerVolumeSpecName "kube-api-access-xk4bk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:23:25 crc kubenswrapper[4789]: I0202 21:23:25.463677 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a0bac2b-aef3-4313-9184-16e08bc0e572-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6a0bac2b-aef3-4313-9184-16e08bc0e572" (UID: "6a0bac2b-aef3-4313-9184-16e08bc0e572"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:23:25 crc kubenswrapper[4789]: I0202 21:23:25.495930 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xk4bk\" (UniqueName: \"kubernetes.io/projected/6a0bac2b-aef3-4313-9184-16e08bc0e572-kube-api-access-xk4bk\") on node \"crc\" DevicePath \"\"" Feb 02 21:23:25 crc kubenswrapper[4789]: I0202 21:23:25.495967 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a0bac2b-aef3-4313-9184-16e08bc0e572-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 21:23:25 crc kubenswrapper[4789]: I0202 21:23:25.495982 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a0bac2b-aef3-4313-9184-16e08bc0e572-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 21:23:25 crc kubenswrapper[4789]: I0202 21:23:25.581731 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bxbt7" Feb 02 21:23:25 crc kubenswrapper[4789]: I0202 21:23:25.597519 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8h9rt\" (UniqueName: \"kubernetes.io/projected/564193d6-a25b-478d-8957-54183764c6d7-kube-api-access-8h9rt\") pod \"564193d6-a25b-478d-8957-54183764c6d7\" (UID: \"564193d6-a25b-478d-8957-54183764c6d7\") " Feb 02 21:23:25 crc kubenswrapper[4789]: I0202 21:23:25.597697 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/564193d6-a25b-478d-8957-54183764c6d7-catalog-content\") pod \"564193d6-a25b-478d-8957-54183764c6d7\" (UID: \"564193d6-a25b-478d-8957-54183764c6d7\") " Feb 02 21:23:25 crc kubenswrapper[4789]: I0202 21:23:25.597848 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/564193d6-a25b-478d-8957-54183764c6d7-utilities\") pod \"564193d6-a25b-478d-8957-54183764c6d7\" (UID: \"564193d6-a25b-478d-8957-54183764c6d7\") " Feb 02 21:23:25 crc kubenswrapper[4789]: I0202 21:23:25.599397 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/564193d6-a25b-478d-8957-54183764c6d7-utilities" (OuterVolumeSpecName: "utilities") pod "564193d6-a25b-478d-8957-54183764c6d7" (UID: "564193d6-a25b-478d-8957-54183764c6d7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:23:25 crc kubenswrapper[4789]: I0202 21:23:25.603772 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/564193d6-a25b-478d-8957-54183764c6d7-kube-api-access-8h9rt" (OuterVolumeSpecName: "kube-api-access-8h9rt") pod "564193d6-a25b-478d-8957-54183764c6d7" (UID: "564193d6-a25b-478d-8957-54183764c6d7"). InnerVolumeSpecName "kube-api-access-8h9rt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:23:25 crc kubenswrapper[4789]: I0202 21:23:25.641219 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/564193d6-a25b-478d-8957-54183764c6d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "564193d6-a25b-478d-8957-54183764c6d7" (UID: "564193d6-a25b-478d-8957-54183764c6d7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:23:25 crc kubenswrapper[4789]: I0202 21:23:25.700220 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/564193d6-a25b-478d-8957-54183764c6d7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 21:23:25 crc kubenswrapper[4789]: I0202 21:23:25.700274 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/564193d6-a25b-478d-8957-54183764c6d7-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 21:23:25 crc kubenswrapper[4789]: I0202 21:23:25.700294 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8h9rt\" (UniqueName: \"kubernetes.io/projected/564193d6-a25b-478d-8957-54183764c6d7-kube-api-access-8h9rt\") on node \"crc\" DevicePath \"\"" Feb 02 21:23:26 crc kubenswrapper[4789]: I0202 21:23:26.160015 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-plkrb" event={"ID":"6a0bac2b-aef3-4313-9184-16e08bc0e572","Type":"ContainerDied","Data":"4dd4d33818a41df74b4ded52630d616776f69de22da4cd8d4bd328272ad08ea6"} Feb 02 21:23:26 crc kubenswrapper[4789]: I0202 21:23:26.160114 4789 scope.go:117] "RemoveContainer" containerID="1a8e903989057f7a9a82739598c50632f493108b4e8ea3236f3bfc7def8f9b50" Feb 02 21:23:26 crc kubenswrapper[4789]: I0202 21:23:26.160056 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-plkrb" Feb 02 21:23:26 crc kubenswrapper[4789]: I0202 21:23:26.163841 4789 generic.go:334] "Generic (PLEG): container finished" podID="564193d6-a25b-478d-8957-54183764c6d7" containerID="a5d2145cdae073d825441cad9b35721d68b259b9cd3e53ab1927e197ef900782" exitCode=0 Feb 02 21:23:26 crc kubenswrapper[4789]: I0202 21:23:26.163878 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bxbt7" event={"ID":"564193d6-a25b-478d-8957-54183764c6d7","Type":"ContainerDied","Data":"a5d2145cdae073d825441cad9b35721d68b259b9cd3e53ab1927e197ef900782"} Feb 02 21:23:26 crc kubenswrapper[4789]: I0202 21:23:26.163904 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bxbt7" event={"ID":"564193d6-a25b-478d-8957-54183764c6d7","Type":"ContainerDied","Data":"792c31f09013898876009a97025591e1596b900b21c8ed72ab7c3a53b3034156"} Feb 02 21:23:26 crc kubenswrapper[4789]: I0202 21:23:26.163991 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bxbt7" Feb 02 21:23:26 crc kubenswrapper[4789]: I0202 21:23:26.182195 4789 scope.go:117] "RemoveContainer" containerID="590ed2a89e21dae5188718f16074675d5af68c13149ef52a83582cd7239858eb" Feb 02 21:23:26 crc kubenswrapper[4789]: I0202 21:23:26.196882 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-plkrb"] Feb 02 21:23:26 crc kubenswrapper[4789]: I0202 21:23:26.200553 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-plkrb"] Feb 02 21:23:26 crc kubenswrapper[4789]: I0202 21:23:26.209971 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bxbt7"] Feb 02 21:23:26 crc kubenswrapper[4789]: I0202 21:23:26.213349 4789 scope.go:117] "RemoveContainer" containerID="a4c04810899e44a97ca0f7eace2668519fa19114ff170a61b3f7620bc94541a8" Feb 02 21:23:26 crc kubenswrapper[4789]: I0202 21:23:26.215865 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bxbt7"] Feb 02 21:23:26 crc kubenswrapper[4789]: I0202 21:23:26.236672 4789 scope.go:117] "RemoveContainer" containerID="a5d2145cdae073d825441cad9b35721d68b259b9cd3e53ab1927e197ef900782" Feb 02 21:23:26 crc kubenswrapper[4789]: I0202 21:23:26.259387 4789 scope.go:117] "RemoveContainer" containerID="bb080bd01cbeb2ab4fe00b0b7a31819672e0040f03cf8a10a0e3db7194f1f325" Feb 02 21:23:26 crc kubenswrapper[4789]: I0202 21:23:26.275518 4789 scope.go:117] "RemoveContainer" containerID="fb33dd91e892706094be7c996e3f139340d946e243f38261286fda06b1d389e7" Feb 02 21:23:26 crc kubenswrapper[4789]: I0202 21:23:26.293189 4789 scope.go:117] "RemoveContainer" containerID="a5d2145cdae073d825441cad9b35721d68b259b9cd3e53ab1927e197ef900782" Feb 02 21:23:26 crc kubenswrapper[4789]: E0202 21:23:26.293692 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5d2145cdae073d825441cad9b35721d68b259b9cd3e53ab1927e197ef900782\": container with ID starting with a5d2145cdae073d825441cad9b35721d68b259b9cd3e53ab1927e197ef900782 not found: ID does not exist" containerID="a5d2145cdae073d825441cad9b35721d68b259b9cd3e53ab1927e197ef900782" Feb 02 21:23:26 crc kubenswrapper[4789]: I0202 21:23:26.293729 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5d2145cdae073d825441cad9b35721d68b259b9cd3e53ab1927e197ef900782"} err="failed to get container status \"a5d2145cdae073d825441cad9b35721d68b259b9cd3e53ab1927e197ef900782\": rpc error: code = NotFound desc = could not find container \"a5d2145cdae073d825441cad9b35721d68b259b9cd3e53ab1927e197ef900782\": container with ID starting with a5d2145cdae073d825441cad9b35721d68b259b9cd3e53ab1927e197ef900782 not found: ID does not exist" Feb 02 21:23:26 crc kubenswrapper[4789]: I0202 21:23:26.293757 4789 scope.go:117] "RemoveContainer" containerID="bb080bd01cbeb2ab4fe00b0b7a31819672e0040f03cf8a10a0e3db7194f1f325" Feb 02 21:23:26 crc kubenswrapper[4789]: E0202 21:23:26.294136 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb080bd01cbeb2ab4fe00b0b7a31819672e0040f03cf8a10a0e3db7194f1f325\": container with ID starting with bb080bd01cbeb2ab4fe00b0b7a31819672e0040f03cf8a10a0e3db7194f1f325 not found: ID does not exist" containerID="bb080bd01cbeb2ab4fe00b0b7a31819672e0040f03cf8a10a0e3db7194f1f325" Feb 02 
Feb 02 21:23:26 crc kubenswrapper[4789]: I0202 21:23:26.294178 4789 scope.go:117] "RemoveContainer" containerID="fb33dd91e892706094be7c996e3f139340d946e243f38261286fda06b1d389e7"
Feb 02 21:23:26 crc kubenswrapper[4789]: E0202 21:23:26.294471 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb33dd91e892706094be7c996e3f139340d946e243f38261286fda06b1d389e7\": container with ID starting with fb33dd91e892706094be7c996e3f139340d946e243f38261286fda06b1d389e7 not found: ID does not exist" containerID="fb33dd91e892706094be7c996e3f139340d946e243f38261286fda06b1d389e7"
Feb 02 21:23:26 crc kubenswrapper[4789]: I0202 21:23:26.294539 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb33dd91e892706094be7c996e3f139340d946e243f38261286fda06b1d389e7"} err="failed to get container status \"fb33dd91e892706094be7c996e3f139340d946e243f38261286fda06b1d389e7\": rpc error: code = NotFound desc = could not find container \"fb33dd91e892706094be7c996e3f139340d946e243f38261286fda06b1d389e7\": container with ID starting with fb33dd91e892706094be7c996e3f139340d946e243f38261286fda06b1d389e7 not found: ID does not exist"
Feb 02 21:23:26 crc kubenswrapper[4789]: I0202 21:23:26.426167 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="564193d6-a25b-478d-8957-54183764c6d7" path="/var/lib/kubelet/pods/564193d6-a25b-478d-8957-54183764c6d7/volumes"
Feb 02 21:23:26 crc kubenswrapper[4789]: I0202 21:23:26.426747 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a0bac2b-aef3-4313-9184-16e08bc0e572" path="/var/lib/kubelet/pods/6a0bac2b-aef3-4313-9184-16e08bc0e572/volumes"
Feb 02 21:23:27 crc kubenswrapper[4789]: I0202 21:23:27.342284 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s96wn"]
Feb 02 21:23:27 crc kubenswrapper[4789]: I0202 21:23:27.342500 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s96wn" podUID="59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5" containerName="registry-server" containerID="cri-o://b69c4deea2e7d8dae7091127faadbfcee7b99486dcc60c3a7817a96bc2ec9691" gracePeriod=2
Feb 02 21:23:27 crc kubenswrapper[4789]: I0202 21:23:27.760150 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s96wn"
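The "ContainerStatus from runtime service failed ... NotFound" errors above are benign: the deletion retries race with the runtime's own cleanup, so by the time the kubelet asks for status the container is already gone, which is the desired end state. A minimal Go sketch of that idempotent-delete check against a CRI-style client; the runtimeClient interface and removeIfPresent helper are illustrative, while status.Code and codes.NotFound are the real grpc-go API surfaced in the log's "rpc error: code = NotFound":

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// runtimeClient is a stand-in for the CRI runtime service.
type runtimeClient interface {
	ContainerStatus(id string) error
	RemoveContainer(id string) error
}

// removeIfPresent treats a NotFound status as success: the container
// is already gone, so there is nothing left to delete.
func removeIfPresent(rt runtimeClient, id string) error {
	if err := rt.ContainerStatus(id); err != nil {
		if status.Code(err) == codes.NotFound {
			fmt.Printf("container %q already removed\n", id)
			return nil
		}
		return err
	}
	return rt.RemoveContainer(id)
}

// fakeRuntime reproduces the NotFound answer seen in the log.
type fakeRuntime struct{}

func (fakeRuntime) ContainerStatus(id string) error {
	return status.Error(codes.NotFound, "could not find container "+id)
}
func (fakeRuntime) RemoveContainer(id string) error { return nil }

func main() {
	fmt.Println(removeIfPresent(fakeRuntime{}, "a5d2145c"))
}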
Need to start a new one" pod="openshift-marketplace/redhat-operators-s96wn" Feb 02 21:23:27 crc kubenswrapper[4789]: I0202 21:23:27.936032 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5-utilities\") pod \"59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5\" (UID: \"59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5\") " Feb 02 21:23:27 crc kubenswrapper[4789]: I0202 21:23:27.936182 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5-catalog-content\") pod \"59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5\" (UID: \"59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5\") " Feb 02 21:23:27 crc kubenswrapper[4789]: I0202 21:23:27.936226 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjh2r\" (UniqueName: \"kubernetes.io/projected/59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5-kube-api-access-cjh2r\") pod \"59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5\" (UID: \"59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5\") " Feb 02 21:23:27 crc kubenswrapper[4789]: I0202 21:23:27.937969 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5-utilities" (OuterVolumeSpecName: "utilities") pod "59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5" (UID: "59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:23:27 crc kubenswrapper[4789]: I0202 21:23:27.940923 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5-kube-api-access-cjh2r" (OuterVolumeSpecName: "kube-api-access-cjh2r") pod "59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5" (UID: "59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5"). InnerVolumeSpecName "kube-api-access-cjh2r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:23:28 crc kubenswrapper[4789]: I0202 21:23:28.037444 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjh2r\" (UniqueName: \"kubernetes.io/projected/59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5-kube-api-access-cjh2r\") on node \"crc\" DevicePath \"\"" Feb 02 21:23:28 crc kubenswrapper[4789]: I0202 21:23:28.037481 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 21:23:28 crc kubenswrapper[4789]: I0202 21:23:28.048388 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5" (UID: "59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:23:28 crc kubenswrapper[4789]: I0202 21:23:28.139132 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 21:23:28 crc kubenswrapper[4789]: I0202 21:23:28.176929 4789 generic.go:334] "Generic (PLEG): container finished" podID="59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5" containerID="b69c4deea2e7d8dae7091127faadbfcee7b99486dcc60c3a7817a96bc2ec9691" exitCode=0 Feb 02 21:23:28 crc kubenswrapper[4789]: I0202 21:23:28.176981 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s96wn" event={"ID":"59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5","Type":"ContainerDied","Data":"b69c4deea2e7d8dae7091127faadbfcee7b99486dcc60c3a7817a96bc2ec9691"} Feb 02 21:23:28 crc kubenswrapper[4789]: I0202 21:23:28.177041 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s96wn" Feb 02 21:23:28 crc kubenswrapper[4789]: I0202 21:23:28.177058 4789 scope.go:117] "RemoveContainer" containerID="b69c4deea2e7d8dae7091127faadbfcee7b99486dcc60c3a7817a96bc2ec9691" Feb 02 21:23:28 crc kubenswrapper[4789]: I0202 21:23:28.177041 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s96wn" event={"ID":"59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5","Type":"ContainerDied","Data":"a4ae250cef40aa3a3b4063417e9d167076a5a4657b66d5c9b599c8b2bd146455"} Feb 02 21:23:28 crc kubenswrapper[4789]: I0202 21:23:28.193148 4789 scope.go:117] "RemoveContainer" containerID="2b699fd1c3ae25c688f54fd2e401e73695fe89f22e4f546a9160da11cc2233bb" Feb 02 21:23:28 crc kubenswrapper[4789]: I0202 21:23:28.209533 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s96wn"] Feb 02 21:23:28 crc kubenswrapper[4789]: I0202 21:23:28.209848 4789 scope.go:117] "RemoveContainer" containerID="3d40b3a942f7b7975ce73830f98892a351c35d284d4e253c75cb08052db9c78d" Feb 02 21:23:28 crc kubenswrapper[4789]: I0202 21:23:28.212639 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s96wn"] Feb 02 21:23:28 crc kubenswrapper[4789]: I0202 21:23:28.242930 4789 scope.go:117] "RemoveContainer" containerID="b69c4deea2e7d8dae7091127faadbfcee7b99486dcc60c3a7817a96bc2ec9691" Feb 02 21:23:28 crc kubenswrapper[4789]: E0202 21:23:28.243461 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b69c4deea2e7d8dae7091127faadbfcee7b99486dcc60c3a7817a96bc2ec9691\": container with ID starting with b69c4deea2e7d8dae7091127faadbfcee7b99486dcc60c3a7817a96bc2ec9691 not found: ID does not exist" containerID="b69c4deea2e7d8dae7091127faadbfcee7b99486dcc60c3a7817a96bc2ec9691" Feb 02 21:23:28 crc kubenswrapper[4789]: I0202 21:23:28.243504 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b69c4deea2e7d8dae7091127faadbfcee7b99486dcc60c3a7817a96bc2ec9691"} err="failed to get container status \"b69c4deea2e7d8dae7091127faadbfcee7b99486dcc60c3a7817a96bc2ec9691\": rpc error: code = NotFound desc = could not find container \"b69c4deea2e7d8dae7091127faadbfcee7b99486dcc60c3a7817a96bc2ec9691\": container with ID starting with b69c4deea2e7d8dae7091127faadbfcee7b99486dcc60c3a7817a96bc2ec9691 not found: ID does not exist" Feb 02 21:23:28 crc 
Feb 02 21:23:28 crc kubenswrapper[4789]: I0202 21:23:28.243532 4789 scope.go:117] "RemoveContainer" containerID="2b699fd1c3ae25c688f54fd2e401e73695fe89f22e4f546a9160da11cc2233bb"
Feb 02 21:23:28 crc kubenswrapper[4789]: E0202 21:23:28.243982 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b699fd1c3ae25c688f54fd2e401e73695fe89f22e4f546a9160da11cc2233bb\": container with ID starting with 2b699fd1c3ae25c688f54fd2e401e73695fe89f22e4f546a9160da11cc2233bb not found: ID does not exist" containerID="2b699fd1c3ae25c688f54fd2e401e73695fe89f22e4f546a9160da11cc2233bb"
Feb 02 21:23:28 crc kubenswrapper[4789]: I0202 21:23:28.244046 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b699fd1c3ae25c688f54fd2e401e73695fe89f22e4f546a9160da11cc2233bb"} err="failed to get container status \"2b699fd1c3ae25c688f54fd2e401e73695fe89f22e4f546a9160da11cc2233bb\": rpc error: code = NotFound desc = could not find container \"2b699fd1c3ae25c688f54fd2e401e73695fe89f22e4f546a9160da11cc2233bb\": container with ID starting with 2b699fd1c3ae25c688f54fd2e401e73695fe89f22e4f546a9160da11cc2233bb not found: ID does not exist"
Feb 02 21:23:28 crc kubenswrapper[4789]: I0202 21:23:28.244084 4789 scope.go:117] "RemoveContainer" containerID="3d40b3a942f7b7975ce73830f98892a351c35d284d4e253c75cb08052db9c78d"
Feb 02 21:23:28 crc kubenswrapper[4789]: E0202 21:23:28.244456 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d40b3a942f7b7975ce73830f98892a351c35d284d4e253c75cb08052db9c78d\": container with ID starting with 3d40b3a942f7b7975ce73830f98892a351c35d284d4e253c75cb08052db9c78d not found: ID does not exist" containerID="3d40b3a942f7b7975ce73830f98892a351c35d284d4e253c75cb08052db9c78d"
Feb 02 21:23:28 crc kubenswrapper[4789]: I0202 21:23:28.244490 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d40b3a942f7b7975ce73830f98892a351c35d284d4e253c75cb08052db9c78d"} err="failed to get container status \"3d40b3a942f7b7975ce73830f98892a351c35d284d4e253c75cb08052db9c78d\": rpc error: code = NotFound desc = could not find container \"3d40b3a942f7b7975ce73830f98892a351c35d284d4e253c75cb08052db9c78d\": container with ID starting with 3d40b3a942f7b7975ce73830f98892a351c35d284d4e253c75cb08052db9c78d not found: ID does not exist"
Feb 02 21:23:28 crc kubenswrapper[4789]: I0202 21:23:28.429974 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5" path="/var/lib/kubelet/pods/59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5/volumes"
Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.340744 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p" podUID="a2edcffa-d93c-4125-863d-05812a4ff79a" containerName="oauth-openshift" containerID="cri-o://ec8a9168c6f25216a41a23bc5e03444a15486e81cfdec6bf31a3133c82ba0e72" gracePeriod=15
Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.745637 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p"
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p" Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.851547 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-user-template-provider-selection\") pod \"a2edcffa-d93c-4125-863d-05812a4ff79a\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.851673 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-router-certs\") pod \"a2edcffa-d93c-4125-863d-05812a4ff79a\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.851723 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-cliconfig\") pod \"a2edcffa-d93c-4125-863d-05812a4ff79a\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.851752 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a2edcffa-d93c-4125-863d-05812a4ff79a-audit-dir\") pod \"a2edcffa-d93c-4125-863d-05812a4ff79a\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.851783 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-user-idp-0-file-data\") pod \"a2edcffa-d93c-4125-863d-05812a4ff79a\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.851837 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-session\") pod \"a2edcffa-d93c-4125-863d-05812a4ff79a\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.851870 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-serving-cert\") pod \"a2edcffa-d93c-4125-863d-05812a4ff79a\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.851907 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-service-ca\") pod \"a2edcffa-d93c-4125-863d-05812a4ff79a\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.851893 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2edcffa-d93c-4125-863d-05812a4ff79a-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "a2edcffa-d93c-4125-863d-05812a4ff79a" (UID: "a2edcffa-d93c-4125-863d-05812a4ff79a"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.851939 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-user-template-login\") pod \"a2edcffa-d93c-4125-863d-05812a4ff79a\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.851971 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-trusted-ca-bundle\") pod \"a2edcffa-d93c-4125-863d-05812a4ff79a\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.852033 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a2edcffa-d93c-4125-863d-05812a4ff79a-audit-policies\") pod \"a2edcffa-d93c-4125-863d-05812a4ff79a\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.852065 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-ocp-branding-template\") pod \"a2edcffa-d93c-4125-863d-05812a4ff79a\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.852135 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vf9z\" (UniqueName: \"kubernetes.io/projected/a2edcffa-d93c-4125-863d-05812a4ff79a-kube-api-access-2vf9z\") pod \"a2edcffa-d93c-4125-863d-05812a4ff79a\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.852165 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-user-template-error\") pod \"a2edcffa-d93c-4125-863d-05812a4ff79a\" (UID: \"a2edcffa-d93c-4125-863d-05812a4ff79a\") " Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.852458 4789 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a2edcffa-d93c-4125-863d-05812a4ff79a-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.853453 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "a2edcffa-d93c-4125-863d-05812a4ff79a" (UID: "a2edcffa-d93c-4125-863d-05812a4ff79a"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.853524 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "a2edcffa-d93c-4125-863d-05812a4ff79a" (UID: "a2edcffa-d93c-4125-863d-05812a4ff79a"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.853839 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2edcffa-d93c-4125-863d-05812a4ff79a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "a2edcffa-d93c-4125-863d-05812a4ff79a" (UID: "a2edcffa-d93c-4125-863d-05812a4ff79a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.854214 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "a2edcffa-d93c-4125-863d-05812a4ff79a" (UID: "a2edcffa-d93c-4125-863d-05812a4ff79a"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.858759 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "a2edcffa-d93c-4125-863d-05812a4ff79a" (UID: "a2edcffa-d93c-4125-863d-05812a4ff79a"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.859263 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "a2edcffa-d93c-4125-863d-05812a4ff79a" (UID: "a2edcffa-d93c-4125-863d-05812a4ff79a"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.866224 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "a2edcffa-d93c-4125-863d-05812a4ff79a" (UID: "a2edcffa-d93c-4125-863d-05812a4ff79a"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.866302 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2edcffa-d93c-4125-863d-05812a4ff79a-kube-api-access-2vf9z" (OuterVolumeSpecName: "kube-api-access-2vf9z") pod "a2edcffa-d93c-4125-863d-05812a4ff79a" (UID: "a2edcffa-d93c-4125-863d-05812a4ff79a"). InnerVolumeSpecName "kube-api-access-2vf9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.867763 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "a2edcffa-d93c-4125-863d-05812a4ff79a" (UID: "a2edcffa-d93c-4125-863d-05812a4ff79a"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.868253 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "a2edcffa-d93c-4125-863d-05812a4ff79a" (UID: "a2edcffa-d93c-4125-863d-05812a4ff79a"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.868968 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "a2edcffa-d93c-4125-863d-05812a4ff79a" (UID: "a2edcffa-d93c-4125-863d-05812a4ff79a"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.869021 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "a2edcffa-d93c-4125-863d-05812a4ff79a" (UID: "a2edcffa-d93c-4125-863d-05812a4ff79a"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.869302 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "a2edcffa-d93c-4125-863d-05812a4ff79a" (UID: "a2edcffa-d93c-4125-863d-05812a4ff79a"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.954393 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vf9z\" (UniqueName: \"kubernetes.io/projected/a2edcffa-d93c-4125-863d-05812a4ff79a-kube-api-access-2vf9z\") on node \"crc\" DevicePath \"\"" Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.954449 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.954473 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.954497 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.954519 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.954539 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.954556 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.954575 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.954621 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.954640 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.954658 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.954678 4789 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/a2edcffa-d93c-4125-863d-05812a4ff79a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 21:23:36 crc kubenswrapper[4789]: I0202 21:23:36.954695 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a2edcffa-d93c-4125-863d-05812a4ff79a-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.234338 4789 generic.go:334] "Generic (PLEG): container finished" podID="a2edcffa-d93c-4125-863d-05812a4ff79a" containerID="ec8a9168c6f25216a41a23bc5e03444a15486e81cfdec6bf31a3133c82ba0e72" exitCode=0 Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.234395 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p" event={"ID":"a2edcffa-d93c-4125-863d-05812a4ff79a","Type":"ContainerDied","Data":"ec8a9168c6f25216a41a23bc5e03444a15486e81cfdec6bf31a3133c82ba0e72"} Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.234436 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p" event={"ID":"a2edcffa-d93c-4125-863d-05812a4ff79a","Type":"ContainerDied","Data":"3eb6eca71866a76b44568a7e75b9bd8edd4de7a3a0214003eaf3708473118b3a"} Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.234453 4789 scope.go:117] "RemoveContainer" containerID="ec8a9168c6f25216a41a23bc5e03444a15486e81cfdec6bf31a3133c82ba0e72" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.234614 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-zfv5p" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.267797 4789 scope.go:117] "RemoveContainer" containerID="ec8a9168c6f25216a41a23bc5e03444a15486e81cfdec6bf31a3133c82ba0e72" Feb 02 21:23:37 crc kubenswrapper[4789]: E0202 21:23:37.268533 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec8a9168c6f25216a41a23bc5e03444a15486e81cfdec6bf31a3133c82ba0e72\": container with ID starting with ec8a9168c6f25216a41a23bc5e03444a15486e81cfdec6bf31a3133c82ba0e72 not found: ID does not exist" containerID="ec8a9168c6f25216a41a23bc5e03444a15486e81cfdec6bf31a3133c82ba0e72" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.268599 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec8a9168c6f25216a41a23bc5e03444a15486e81cfdec6bf31a3133c82ba0e72"} err="failed to get container status \"ec8a9168c6f25216a41a23bc5e03444a15486e81cfdec6bf31a3133c82ba0e72\": rpc error: code = NotFound desc = could not find container \"ec8a9168c6f25216a41a23bc5e03444a15486e81cfdec6bf31a3133c82ba0e72\": container with ID starting with ec8a9168c6f25216a41a23bc5e03444a15486e81cfdec6bf31a3133c82ba0e72 not found: ID does not exist" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.287149 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-zfv5p"] Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.295101 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-zfv5p"] Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.843317 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk"] Feb 02 21:23:37 crc 
Feb 02 21:23:37 crc kubenswrapper[4789]: E0202 21:23:37.843859 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a0bac2b-aef3-4313-9184-16e08bc0e572" containerName="extract-content"
Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.843891 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a0bac2b-aef3-4313-9184-16e08bc0e572" containerName="extract-content"
Feb 02 21:23:37 crc kubenswrapper[4789]: E0202 21:23:37.843915 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="275ff536-c274-436e-89f2-a2c138f9857a" containerName="registry-server"
Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.843931 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="275ff536-c274-436e-89f2-a2c138f9857a" containerName="registry-server"
Feb 02 21:23:37 crc kubenswrapper[4789]: E0202 21:23:37.843958 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5" containerName="extract-content"
Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.843974 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5" containerName="extract-content"
Feb 02 21:23:37 crc kubenswrapper[4789]: E0202 21:23:37.843999 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="564193d6-a25b-478d-8957-54183764c6d7" containerName="registry-server"
Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.844015 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="564193d6-a25b-478d-8957-54183764c6d7" containerName="registry-server"
Feb 02 21:23:37 crc kubenswrapper[4789]: E0202 21:23:37.844036 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="275ff536-c274-436e-89f2-a2c138f9857a" containerName="extract-content"
Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.844053 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="275ff536-c274-436e-89f2-a2c138f9857a" containerName="extract-content"
Feb 02 21:23:37 crc kubenswrapper[4789]: E0202 21:23:37.844080 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="275ff536-c274-436e-89f2-a2c138f9857a" containerName="extract-utilities"
Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.844096 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="275ff536-c274-436e-89f2-a2c138f9857a" containerName="extract-utilities"
Feb 02 21:23:37 crc kubenswrapper[4789]: E0202 21:23:37.844119 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5" containerName="extract-utilities"
Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.844136 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5" containerName="extract-utilities"
Feb 02 21:23:37 crc kubenswrapper[4789]: E0202 21:23:37.844156 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a0bac2b-aef3-4313-9184-16e08bc0e572" containerName="registry-server"
Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.844171 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a0bac2b-aef3-4313-9184-16e08bc0e572" containerName="registry-server"
Feb 02 21:23:37 crc kubenswrapper[4789]: E0202 21:23:37.844197 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a0bac2b-aef3-4313-9184-16e08bc0e572" containerName="extract-utilities"
Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.844213 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a0bac2b-aef3-4313-9184-16e08bc0e572" containerName="extract-utilities"
containerName="extract-utilities" Feb 02 21:23:37 crc kubenswrapper[4789]: E0202 21:23:37.844240 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="564193d6-a25b-478d-8957-54183764c6d7" containerName="extract-utilities" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.844255 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="564193d6-a25b-478d-8957-54183764c6d7" containerName="extract-utilities" Feb 02 21:23:37 crc kubenswrapper[4789]: E0202 21:23:37.844273 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="564193d6-a25b-478d-8957-54183764c6d7" containerName="extract-content" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.844291 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="564193d6-a25b-478d-8957-54183764c6d7" containerName="extract-content" Feb 02 21:23:37 crc kubenswrapper[4789]: E0202 21:23:37.844309 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5" containerName="registry-server" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.844324 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5" containerName="registry-server" Feb 02 21:23:37 crc kubenswrapper[4789]: E0202 21:23:37.844348 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2edcffa-d93c-4125-863d-05812a4ff79a" containerName="oauth-openshift" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.844362 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2edcffa-d93c-4125-863d-05812a4ff79a" containerName="oauth-openshift" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.844626 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2edcffa-d93c-4125-863d-05812a4ff79a" containerName="oauth-openshift" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.844659 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="275ff536-c274-436e-89f2-a2c138f9857a" containerName="registry-server" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.844680 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="59f183d3-7fe9-4f2d-bd43-02a2ed7eafc5" containerName="registry-server" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.844714 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a0bac2b-aef3-4313-9184-16e08bc0e572" containerName="registry-server" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.844744 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="564193d6-a25b-478d-8957-54183764c6d7" containerName="registry-server" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.845515 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.849886 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.854639 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.854721 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.854849 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.859158 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.859542 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.859839 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.861968 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.862344 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.864387 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.865376 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.865830 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.866768 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.870482 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk"] Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.873715 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.884573 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.967716 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb425\" (UniqueName: \"kubernetes.io/projected/b6fa490e-3720-4f7c-b87e-aae664997d28-kube-api-access-qb425\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " 
pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.967764 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b6fa490e-3720-4f7c-b87e-aae664997d28-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.967793 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b6fa490e-3720-4f7c-b87e-aae664997d28-v4-0-config-user-template-login\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.967811 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b6fa490e-3720-4f7c-b87e-aae664997d28-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.967829 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b6fa490e-3720-4f7c-b87e-aae664997d28-v4-0-config-user-template-error\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.967851 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b6fa490e-3720-4f7c-b87e-aae664997d28-audit-dir\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.967868 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b6fa490e-3720-4f7c-b87e-aae664997d28-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.967890 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b6fa490e-3720-4f7c-b87e-aae664997d28-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.968080 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/b6fa490e-3720-4f7c-b87e-aae664997d28-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.968142 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b6fa490e-3720-4f7c-b87e-aae664997d28-v4-0-config-system-session\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.968182 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b6fa490e-3720-4f7c-b87e-aae664997d28-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.968235 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b6fa490e-3720-4f7c-b87e-aae664997d28-audit-policies\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.968268 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6fa490e-3720-4f7c-b87e-aae664997d28-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:37 crc kubenswrapper[4789]: I0202 21:23:37.968406 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b6fa490e-3720-4f7c-b87e-aae664997d28-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:38 crc kubenswrapper[4789]: I0202 21:23:38.069833 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb425\" (UniqueName: \"kubernetes.io/projected/b6fa490e-3720-4f7c-b87e-aae664997d28-kube-api-access-qb425\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:38 crc kubenswrapper[4789]: I0202 21:23:38.069929 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b6fa490e-3720-4f7c-b87e-aae664997d28-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:38 crc kubenswrapper[4789]: I0202 21:23:38.070005 4789 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b6fa490e-3720-4f7c-b87e-aae664997d28-v4-0-config-user-template-login\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:38 crc kubenswrapper[4789]: I0202 21:23:38.070077 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b6fa490e-3720-4f7c-b87e-aae664997d28-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:38 crc kubenswrapper[4789]: I0202 21:23:38.070143 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b6fa490e-3720-4f7c-b87e-aae664997d28-v4-0-config-user-template-error\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:38 crc kubenswrapper[4789]: I0202 21:23:38.070203 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b6fa490e-3720-4f7c-b87e-aae664997d28-audit-dir\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:38 crc kubenswrapper[4789]: I0202 21:23:38.070261 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b6fa490e-3720-4f7c-b87e-aae664997d28-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:38 crc kubenswrapper[4789]: I0202 21:23:38.070323 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b6fa490e-3720-4f7c-b87e-aae664997d28-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:38 crc kubenswrapper[4789]: I0202 21:23:38.070384 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b6fa490e-3720-4f7c-b87e-aae664997d28-audit-dir\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:38 crc kubenswrapper[4789]: I0202 21:23:38.070394 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b6fa490e-3720-4f7c-b87e-aae664997d28-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:38 crc kubenswrapper[4789]: I0202 21:23:38.070471 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/b6fa490e-3720-4f7c-b87e-aae664997d28-v4-0-config-system-session\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:38 crc kubenswrapper[4789]: I0202 21:23:38.070508 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b6fa490e-3720-4f7c-b87e-aae664997d28-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:38 crc kubenswrapper[4789]: I0202 21:23:38.070541 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b6fa490e-3720-4f7c-b87e-aae664997d28-audit-policies\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:38 crc kubenswrapper[4789]: I0202 21:23:38.070572 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6fa490e-3720-4f7c-b87e-aae664997d28-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:38 crc kubenswrapper[4789]: I0202 21:23:38.070644 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b6fa490e-3720-4f7c-b87e-aae664997d28-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:38 crc kubenswrapper[4789]: I0202 21:23:38.072278 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b6fa490e-3720-4f7c-b87e-aae664997d28-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:38 crc kubenswrapper[4789]: I0202 21:23:38.072623 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b6fa490e-3720-4f7c-b87e-aae664997d28-audit-policies\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:38 crc kubenswrapper[4789]: I0202 21:23:38.075138 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b6fa490e-3720-4f7c-b87e-aae664997d28-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:38 crc kubenswrapper[4789]: I0202 21:23:38.075458 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b6fa490e-3720-4f7c-b87e-aae664997d28-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:38 crc kubenswrapper[4789]: I0202 21:23:38.077500 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b6fa490e-3720-4f7c-b87e-aae664997d28-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:38 crc kubenswrapper[4789]: I0202 21:23:38.077648 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b6fa490e-3720-4f7c-b87e-aae664997d28-v4-0-config-system-session\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:38 crc kubenswrapper[4789]: I0202 21:23:38.078282 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b6fa490e-3720-4f7c-b87e-aae664997d28-v4-0-config-user-template-login\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:38 crc kubenswrapper[4789]: I0202 21:23:38.078829 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b6fa490e-3720-4f7c-b87e-aae664997d28-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:38 crc kubenswrapper[4789]: I0202 21:23:38.079417 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b6fa490e-3720-4f7c-b87e-aae664997d28-v4-0-config-user-template-error\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:38 crc kubenswrapper[4789]: I0202 21:23:38.082926 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b6fa490e-3720-4f7c-b87e-aae664997d28-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:38 crc kubenswrapper[4789]: I0202 21:23:38.087254 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b6fa490e-3720-4f7c-b87e-aae664997d28-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:38 crc kubenswrapper[4789]: I0202 21:23:38.087642 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/b6fa490e-3720-4f7c-b87e-aae664997d28-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:38 crc kubenswrapper[4789]: I0202 21:23:38.104565 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb425\" (UniqueName: \"kubernetes.io/projected/b6fa490e-3720-4f7c-b87e-aae664997d28-kube-api-access-qb425\") pod \"oauth-openshift-7f5b9fd94b-nlmjk\" (UID: \"b6fa490e-3720-4f7c-b87e-aae664997d28\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:38 crc kubenswrapper[4789]: I0202 21:23:38.177501 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:38 crc kubenswrapper[4789]: I0202 21:23:38.409687 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk"] Feb 02 21:23:38 crc kubenswrapper[4789]: I0202 21:23:38.427708 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2edcffa-d93c-4125-863d-05812a4ff79a" path="/var/lib/kubelet/pods/a2edcffa-d93c-4125-863d-05812a4ff79a/volumes" Feb 02 21:23:39 crc kubenswrapper[4789]: I0202 21:23:39.258480 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" event={"ID":"b6fa490e-3720-4f7c-b87e-aae664997d28","Type":"ContainerStarted","Data":"11d630113e92ece67ebf80ca7bcc052a475281cb230f2ecc1b24d10645368894"} Feb 02 21:23:39 crc kubenswrapper[4789]: I0202 21:23:39.258963 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" event={"ID":"b6fa490e-3720-4f7c-b87e-aae664997d28","Type":"ContainerStarted","Data":"7cbc88d4988e4f601e138b43ba634b8de377c7fe131b7605636eb7ca77010b66"} Feb 02 21:23:39 crc kubenswrapper[4789]: I0202 21:23:39.259428 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:39 crc kubenswrapper[4789]: I0202 21:23:39.269506 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" Feb 02 21:23:39 crc kubenswrapper[4789]: I0202 21:23:39.304576 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7f5b9fd94b-nlmjk" podStartSLOduration=28.304552593 podStartE2EDuration="28.304552593s" podCreationTimestamp="2026-02-02 21:23:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:23:39.287101751 +0000 UTC m=+239.582126830" watchObservedRunningTime="2026-02-02 21:23:39.304552593 +0000 UTC m=+239.599577622" Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.582199 4789 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.583504 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://66a4db3799101ccca8a89d6bfd2c9d36940b8710ee3d256e47cd61cfe6ac7c07" gracePeriod=15 Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 
21:23:41.583618 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f" gracePeriod=15 Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.583837 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9" gracePeriod=15 Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.583882 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2" gracePeriod=15 Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.584033 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265" gracePeriod=15 Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.588261 4789 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 21:23:41 crc kubenswrapper[4789]: E0202 21:23:41.588729 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.588773 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 02 21:23:41 crc kubenswrapper[4789]: E0202 21:23:41.588803 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.588823 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 02 21:23:41 crc kubenswrapper[4789]: E0202 21:23:41.588849 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.588867 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 21:23:41 crc kubenswrapper[4789]: E0202 21:23:41.588901 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.588922 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 02 21:23:41 crc kubenswrapper[4789]: E0202 21:23:41.588943 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.588960 
4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 02 21:23:41 crc kubenswrapper[4789]: E0202 21:23:41.588990 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.589009 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 02 21:23:41 crc kubenswrapper[4789]: E0202 21:23:41.589037 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.589057 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.589320 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.589351 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.589375 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.589394 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.589420 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.589444 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 21:23:41 crc kubenswrapper[4789]: E0202 21:23:41.591526 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.591575 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.591933 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.597025 4789 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.598648 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.605382 4789 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 02 21:23:41 crc kubenswrapper[4789]: E0202 21:23:41.666549 4789 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.189:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.722480 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.722600 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.722645 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.722725 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.722762 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.722790 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.722836 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 
21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.722869 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.824373 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.824468 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.824494 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.824533 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.824555 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.824613 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.824646 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.824672 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 21:23:41 crc 
kubenswrapper[4789]: I0202 21:23:41.824749 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.824794 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.824828 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.824854 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.824882 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.824906 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.824930 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.824956 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 21:23:41 crc kubenswrapper[4789]: I0202 21:23:41.968042 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 21:23:42 crc kubenswrapper[4789]: W0202 21:23:42.005031 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-850cc19c1b84346b1617cf56025f220e159a22f31886ca20c6b4011ff81b836f WatchSource:0}: Error finding container 850cc19c1b84346b1617cf56025f220e159a22f31886ca20c6b4011ff81b836f: Status 404 returned error can't find the container with id 850cc19c1b84346b1617cf56025f220e159a22f31886ca20c6b4011ff81b836f Feb 02 21:23:42 crc kubenswrapper[4789]: E0202 21:23:42.009728 4789 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.189:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18908af12d52f25f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 21:23:42.008767071 +0000 UTC m=+242.303792100,LastTimestamp:2026-02-02 21:23:42.008767071 +0000 UTC m=+242.303792100,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 21:23:42 crc kubenswrapper[4789]: I0202 21:23:42.281986 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"850cc19c1b84346b1617cf56025f220e159a22f31886ca20c6b4011ff81b836f"} Feb 02 21:23:42 crc kubenswrapper[4789]: I0202 21:23:42.285688 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 02 21:23:42 crc kubenswrapper[4789]: I0202 21:23:42.287451 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 21:23:42 crc kubenswrapper[4789]: I0202 21:23:42.288478 4789 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="66a4db3799101ccca8a89d6bfd2c9d36940b8710ee3d256e47cd61cfe6ac7c07" exitCode=0 Feb 02 21:23:42 crc kubenswrapper[4789]: I0202 21:23:42.288526 4789 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9" exitCode=0 Feb 02 21:23:42 crc kubenswrapper[4789]: I0202 21:23:42.288535 4789 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f" exitCode=0 Feb 02 21:23:42 crc kubenswrapper[4789]: I0202 21:23:42.288545 4789 generic.go:334] "Generic (PLEG): container finished" 
podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2" exitCode=2 Feb 02 21:23:42 crc kubenswrapper[4789]: I0202 21:23:42.288677 4789 scope.go:117] "RemoveContainer" containerID="5d8bd6ccf6c7345f0ddc84a2983b618218fc65283b8d351c2df30d1221f304b7" Feb 02 21:23:42 crc kubenswrapper[4789]: I0202 21:23:42.291567 4789 generic.go:334] "Generic (PLEG): container finished" podID="d8cad602-dbda-4b61-a2b8-dc9f65726c1c" containerID="c38abfd9c53fe42bb6d24993fd09d681a39c2134fcbee4f9cd4b2ab7e218090b" exitCode=0 Feb 02 21:23:42 crc kubenswrapper[4789]: I0202 21:23:42.291701 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d8cad602-dbda-4b61-a2b8-dc9f65726c1c","Type":"ContainerDied","Data":"c38abfd9c53fe42bb6d24993fd09d681a39c2134fcbee4f9cd4b2ab7e218090b"} Feb 02 21:23:42 crc kubenswrapper[4789]: I0202 21:23:42.292782 4789 status_manager.go:851] "Failed to get status for pod" podUID="d8cad602-dbda-4b61-a2b8-dc9f65726c1c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.189:6443: connect: connection refused" Feb 02 21:23:43 crc kubenswrapper[4789]: I0202 21:23:43.302839 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"4b518cffb9b5cb9f4659132d8a4128d42cb33d5951fb9f951e7bc64a0d8c0ad7"} Feb 02 21:23:43 crc kubenswrapper[4789]: I0202 21:23:43.303785 4789 status_manager.go:851] "Failed to get status for pod" podUID="d8cad602-dbda-4b61-a2b8-dc9f65726c1c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.189:6443: connect: connection refused" Feb 02 21:23:43 crc kubenswrapper[4789]: E0202 21:23:43.304283 4789 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.189:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 21:23:43 crc kubenswrapper[4789]: I0202 21:23:43.307009 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 21:23:43 crc kubenswrapper[4789]: I0202 21:23:43.697667 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 21:23:43 crc kubenswrapper[4789]: I0202 21:23:43.699331 4789 status_manager.go:851] "Failed to get status for pod" podUID="d8cad602-dbda-4b61-a2b8-dc9f65726c1c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.189:6443: connect: connection refused" Feb 02 21:23:43 crc kubenswrapper[4789]: I0202 21:23:43.861256 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d8cad602-dbda-4b61-a2b8-dc9f65726c1c-kube-api-access\") pod \"d8cad602-dbda-4b61-a2b8-dc9f65726c1c\" (UID: \"d8cad602-dbda-4b61-a2b8-dc9f65726c1c\") " Feb 02 21:23:43 crc kubenswrapper[4789]: I0202 21:23:43.861741 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d8cad602-dbda-4b61-a2b8-dc9f65726c1c-var-lock\") pod \"d8cad602-dbda-4b61-a2b8-dc9f65726c1c\" (UID: \"d8cad602-dbda-4b61-a2b8-dc9f65726c1c\") " Feb 02 21:23:43 crc kubenswrapper[4789]: I0202 21:23:43.861784 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d8cad602-dbda-4b61-a2b8-dc9f65726c1c-kubelet-dir\") pod \"d8cad602-dbda-4b61-a2b8-dc9f65726c1c\" (UID: \"d8cad602-dbda-4b61-a2b8-dc9f65726c1c\") " Feb 02 21:23:43 crc kubenswrapper[4789]: I0202 21:23:43.861874 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8cad602-dbda-4b61-a2b8-dc9f65726c1c-var-lock" (OuterVolumeSpecName: "var-lock") pod "d8cad602-dbda-4b61-a2b8-dc9f65726c1c" (UID: "d8cad602-dbda-4b61-a2b8-dc9f65726c1c"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:23:43 crc kubenswrapper[4789]: I0202 21:23:43.862059 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8cad602-dbda-4b61-a2b8-dc9f65726c1c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d8cad602-dbda-4b61-a2b8-dc9f65726c1c" (UID: "d8cad602-dbda-4b61-a2b8-dc9f65726c1c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:23:43 crc kubenswrapper[4789]: I0202 21:23:43.862112 4789 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d8cad602-dbda-4b61-a2b8-dc9f65726c1c-var-lock\") on node \"crc\" DevicePath \"\"" Feb 02 21:23:43 crc kubenswrapper[4789]: I0202 21:23:43.866125 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8cad602-dbda-4b61-a2b8-dc9f65726c1c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d8cad602-dbda-4b61-a2b8-dc9f65726c1c" (UID: "d8cad602-dbda-4b61-a2b8-dc9f65726c1c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:23:43 crc kubenswrapper[4789]: I0202 21:23:43.963030 4789 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d8cad602-dbda-4b61-a2b8-dc9f65726c1c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 21:23:43 crc kubenswrapper[4789]: I0202 21:23:43.963084 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d8cad602-dbda-4b61-a2b8-dc9f65726c1c-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 21:23:44 crc kubenswrapper[4789]: I0202 21:23:44.318766 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 21:23:44 crc kubenswrapper[4789]: I0202 21:23:44.319951 4789 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265" exitCode=0 Feb 02 21:23:44 crc kubenswrapper[4789]: I0202 21:23:44.322299 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d8cad602-dbda-4b61-a2b8-dc9f65726c1c","Type":"ContainerDied","Data":"6e5f680d98820fab94bbfc63288d9656f849fce78734f2ad096b085f29cbe053"} Feb 02 21:23:44 crc kubenswrapper[4789]: I0202 21:23:44.322347 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 21:23:44 crc kubenswrapper[4789]: I0202 21:23:44.322381 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e5f680d98820fab94bbfc63288d9656f849fce78734f2ad096b085f29cbe053" Feb 02 21:23:44 crc kubenswrapper[4789]: E0202 21:23:44.323366 4789 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.189:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 21:23:44 crc kubenswrapper[4789]: I0202 21:23:44.344316 4789 status_manager.go:851] "Failed to get status for pod" podUID="d8cad602-dbda-4b61-a2b8-dc9f65726c1c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.189:6443: connect: connection refused" Feb 02 21:23:44 crc kubenswrapper[4789]: E0202 21:23:44.469468 4789 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.189:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" volumeName="registry-storage" Feb 02 21:23:44 crc kubenswrapper[4789]: I0202 21:23:44.471675 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 21:23:44 crc kubenswrapper[4789]: I0202 21:23:44.473031 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 21:23:44 crc kubenswrapper[4789]: I0202 21:23:44.473424 4789 status_manager.go:851] "Failed to get status for pod" podUID="d8cad602-dbda-4b61-a2b8-dc9f65726c1c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.189:6443: connect: connection refused" Feb 02 21:23:44 crc kubenswrapper[4789]: I0202 21:23:44.473656 4789 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.189:6443: connect: connection refused" Feb 02 21:23:44 crc kubenswrapper[4789]: I0202 21:23:44.671027 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 02 21:23:44 crc kubenswrapper[4789]: I0202 21:23:44.671303 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:23:44 crc kubenswrapper[4789]: I0202 21:23:44.671622 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 02 21:23:44 crc kubenswrapper[4789]: I0202 21:23:44.671675 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 02 21:23:44 crc kubenswrapper[4789]: I0202 21:23:44.671739 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:23:44 crc kubenswrapper[4789]: I0202 21:23:44.671836 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:23:44 crc kubenswrapper[4789]: I0202 21:23:44.672616 4789 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 02 21:23:44 crc kubenswrapper[4789]: I0202 21:23:44.672654 4789 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 02 21:23:44 crc kubenswrapper[4789]: I0202 21:23:44.672674 4789 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 21:23:45 crc kubenswrapper[4789]: I0202 21:23:45.332914 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 21:23:45 crc kubenswrapper[4789]: I0202 21:23:45.333722 4789 scope.go:117] "RemoveContainer" containerID="66a4db3799101ccca8a89d6bfd2c9d36940b8710ee3d256e47cd61cfe6ac7c07" Feb 02 21:23:45 crc kubenswrapper[4789]: I0202 21:23:45.333848 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 21:23:45 crc kubenswrapper[4789]: I0202 21:23:45.351756 4789 scope.go:117] "RemoveContainer" containerID="5cbe4a41d8721562e34639bc3ab24cc7f735d8de00e14d38b1b623b58740c5b9" Feb 02 21:23:45 crc kubenswrapper[4789]: I0202 21:23:45.358293 4789 status_manager.go:851] "Failed to get status for pod" podUID="d8cad602-dbda-4b61-a2b8-dc9f65726c1c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.189:6443: connect: connection refused" Feb 02 21:23:45 crc kubenswrapper[4789]: I0202 21:23:45.358704 4789 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.189:6443: connect: connection refused" Feb 02 21:23:45 crc kubenswrapper[4789]: I0202 21:23:45.365281 4789 scope.go:117] "RemoveContainer" containerID="9cc46c408bb643834e494025442760929995b22175ce2af65a14004761848c2f" Feb 02 21:23:45 crc kubenswrapper[4789]: I0202 21:23:45.381105 4789 scope.go:117] "RemoveContainer" containerID="3e95c9e553b8e14141ddde6a221c0e5e4d46533d1631893e5d55416b9d16f4a2" Feb 02 21:23:45 crc kubenswrapper[4789]: I0202 21:23:45.400106 4789 scope.go:117] "RemoveContainer" containerID="72bf64d682578abb47e9fabf5e6c31e3a20594cc4a2d13304b2036f4198af265" Feb 02 21:23:45 crc kubenswrapper[4789]: I0202 21:23:45.413427 4789 scope.go:117] "RemoveContainer" containerID="94bdc1c30a3c895835afc4f205ec20725ec13707db6957046cae7791b8850679" Feb 02 21:23:46 crc kubenswrapper[4789]: I0202 21:23:46.431718 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 02 21:23:47 crc kubenswrapper[4789]: E0202 21:23:47.834124 4789 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.189:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18908af12d52f25f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 21:23:42.008767071 +0000 UTC m=+242.303792100,LastTimestamp:2026-02-02 21:23:42.008767071 +0000 UTC m=+242.303792100,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 21:23:50 crc kubenswrapper[4789]: I0202 21:23:50.424635 4789 status_manager.go:851] "Failed to get status for pod" podUID="d8cad602-dbda-4b61-a2b8-dc9f65726c1c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.189:6443: connect: connection refused" Feb 02 21:23:51 crc kubenswrapper[4789]: E0202 21:23:51.757890 4789 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.189:6443: connect: connection refused" Feb 02 21:23:51 crc kubenswrapper[4789]: E0202 21:23:51.759655 4789 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.189:6443: connect: connection refused" Feb 02 21:23:51 crc kubenswrapper[4789]: E0202 21:23:51.760318 4789 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.189:6443: connect: connection refused" Feb 02 21:23:51 crc kubenswrapper[4789]: E0202 21:23:51.760909 4789 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.189:6443: connect: connection refused" Feb 02 21:23:51 crc kubenswrapper[4789]: E0202 21:23:51.761276 4789 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.189:6443: connect: connection refused" Feb 02 21:23:51 crc kubenswrapper[4789]: I0202 21:23:51.761331 4789 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 02 21:23:51 crc kubenswrapper[4789]: E0202 21:23:51.761860 4789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.189:6443: connect: connection refused" interval="200ms" Feb 02 21:23:51 crc kubenswrapper[4789]: E0202 21:23:51.963017 4789 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.189:6443: connect: connection refused" interval="400ms" Feb 02 21:23:52 crc kubenswrapper[4789]: E0202 21:23:52.363456 4789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.189:6443: connect: connection refused" interval="800ms" Feb 02 21:23:53 crc kubenswrapper[4789]: E0202 21:23:53.164949 4789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.189:6443: connect: connection refused" interval="1.6s" Feb 02 21:23:53 crc kubenswrapper[4789]: I0202 21:23:53.418965 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 21:23:53 crc kubenswrapper[4789]: I0202 21:23:53.419659 4789 status_manager.go:851] "Failed to get status for pod" podUID="d8cad602-dbda-4b61-a2b8-dc9f65726c1c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.189:6443: connect: connection refused" Feb 02 21:23:53 crc kubenswrapper[4789]: I0202 21:23:53.436907 4789 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9563eded-ca82-4eb6-90d4-e62b8acbe296" Feb 02 21:23:53 crc kubenswrapper[4789]: I0202 21:23:53.436948 4789 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9563eded-ca82-4eb6-90d4-e62b8acbe296" Feb 02 21:23:53 crc kubenswrapper[4789]: E0202 21:23:53.437474 4789 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.189:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 21:23:53 crc kubenswrapper[4789]: I0202 21:23:53.438163 4789 util.go:30] "No sandbox for pod can be found. 
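
The controller.go:145 entries above show the kubelet's node-lease controller escalating its retry interval after each failed attempt to ensure the lease: 200ms, 400ms, 800ms, 1.6s (and 3.2s a few entries below), i.e. a doubling backoff. A minimal Go sketch of that schedule follows; the function name and the cap are assumptions for illustration, not the kubelet's actual code.

    package main

    import (
        "fmt"
        "time"
    )

    // printBackoff reproduces the doubling retry intervals visible in the
    // "Failed to ensure lease exists, will retry" lines above: each failed
    // attempt doubles the wait before the next one. maxInterval is an
    // assumed cap, not a value taken from the log.
    func printBackoff() {
        interval := 200 * time.Millisecond
        const maxInterval = 7 * time.Second // assumption
        for attempt := 1; attempt <= 5; attempt++ {
            fmt.Printf("attempt %d: interval=%v\n", attempt, interval)
            interval *= 2
            if interval > maxInterval {
                interval = maxInterval
            }
        }
    }

    func main() { printBackoff() }

Run as written, this prints interval=200ms, 400ms, 800ms, 1.6s, 3.2s, matching the sequence logged here.
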
Feb 02 21:23:54 crc kubenswrapper[4789]: I0202 21:23:54.423091 4789 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="50d3f9881db95029f0250c070bc57610739fd273965d9f9ac5cbd277a984629b" exitCode=0
Feb 02 21:23:54 crc kubenswrapper[4789]: I0202 21:23:54.428266 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Feb 02 21:23:54 crc kubenswrapper[4789]: I0202 21:23:54.428327 4789 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="460e055777327e1734238bf51929751678ea5a757b00a344daa31c8c95e4c463" exitCode=1
Feb 02 21:23:54 crc kubenswrapper[4789]: I0202 21:23:54.430013 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"50d3f9881db95029f0250c070bc57610739fd273965d9f9ac5cbd277a984629b"}
Feb 02 21:23:54 crc kubenswrapper[4789]: I0202 21:23:54.430243 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"928012749b7bceadb527d078d78c447642ee8a9b0f7009765361e5a362569d69"}
Feb 02 21:23:54 crc kubenswrapper[4789]: I0202 21:23:54.430393 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"460e055777327e1734238bf51929751678ea5a757b00a344daa31c8c95e4c463"}
Feb 02 21:23:54 crc kubenswrapper[4789]: I0202 21:23:54.430599 4789 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9563eded-ca82-4eb6-90d4-e62b8acbe296"
Feb 02 21:23:54 crc kubenswrapper[4789]: I0202 21:23:54.430624 4789 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9563eded-ca82-4eb6-90d4-e62b8acbe296"
Feb 02 21:23:54 crc kubenswrapper[4789]: I0202 21:23:54.431194 4789 status_manager.go:851] "Failed to get status for pod" podUID="d8cad602-dbda-4b61-a2b8-dc9f65726c1c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.189:6443: connect: connection refused"
Feb 02 21:23:54 crc kubenswrapper[4789]: E0202 21:23:54.431218 4789 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.189:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 21:23:54 crc kubenswrapper[4789]: I0202 21:23:54.431610 4789 scope.go:117] "RemoveContainer" containerID="460e055777327e1734238bf51929751678ea5a757b00a344daa31c8c95e4c463"
Feb 02 21:23:54 crc kubenswrapper[4789]: I0202 21:23:54.431653 4789 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.189:6443: connect: connection refused"
Feb 02 21:23:54 crc kubenswrapper[4789]: I0202 21:23:54.432178 4789 status_manager.go:851] "Failed to get status for pod" podUID="d8cad602-dbda-4b61-a2b8-dc9f65726c1c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.189:6443: connect: connection refused"
Feb 02 21:23:54 crc kubenswrapper[4789]: E0202 21:23:54.765938 4789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.189:6443: connect: connection refused" interval="3.2s"
Feb 02 21:23:55 crc kubenswrapper[4789]: I0202 21:23:55.100666 4789 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 21:23:55 crc kubenswrapper[4789]: I0202 21:23:55.449155 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Feb 02 21:23:55 crc kubenswrapper[4789]: I0202 21:23:55.449304 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4c73cb399e166961ac703b205b48ec29fa291102ec5a9383e0460ecb04755878"}
Feb 02 21:23:55 crc kubenswrapper[4789]: I0202 21:23:55.451417 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cc455765afd4c377da30ba087b8d400ab2a395cce1d28a118f1f6e2cbec03b2d"}
Feb 02 21:23:55 crc kubenswrapper[4789]: I0202 21:23:55.451447 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"279fbb297c13d2c889ae586d9ceb03291493ff4342a225479a4e86964d314e3c"}
Feb 02 21:23:55 crc kubenswrapper[4789]: I0202 21:23:55.451457 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"409e67a26a6fb1df5a58170358c10ee202c4f4a4b89cbc10cdae85493a39729e"}
Feb 02 21:23:56 crc kubenswrapper[4789]: I0202 21:23:56.461548 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e57810dc6d8c5d9c1bb189434e4ec3da0e374ae8bd19ef879e71daa1c6aa0358"}
Feb 02 21:23:56 crc kubenswrapper[4789]: I0202 21:23:56.461612 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b76aeea46961c4eec27d640c12311df0415ac15955c15b39502f88d4855d9c6b"}
Feb 02 21:23:56 crc kubenswrapper[4789]: I0202 21:23:56.462106 4789 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9563eded-ca82-4eb6-90d4-e62b8acbe296"
Feb 02 21:23:56 crc kubenswrapper[4789]: I0202 21:23:56.462145 4789 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9563eded-ca82-4eb6-90d4-e62b8acbe296"
Feb 02 21:23:57 crc kubenswrapper[4789]: I0202 21:23:57.123141 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 21:23:57 crc kubenswrapper[4789]: I0202 21:23:57.123360 4789 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Feb 02 21:23:57 crc kubenswrapper[4789]: I0202 21:23:57.123456 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Feb 02 21:23:57 crc kubenswrapper[4789]: I0202 21:23:57.655402 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 21:23:58 crc kubenswrapper[4789]: I0202 21:23:58.439257 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 21:23:58 crc kubenswrapper[4789]: I0202 21:23:58.439347 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 21:23:58 crc kubenswrapper[4789]: I0202 21:23:58.447698 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 21:24:01 crc kubenswrapper[4789]: I0202 21:24:01.469656 4789 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 21:24:01 crc kubenswrapper[4789]: I0202 21:24:01.490015 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 21:24:01 crc kubenswrapper[4789]: I0202 21:24:01.490049 4789 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9563eded-ca82-4eb6-90d4-e62b8acbe296"
Feb 02 21:24:01 crc kubenswrapper[4789]: I0202 21:24:01.490071 4789 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9563eded-ca82-4eb6-90d4-e62b8acbe296"
Feb 02 21:24:01 crc kubenswrapper[4789]: I0202 21:24:01.493829 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 21:24:01 crc kubenswrapper[4789]: I0202 21:24:01.502937 4789 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="6e0f6c3a-21cc-4f09-a90b-0991b27759f2"
Feb 02 21:24:02 crc kubenswrapper[4789]: I0202 21:24:02.496645 4789 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9563eded-ca82-4eb6-90d4-e62b8acbe296"
Feb 02 21:24:02 crc kubenswrapper[4789]: I0202 21:24:02.496689 4789 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9563eded-ca82-4eb6-90d4-e62b8acbe296"
Feb 02 21:24:03 crc kubenswrapper[4789]: I0202 21:24:03.501963 4789 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9563eded-ca82-4eb6-90d4-e62b8acbe296"
Feb 02 21:24:03 crc kubenswrapper[4789]: I0202 21:24:03.502291 4789 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9563eded-ca82-4eb6-90d4-e62b8acbe296"
Feb 02 21:24:07 crc kubenswrapper[4789]: I0202 21:24:07.123207 4789 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Feb 02 21:24:07 crc kubenswrapper[4789]: I0202 21:24:07.123297 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Feb 02 21:24:10 crc kubenswrapper[4789]: I0202 21:24:10.448747 4789 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="6e0f6c3a-21cc-4f09-a90b-0991b27759f2"
Feb 02 21:24:11 crc kubenswrapper[4789]: I0202 21:24:11.368712 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Feb 02 21:24:11 crc kubenswrapper[4789]: I0202 21:24:11.375743 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 02 21:24:12 crc kubenswrapper[4789]: I0202 21:24:12.098731 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 02 21:24:12 crc kubenswrapper[4789]: I0202 21:24:12.863074 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 02 21:24:13 crc kubenswrapper[4789]: I0202 21:24:13.176117 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 02 21:24:13 crc kubenswrapper[4789]: I0202 21:24:13.332201 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 02 21:24:14 crc kubenswrapper[4789]: I0202 21:24:14.202238 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 02 21:24:14 crc kubenswrapper[4789]: I0202 21:24:14.612037 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 02 21:24:14 crc kubenswrapper[4789]: I0202 21:24:14.657663 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 02 21:24:14 crc kubenswrapper[4789]: I0202 21:24:14.750858 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 02 21:24:14 crc kubenswrapper[4789]: I0202 21:24:14.811741 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 02 21:24:15 crc kubenswrapper[4789]: I0202 21:24:15.021439 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Feb 02 21:24:15 crc kubenswrapper[4789]: I0202 21:24:15.026622 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 02 21:24:15 crc kubenswrapper[4789]: I0202 21:24:15.039020 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 02 21:24:15 crc kubenswrapper[4789]: I0202 21:24:15.316898 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 02 21:24:15 crc kubenswrapper[4789]: I0202 21:24:15.377474 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 02 21:24:15 crc kubenswrapper[4789]: I0202 21:24:15.383330 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 02 21:24:15 crc kubenswrapper[4789]: I0202 21:24:15.386097 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 02 21:24:15 crc kubenswrapper[4789]: I0202 21:24:15.392250 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 02 21:24:15 crc kubenswrapper[4789]: I0202 21:24:15.543275 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 02 21:24:16 crc kubenswrapper[4789]: I0202 21:24:16.018788 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 02 21:24:16 crc kubenswrapper[4789]: I0202 21:24:16.027501 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 02 21:24:16 crc kubenswrapper[4789]: I0202 21:24:16.077291 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 02 21:24:16 crc kubenswrapper[4789]: I0202 21:24:16.114357 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 02 21:24:16 crc kubenswrapper[4789]: I0202 21:24:16.136723 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 02 21:24:16 crc kubenswrapper[4789]: I0202 21:24:16.241387 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 02 21:24:16 crc kubenswrapper[4789]: I0202 21:24:16.242636 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 02 21:24:16 crc kubenswrapper[4789]: I0202 21:24:16.481468 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 02 21:24:16 crc kubenswrapper[4789]: I0202 21:24:16.508979 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 02 21:24:16 crc kubenswrapper[4789]: I0202 21:24:16.521263 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 02 21:24:16 crc kubenswrapper[4789]: I0202 21:24:16.533400 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 02 21:24:16 crc kubenswrapper[4789]: I0202 21:24:16.550720 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 02 21:24:16 crc kubenswrapper[4789]: I0202 21:24:16.558312 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 02 21:24:16 crc kubenswrapper[4789]: I0202 21:24:16.592336 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 02 21:24:16 crc kubenswrapper[4789]: I0202 21:24:16.608917 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 02 21:24:16 crc kubenswrapper[4789]: I0202 21:24:16.623041 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 02 21:24:16 crc kubenswrapper[4789]: I0202 21:24:16.659248 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 02 21:24:16 crc kubenswrapper[4789]: I0202 21:24:16.691517 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 02 21:24:16 crc kubenswrapper[4789]: I0202 21:24:16.707778 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 02 21:24:16 crc kubenswrapper[4789]: I0202 21:24:16.730645 4789 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 02 21:24:16 crc kubenswrapper[4789]: I0202 21:24:16.736652 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 02 21:24:16 crc kubenswrapper[4789]: I0202 21:24:16.750666 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 02 21:24:16 crc kubenswrapper[4789]: I0202 21:24:16.786913 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 02 21:24:16 crc kubenswrapper[4789]: I0202 21:24:16.791119 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 02 21:24:16 crc kubenswrapper[4789]: I0202 21:24:16.869685 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 02 21:24:16 crc kubenswrapper[4789]: I0202 21:24:16.910330 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 02 21:24:16 crc kubenswrapper[4789]: I0202 21:24:16.923258 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 02 21:24:17 crc kubenswrapper[4789]: I0202 21:24:17.076672 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 02 21:24:17 crc kubenswrapper[4789]: I0202 21:24:17.077403 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 02 21:24:17 crc kubenswrapper[4789]: I0202 21:24:17.088817 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 02 21:24:17 crc kubenswrapper[4789]: I0202 21:24:17.096289 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 02 21:24:17 crc kubenswrapper[4789]: I0202 21:24:17.123304 4789 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Feb 02 21:24:17 crc kubenswrapper[4789]: I0202 21:24:17.123378 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Feb 02 21:24:17 crc kubenswrapper[4789]: I0202 21:24:17.123449 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 21:24:17 crc kubenswrapper[4789]: I0202 21:24:17.124396 4789 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"4c73cb399e166961ac703b205b48ec29fa291102ec5a9383e0460ecb04755878"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Feb 02 21:24:17 crc kubenswrapper[4789]: I0202 21:24:17.124638 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://4c73cb399e166961ac703b205b48ec29fa291102ec5a9383e0460ecb04755878" gracePeriod=30
Feb 02 21:24:17 crc kubenswrapper[4789]: I0202 21:24:17.152805 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 02 21:24:17 crc kubenswrapper[4789]: I0202 21:24:17.158254 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 02 21:24:17 crc kubenswrapper[4789]: I0202 21:24:17.175920 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 02 21:24:17 crc kubenswrapper[4789]: I0202 21:24:17.293369 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 02 21:24:17 crc kubenswrapper[4789]: I0202 21:24:17.300752 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 02 21:24:17 crc kubenswrapper[4789]: I0202 21:24:17.441809 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 02 21:24:17 crc kubenswrapper[4789]: I0202 21:24:17.643566 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 02 21:24:17 crc kubenswrapper[4789]: I0202 21:24:17.663133 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 02 21:24:17 crc kubenswrapper[4789]: I0202 21:24:17.668256 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
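
The prober entries above record the kube-controller-manager startup probe failing against https://192.168.126.11:10257/healthz at roughly ten-second intervals (21:23:57, 21:24:07, 21:24:17) until, at 21:24:17.124, the kubelet concludes the container "failed startup probe, will be restarted" and kills it with a 30s grace period. A probe with the shape these lines imply could be declared in Go with k8s.io/api as sketched below; the threshold values are assumptions, since the real pod manifest is not part of this log.

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // startupProbe mirrors what the log implies: an HTTPS GET against
    // :10257/healthz probed every 10s; after FailureThreshold consecutive
    // failures the kubelet restarts the container, honoring the pod's
    // termination grace period (30s in the entry above).
    var startupProbe = &corev1.Probe{
        ProbeHandler: corev1.ProbeHandler{
            HTTPGet: &corev1.HTTPGetAction{
                Path:   "/healthz",
                Port:   intstr.FromInt(10257),
                Scheme: corev1.URISchemeHTTPS,
            },
        },
        PeriodSeconds:    10, // matches the ~10s spacing of the failures above
        FailureThreshold: 3,  // assumption, not read from the manifest
    }

    func main() { _ = startupProbe }
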
Feb 02 21:24:17 crc kubenswrapper[4789]: I0202 21:24:17.732097 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 02 21:24:17 crc kubenswrapper[4789]: I0202 21:24:17.732470 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 02 21:24:17 crc kubenswrapper[4789]: I0202 21:24:17.746137 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 02 21:24:17 crc kubenswrapper[4789]: I0202 21:24:17.838132 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 02 21:24:17 crc kubenswrapper[4789]: I0202 21:24:17.857042 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 02 21:24:17 crc kubenswrapper[4789]: I0202 21:24:17.878034 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 02 21:24:17 crc kubenswrapper[4789]: I0202 21:24:17.950609 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 02 21:24:17 crc kubenswrapper[4789]: I0202 21:24:17.994273 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 02 21:24:18 crc kubenswrapper[4789]: I0202 21:24:18.013192 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Feb 02 21:24:18 crc kubenswrapper[4789]: I0202 21:24:18.079075 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 02 21:24:18 crc kubenswrapper[4789]: I0202 21:24:18.098939 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 02 21:24:18 crc kubenswrapper[4789]: I0202 21:24:18.101845 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 02 21:24:18 crc kubenswrapper[4789]: I0202 21:24:18.128464 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 02 21:24:18 crc kubenswrapper[4789]: I0202 21:24:18.208914 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 02 21:24:18 crc kubenswrapper[4789]: I0202 21:24:18.371767 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 02 21:24:18 crc kubenswrapper[4789]: I0202 21:24:18.497686 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 02 21:24:18 crc kubenswrapper[4789]: I0202 21:24:18.518790 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 02 21:24:18 crc kubenswrapper[4789]: I0202 21:24:18.600889 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 02 21:24:18 crc kubenswrapper[4789]: I0202 21:24:18.613189 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 02 21:24:18 crc kubenswrapper[4789]: I0202 21:24:18.628109 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 02 21:24:18 crc kubenswrapper[4789]: I0202 21:24:18.628141 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 02 21:24:18 crc kubenswrapper[4789]: I0202 21:24:18.636987 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 02 21:24:18 crc kubenswrapper[4789]: I0202 21:24:18.655829 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 02 21:24:18 crc kubenswrapper[4789]: I0202 21:24:18.790520 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 02 21:24:18 crc kubenswrapper[4789]: I0202 21:24:18.838319 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 02 21:24:18 crc kubenswrapper[4789]: I0202 21:24:18.858816 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 02 21:24:18 crc kubenswrapper[4789]: I0202 21:24:18.881015 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 02 21:24:18 crc kubenswrapper[4789]: I0202 21:24:18.910125 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 02 21:24:18 crc kubenswrapper[4789]: I0202 21:24:18.943310 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 02 21:24:19 crc kubenswrapper[4789]: I0202 21:24:19.042117 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 02 21:24:19 crc kubenswrapper[4789]: I0202 21:24:19.102463 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 02 21:24:19 crc kubenswrapper[4789]: I0202 21:24:19.136264 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 02 21:24:19 crc kubenswrapper[4789]: I0202 21:24:19.166792 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 02 21:24:19 crc kubenswrapper[4789]: I0202 21:24:19.186761 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 02 21:24:19 crc kubenswrapper[4789]: I0202 21:24:19.253072 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 02 21:24:19 crc kubenswrapper[4789]: I0202 21:24:19.282927 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 02 21:24:19 crc kubenswrapper[4789]: I0202 21:24:19.292242 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 02 21:24:19 crc kubenswrapper[4789]: I0202 21:24:19.313897 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 02 21:24:19 crc kubenswrapper[4789]: I0202 21:24:19.371499 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 02 21:24:19 crc kubenswrapper[4789]: I0202 21:24:19.395836 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 02 21:24:19 crc kubenswrapper[4789]: I0202 21:24:19.398532 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 02 21:24:19 crc kubenswrapper[4789]: I0202 21:24:19.421718 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 02 21:24:19 crc kubenswrapper[4789]: I0202 21:24:19.473786 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 02 21:24:19 crc kubenswrapper[4789]: I0202 21:24:19.507114 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 02 21:24:19 crc kubenswrapper[4789]: I0202 21:24:19.554593 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 02 21:24:19 crc kubenswrapper[4789]: I0202 21:24:19.582869 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 02 21:24:19 crc kubenswrapper[4789]: I0202 21:24:19.655272 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 02 21:24:19 crc kubenswrapper[4789]: I0202 21:24:19.656630 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 02 21:24:19 crc kubenswrapper[4789]: I0202 21:24:19.703038 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 02 21:24:19 crc kubenswrapper[4789]: I0202 21:24:19.747609 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 02 21:24:19 crc kubenswrapper[4789]: I0202 21:24:19.790226 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 02 21:24:19 crc kubenswrapper[4789]: I0202 21:24:19.808005 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 02 21:24:19 crc kubenswrapper[4789]: I0202 21:24:19.867565 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Feb 02 21:24:19 crc kubenswrapper[4789]: I0202 21:24:19.924066 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 02 21:24:19 crc kubenswrapper[4789]: I0202 21:24:19.994763 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 02 21:24:20 crc kubenswrapper[4789]: I0202 21:24:20.001221 4789 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 02 21:24:20 crc kubenswrapper[4789]: I0202 21:24:20.008966 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 02 21:24:20 crc kubenswrapper[4789]: I0202 21:24:20.070991 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 02 21:24:20 crc kubenswrapper[4789]: I0202 21:24:20.076660 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 02 21:24:20 crc kubenswrapper[4789]: I0202 21:24:20.126530 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 02 21:24:20 crc kubenswrapper[4789]: I0202 21:24:20.193424 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 02 21:24:20 crc kubenswrapper[4789]: I0202 21:24:20.211861 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 02 21:24:20 crc kubenswrapper[4789]: I0202 21:24:20.219771 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 02 21:24:20 crc kubenswrapper[4789]: I0202 21:24:20.482990 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 02 21:24:20 crc kubenswrapper[4789]: I0202 21:24:20.516511 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 02 21:24:20 crc kubenswrapper[4789]: I0202 21:24:20.553696 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 02 21:24:20 crc kubenswrapper[4789]: I0202 21:24:20.683430 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 02 21:24:20 crc kubenswrapper[4789]: I0202 21:24:20.756862 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 02 21:24:20 crc kubenswrapper[4789]: I0202 21:24:20.766459 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 02 21:24:20 crc kubenswrapper[4789]: I0202 21:24:20.768055 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 02 21:24:20 crc kubenswrapper[4789]: I0202 21:24:20.964108 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 02 21:24:21 crc kubenswrapper[4789]: I0202 21:24:21.117428 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 02 21:24:21 crc kubenswrapper[4789]: I0202 21:24:21.137056 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 02 21:24:21 crc kubenswrapper[4789]: I0202 21:24:21.172296 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 02 21:24:21 crc kubenswrapper[4789]: I0202 21:24:21.226840 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 02 21:24:21 crc kubenswrapper[4789]: I0202 21:24:21.360942 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 02 21:24:21 crc kubenswrapper[4789]: I0202 21:24:21.410376 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 02 21:24:21 crc kubenswrapper[4789]: I0202 21:24:21.448603 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 02 21:24:21 crc kubenswrapper[4789]: I0202 21:24:21.556571 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 02 21:24:21 crc kubenswrapper[4789]: I0202 21:24:21.563368 4789 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 02 21:24:21 crc kubenswrapper[4789]: I0202 21:24:21.567789 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 02 21:24:21 crc kubenswrapper[4789]: I0202 21:24:21.569182 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 02 21:24:21 crc kubenswrapper[4789]: I0202 21:24:21.579514 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 02 21:24:21 crc kubenswrapper[4789]: I0202 21:24:21.697765 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 02 21:24:21 crc kubenswrapper[4789]: I0202 21:24:21.698082 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 02 21:24:21 crc kubenswrapper[4789]: I0202 21:24:21.786252 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 02 21:24:21 crc kubenswrapper[4789]: I0202 21:24:21.798486 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 02 21:24:21 crc kubenswrapper[4789]: I0202 21:24:21.978146 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 02 21:24:22 crc kubenswrapper[4789]: I0202 21:24:22.012127 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 02 21:24:22 crc kubenswrapper[4789]: I0202 21:24:22.013968 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 02 21:24:22 crc kubenswrapper[4789]: I0202 21:24:22.048919 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 02 21:24:22 crc kubenswrapper[4789]: I0202 21:24:22.081228 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 02 21:24:22 crc kubenswrapper[4789]: I0202 21:24:22.109122 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 02 21:24:22 crc kubenswrapper[4789]: I0202 21:24:22.193178 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 02 21:24:22 crc kubenswrapper[4789]: I0202 21:24:22.302885 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 02 21:24:22 crc kubenswrapper[4789]: I0202 21:24:22.427722 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 02 21:24:22 crc kubenswrapper[4789]: I0202 21:24:22.474405 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 02 21:24:22 crc kubenswrapper[4789]: I0202 21:24:22.576375 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 02 21:24:22 crc kubenswrapper[4789]: I0202 21:24:22.643139 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 02 21:24:22 crc kubenswrapper[4789]: I0202 21:24:22.763653 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 02 21:24:22 crc kubenswrapper[4789]: I0202 21:24:22.892419 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 02 21:24:22 crc kubenswrapper[4789]: I0202 21:24:22.951500 4789 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 02 21:24:22 crc kubenswrapper[4789]: I0202 21:24:22.958005 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 02 21:24:22 crc kubenswrapper[4789]: I0202 21:24:22.958297 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 02 21:24:22 crc kubenswrapper[4789]: I0202 21:24:22.964164 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 21:24:22 crc kubenswrapper[4789]: I0202 21:24:22.987969 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=21.987947461 podStartE2EDuration="21.987947461s" podCreationTimestamp="2026-02-02 21:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:24:22.982108977 +0000 UTC m=+283.277134016" watchObservedRunningTime="2026-02-02 21:24:22.987947461 +0000 UTC m=+283.282972490"
Feb 02 21:24:23 crc kubenswrapper[4789]: I0202 21:24:23.053194 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 02 21:24:23 crc kubenswrapper[4789]: I0202 21:24:23.078796 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 02 21:24:23 crc kubenswrapper[4789]: I0202 21:24:23.113028 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 02 21:24:23 crc kubenswrapper[4789]: I0202 21:24:23.161361 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 02 21:24:23 crc kubenswrapper[4789]: I0202 21:24:23.173026 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
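
The pod_startup_latency_tracker line above reports podStartSLOduration=21.987947461 for kube-apiserver-crc. With both image-pull timestamps zero (nothing was pulled), the SLO duration equals the E2E duration and is simply the watch-observed running time minus podCreationTimestamp: 21:24:22.987947461 - 21:24:01 = 21.987947461s. A short Go check of that arithmetic, using the timestamps from the entry:

    package main

    import (
        "fmt"
        "time"
    )

    // Reproduces the duration in the "Observed pod startup duration" entry:
    // watchObservedRunningTime minus podCreationTimestamp.
    func main() {
        created, _ := time.Parse(time.RFC3339, "2026-02-02T21:24:01Z")
        observed, _ := time.Parse(time.RFC3339Nano, "2026-02-02T21:24:22.987947461Z")
        fmt.Println(observed.Sub(created)) // prints 21.987947461s
    }
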
object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 02 21:24:23 crc kubenswrapper[4789]: I0202 21:24:23.219471 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 02 21:24:23 crc kubenswrapper[4789]: I0202 21:24:23.256180 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 02 21:24:23 crc kubenswrapper[4789]: I0202 21:24:23.280419 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 02 21:24:23 crc kubenswrapper[4789]: I0202 21:24:23.370108 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 02 21:24:23 crc kubenswrapper[4789]: I0202 21:24:23.388738 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 02 21:24:23 crc kubenswrapper[4789]: I0202 21:24:23.389847 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 02 21:24:23 crc kubenswrapper[4789]: I0202 21:24:23.498930 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 02 21:24:23 crc kubenswrapper[4789]: I0202 21:24:23.506544 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 02 21:24:23 crc kubenswrapper[4789]: I0202 21:24:23.641872 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 02 21:24:23 crc kubenswrapper[4789]: I0202 21:24:23.676031 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 02 21:24:23 crc kubenswrapper[4789]: I0202 21:24:23.706812 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 02 21:24:23 crc kubenswrapper[4789]: I0202 21:24:23.710411 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 02 21:24:23 crc kubenswrapper[4789]: I0202 21:24:23.792339 4789 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 21:24:23 crc kubenswrapper[4789]: I0202 21:24:23.793896 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://4b518cffb9b5cb9f4659132d8a4128d42cb33d5951fb9f951e7bc64a0d8c0ad7" gracePeriod=5 Feb 02 21:24:23 crc kubenswrapper[4789]: I0202 21:24:23.841716 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 02 21:24:23 crc kubenswrapper[4789]: I0202 21:24:23.856407 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 02 21:24:23 crc kubenswrapper[4789]: I0202 21:24:23.867708 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 02 21:24:23 crc kubenswrapper[4789]: I0202 21:24:23.918981 4789 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 02 21:24:23 crc kubenswrapper[4789]: I0202 21:24:23.949365 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 02 21:24:23 crc kubenswrapper[4789]: I0202 21:24:23.950293 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 02 21:24:23 crc kubenswrapper[4789]: I0202 21:24:23.997009 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 02 21:24:24 crc kubenswrapper[4789]: I0202 21:24:24.142553 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 02 21:24:24 crc kubenswrapper[4789]: I0202 21:24:24.284874 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 02 21:24:24 crc kubenswrapper[4789]: I0202 21:24:24.429777 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 02 21:24:24 crc kubenswrapper[4789]: I0202 21:24:24.498146 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 02 21:24:24 crc kubenswrapper[4789]: I0202 21:24:24.568713 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 02 21:24:24 crc kubenswrapper[4789]: I0202 21:24:24.575518 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 02 21:24:24 crc kubenswrapper[4789]: I0202 21:24:24.754350 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 02 21:24:24 crc kubenswrapper[4789]: I0202 21:24:24.771817 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 02 21:24:24 crc kubenswrapper[4789]: I0202 21:24:24.793806 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 02 21:24:24 crc kubenswrapper[4789]: I0202 21:24:24.795129 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 02 21:24:24 crc kubenswrapper[4789]: I0202 21:24:24.834376 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 02 21:24:24 crc kubenswrapper[4789]: I0202 21:24:24.855035 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 02 21:24:24 crc kubenswrapper[4789]: I0202 21:24:24.867917 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 02 21:24:25 crc kubenswrapper[4789]: I0202 21:24:25.024530 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 02 21:24:25 crc kubenswrapper[4789]: I0202 21:24:25.056533 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 02 21:24:25 crc kubenswrapper[4789]: I0202 21:24:25.082949 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 02 21:24:25 crc kubenswrapper[4789]: I0202 21:24:25.091130 4789 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 02 21:24:25 crc kubenswrapper[4789]: I0202 21:24:25.100639 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 02 21:24:25 crc kubenswrapper[4789]: I0202 21:24:25.103234 4789 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 02 21:24:25 crc kubenswrapper[4789]: I0202 21:24:25.242560 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 02 21:24:25 crc kubenswrapper[4789]: I0202 21:24:25.277509 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 02 21:24:25 crc kubenswrapper[4789]: I0202 21:24:25.314015 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 02 21:24:25 crc kubenswrapper[4789]: I0202 21:24:25.342716 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 02 21:24:25 crc kubenswrapper[4789]: I0202 21:24:25.360793 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 02 21:24:25 crc kubenswrapper[4789]: I0202 21:24:25.422341 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 02 21:24:25 crc kubenswrapper[4789]: I0202 21:24:25.480212 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 02 21:24:25 crc kubenswrapper[4789]: I0202 21:24:25.610573 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 02 21:24:25 crc kubenswrapper[4789]: I0202 21:24:25.760375 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 02 21:24:25 crc kubenswrapper[4789]: I0202 21:24:25.901687 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 02 21:24:25 crc kubenswrapper[4789]: I0202 21:24:25.905613 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 02 21:24:26 crc kubenswrapper[4789]: I0202 21:24:26.136960 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 02 21:24:26 crc kubenswrapper[4789]: I0202 21:24:26.141169 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 02 21:24:26 crc kubenswrapper[4789]: I0202 21:24:26.230487 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 02 21:24:26 crc kubenswrapper[4789]: I0202 21:24:26.288311 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 02 21:24:26 crc kubenswrapper[4789]: I0202 21:24:26.447828 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 02 21:24:26 crc kubenswrapper[4789]: I0202 21:24:26.624212 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 02 21:24:26 crc kubenswrapper[4789]: I0202 21:24:26.769861 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 02 21:24:26 crc kubenswrapper[4789]: I0202 21:24:26.839196 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 02 21:24:26 crc kubenswrapper[4789]: I0202 21:24:26.943726 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 02 21:24:26 crc kubenswrapper[4789]: I0202 21:24:26.956239 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 02 21:24:26 crc kubenswrapper[4789]: I0202 21:24:26.976891 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 02 21:24:27 crc kubenswrapper[4789]: I0202 21:24:27.008005 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 02 21:24:27 crc kubenswrapper[4789]: I0202 21:24:27.179392 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 02 21:24:27 crc kubenswrapper[4789]: I0202 21:24:27.227948 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 02 21:24:27 crc kubenswrapper[4789]: I0202 21:24:27.245262 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 02 21:24:27 crc kubenswrapper[4789]: I0202 21:24:27.304179 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 02 21:24:27 crc kubenswrapper[4789]: I0202 21:24:27.304866 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 02 21:24:27 crc kubenswrapper[4789]: I0202 21:24:27.352385 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 02 21:24:27 crc kubenswrapper[4789]: I0202 21:24:27.365493 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 02 21:24:27 crc kubenswrapper[4789]: I0202 21:24:27.395847 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 02 21:24:27 crc kubenswrapper[4789]: I0202 21:24:27.438704 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 02 21:24:27 crc kubenswrapper[4789]: I0202 21:24:27.923539 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 02 21:24:28 crc kubenswrapper[4789]: I0202 21:24:28.031651 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 02 21:24:28 crc kubenswrapper[4789]: I0202 21:24:28.285668 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
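[Editor's note] The reflector.go:368 "Caches populated" lines above come from client-go reflectors: each one signals that the initial LIST for a watched object has landed in a local informer cache. A minimal sketch of that machinery follows; it is not the kubelet's exact wiring (the kubelet uses dedicated per-pod secret/configmap managers), and the kubeconfig path is illustrative.

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path is illustrative; any reachable kubeconfig works.
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// One informer factory scoped to a single namespace, mirroring the
	// per-namespace object-"<namespace>"/"<name>" sources logged above.
	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 10*time.Minute, informers.WithNamespace("openshift-apiserver"))
	cms := factory.Core().V1().ConfigMaps().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// This is the moment the reflector reports as "Caches populated":
	// the initial LIST is in the local store and WATCH has taken over.
	if !cache.WaitForCacheSync(stop, cms.HasSynced) {
		panic("informer cache never synced")
	}
	fmt.Println("Caches populated for *v1.ConfigMap")
}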
Feb 02 21:24:28 crc kubenswrapper[4789]: I0202 21:24:28.300286 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 02 21:24:28 crc kubenswrapper[4789]: I0202 21:24:28.515443 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Feb 02 21:24:28 crc kubenswrapper[4789]: I0202 21:24:28.582709 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 02 21:24:28 crc kubenswrapper[4789]: I0202 21:24:28.862621 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 02 21:24:29 crc kubenswrapper[4789]: I0202 21:24:29.229287 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 02 21:24:29 crc kubenswrapper[4789]: I0202 21:24:29.320654 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 02 21:24:29 crc kubenswrapper[4789]: I0202 21:24:29.403195 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 02 21:24:29 crc kubenswrapper[4789]: I0202 21:24:29.403321 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 21:24:29 crc kubenswrapper[4789]: I0202 21:24:29.427726 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 02 21:24:29 crc kubenswrapper[4789]: I0202 21:24:29.446802 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 02 21:24:29 crc kubenswrapper[4789]: I0202 21:24:29.446893 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 02 21:24:29 crc kubenswrapper[4789]: I0202 21:24:29.446944 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 02 21:24:29 crc kubenswrapper[4789]: I0202 21:24:29.447071 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 02 21:24:29 crc kubenswrapper[4789]: I0202 21:24:29.447145 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 02 21:24:29 crc kubenswrapper[4789]: I0202 21:24:29.447463 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 21:24:29 crc kubenswrapper[4789]: I0202 21:24:29.447470 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 21:24:29 crc kubenswrapper[4789]: I0202 21:24:29.447494 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 21:24:29 crc kubenswrapper[4789]: I0202 21:24:29.447525 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 21:24:29 crc kubenswrapper[4789]: I0202 21:24:29.459451 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
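[Editor's note] The UnmountVolume/TearDown sequence above is the volume manager's reconciler at work: the startup-monitor pod is gone, so its host-path volumes exist only in the "actual state of the world" and each gets unmounted. A toy sketch of that diff loop follows; names and types are illustrative, not kubelet's.

package main

import "fmt"

// Toy model of the reconciler pattern behind the entries above: once a
// pod is deleted, its volumes remain only in the actual state, so each
// one gets an UnmountVolume -> TearDown -> "Volume detached" pass.
func main() {
	desired := map[string]bool{} // pod deleted: no volumes desired
	actual := []string{"pod-resource-dir", "resource-dir", "manifests", "var-lock", "var-log"}

	for _, vol := range actual {
		if desired[vol] {
			continue
		}
		fmt.Printf("UnmountVolume started for volume %q\n", vol)
		// TearDown would remove the mount here; only on success does the
		// reconciler mark the volume detached in the actual state.
		fmt.Printf("Volume detached for volume %q\n", vol)
	}
}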
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:24:29 crc kubenswrapper[4789]: I0202 21:24:29.548874 4789 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 21:24:29 crc kubenswrapper[4789]: I0202 21:24:29.548920 4789 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 21:24:29 crc kubenswrapper[4789]: I0202 21:24:29.548930 4789 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 02 21:24:29 crc kubenswrapper[4789]: I0202 21:24:29.548938 4789 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 02 21:24:29 crc kubenswrapper[4789]: I0202 21:24:29.548946 4789 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 02 21:24:29 crc kubenswrapper[4789]: I0202 21:24:29.641113 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 02 21:24:29 crc kubenswrapper[4789]: I0202 21:24:29.677124 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 02 21:24:29 crc kubenswrapper[4789]: I0202 21:24:29.677184 4789 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="4b518cffb9b5cb9f4659132d8a4128d42cb33d5951fb9f951e7bc64a0d8c0ad7" exitCode=137 Feb 02 21:24:29 crc kubenswrapper[4789]: I0202 21:24:29.677236 4789 scope.go:117] "RemoveContainer" containerID="4b518cffb9b5cb9f4659132d8a4128d42cb33d5951fb9f951e7bc64a0d8c0ad7" Feb 02 21:24:29 crc kubenswrapper[4789]: I0202 21:24:29.677314 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 21:24:29 crc kubenswrapper[4789]: I0202 21:24:29.703869 4789 scope.go:117] "RemoveContainer" containerID="4b518cffb9b5cb9f4659132d8a4128d42cb33d5951fb9f951e7bc64a0d8c0ad7" Feb 02 21:24:29 crc kubenswrapper[4789]: E0202 21:24:29.704363 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b518cffb9b5cb9f4659132d8a4128d42cb33d5951fb9f951e7bc64a0d8c0ad7\": container with ID starting with 4b518cffb9b5cb9f4659132d8a4128d42cb33d5951fb9f951e7bc64a0d8c0ad7 not found: ID does not exist" containerID="4b518cffb9b5cb9f4659132d8a4128d42cb33d5951fb9f951e7bc64a0d8c0ad7" Feb 02 21:24:29 crc kubenswrapper[4789]: I0202 21:24:29.704406 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b518cffb9b5cb9f4659132d8a4128d42cb33d5951fb9f951e7bc64a0d8c0ad7"} err="failed to get container status \"4b518cffb9b5cb9f4659132d8a4128d42cb33d5951fb9f951e7bc64a0d8c0ad7\": rpc error: code = NotFound desc = could not find container \"4b518cffb9b5cb9f4659132d8a4128d42cb33d5951fb9f951e7bc64a0d8c0ad7\": container with ID starting with 4b518cffb9b5cb9f4659132d8a4128d42cb33d5951fb9f951e7bc64a0d8c0ad7 not found: ID does not exist" Feb 02 21:24:29 crc kubenswrapper[4789]: I0202 21:24:29.738546 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 02 21:24:30 crc kubenswrapper[4789]: I0202 21:24:30.427978 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 02 21:24:30 crc kubenswrapper[4789]: I0202 21:24:30.809280 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 02 21:24:30 crc kubenswrapper[4789]: I0202 21:24:30.949509 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 02 21:24:31 crc kubenswrapper[4789]: I0202 21:24:31.130560 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 21:24:40 crc kubenswrapper[4789]: I0202 21:24:40.184343 4789 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 02 21:24:47 crc kubenswrapper[4789]: I0202 21:24:47.792208 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 02 21:24:47 crc kubenswrapper[4789]: I0202 21:24:47.795831 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 02 21:24:47 crc kubenswrapper[4789]: I0202 21:24:47.795948 4789 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="4c73cb399e166961ac703b205b48ec29fa291102ec5a9383e0460ecb04755878" exitCode=137 Feb 02 21:24:47 crc kubenswrapper[4789]: I0202 21:24:47.796031 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"4c73cb399e166961ac703b205b48ec29fa291102ec5a9383e0460ecb04755878"} Feb 02 21:24:47 crc kubenswrapper[4789]: I0202 21:24:47.796126 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"78dd8d0edb739f86ec50ef3e496d1acd654b06a35a740b674a3b27bfda266299"} Feb 02 21:24:47 crc kubenswrapper[4789]: I0202 21:24:47.796163 4789 scope.go:117] "RemoveContainer" containerID="460e055777327e1734238bf51929751678ea5a757b00a344daa31c8c95e4c463" Feb 02 21:24:48 crc kubenswrapper[4789]: I0202 21:24:48.805772 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 02 21:24:57 crc kubenswrapper[4789]: I0202 21:24:57.122786 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 21:24:57 crc kubenswrapper[4789]: I0202 21:24:57.138163 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 21:24:57 crc kubenswrapper[4789]: I0202 21:24:57.655132 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 21:24:57 crc kubenswrapper[4789]: I0202 21:24:57.663145 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 21:25:08 crc kubenswrapper[4789]: I0202 21:25:08.480669 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4z5px"] Feb 02 21:25:08 crc kubenswrapper[4789]: I0202 21:25:08.481425 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-4z5px" podUID="6777175c-7525-4ae6-9b3e-391b3e21abf8" containerName="controller-manager" containerID="cri-o://c1f504f3ae1b387311e1902cf3465280092480c81575bb7c7622dd0298fc8324" gracePeriod=30 Feb 02 21:25:08 crc kubenswrapper[4789]: I0202 21:25:08.484478 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bdqgm"] Feb 02 21:25:08 crc kubenswrapper[4789]: I0202 21:25:08.484676 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bdqgm" podUID="50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169" containerName="route-controller-manager" containerID="cri-o://391d0bd791deb4acffb0e3d2d0c7ec607bb6e0f45bbd10001b41863fddbd0104" gracePeriod=30 Feb 02 21:25:08 crc kubenswrapper[4789]: I0202 21:25:08.950291 4789 generic.go:334] "Generic (PLEG): container finished" podID="6777175c-7525-4ae6-9b3e-391b3e21abf8" containerID="c1f504f3ae1b387311e1902cf3465280092480c81575bb7c7622dd0298fc8324" exitCode=0 Feb 02 21:25:08 crc kubenswrapper[4789]: I0202 21:25:08.950405 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4z5px" event={"ID":"6777175c-7525-4ae6-9b3e-391b3e21abf8","Type":"ContainerDied","Data":"c1f504f3ae1b387311e1902cf3465280092480c81575bb7c7622dd0298fc8324"} Feb 02 21:25:08 crc kubenswrapper[4789]: 
Feb 02 21:25:08 crc kubenswrapper[4789]: I0202 21:25:08.953501 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bdqgm" event={"ID":"50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169","Type":"ContainerDied","Data":"391d0bd791deb4acffb0e3d2d0c7ec607bb6e0f45bbd10001b41863fddbd0104"}
Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.464263 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bdqgm"
Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.527821 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169-client-ca\") pod \"50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169\" (UID: \"50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169\") "
Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.527878 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169-config\") pod \"50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169\" (UID: \"50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169\") "
Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.527913 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tfzp\" (UniqueName: \"kubernetes.io/projected/50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169-kube-api-access-6tfzp\") pod \"50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169\" (UID: \"50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169\") "
Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.527952 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169-serving-cert\") pod \"50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169\" (UID: \"50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169\") "
Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.529041 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169-client-ca" (OuterVolumeSpecName: "client-ca") pod "50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169" (UID: "50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.529235 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169-config" (OuterVolumeSpecName: "config") pod "50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169" (UID: "50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.536559 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169" (UID: "50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.537800 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169-kube-api-access-6tfzp" (OuterVolumeSpecName: "kube-api-access-6tfzp") pod "50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169" (UID: "50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169"). InnerVolumeSpecName "kube-api-access-6tfzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.574721 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4z5px" Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.628442 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6777175c-7525-4ae6-9b3e-391b3e21abf8-config\") pod \"6777175c-7525-4ae6-9b3e-391b3e21abf8\" (UID: \"6777175c-7525-4ae6-9b3e-391b3e21abf8\") " Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.628479 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6777175c-7525-4ae6-9b3e-391b3e21abf8-proxy-ca-bundles\") pod \"6777175c-7525-4ae6-9b3e-391b3e21abf8\" (UID: \"6777175c-7525-4ae6-9b3e-391b3e21abf8\") " Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.628508 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddt5h\" (UniqueName: \"kubernetes.io/projected/6777175c-7525-4ae6-9b3e-391b3e21abf8-kube-api-access-ddt5h\") pod \"6777175c-7525-4ae6-9b3e-391b3e21abf8\" (UID: \"6777175c-7525-4ae6-9b3e-391b3e21abf8\") " Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.628573 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6777175c-7525-4ae6-9b3e-391b3e21abf8-serving-cert\") pod \"6777175c-7525-4ae6-9b3e-391b3e21abf8\" (UID: \"6777175c-7525-4ae6-9b3e-391b3e21abf8\") " Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.628625 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6777175c-7525-4ae6-9b3e-391b3e21abf8-client-ca\") pod \"6777175c-7525-4ae6-9b3e-391b3e21abf8\" (UID: \"6777175c-7525-4ae6-9b3e-391b3e21abf8\") " Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.628837 4789 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.628847 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.628855 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tfzp\" (UniqueName: \"kubernetes.io/projected/50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169-kube-api-access-6tfzp\") on node \"crc\" DevicePath \"\"" Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.628864 4789 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 21:25:09 
crc kubenswrapper[4789]: I0202 21:25:09.629384 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6777175c-7525-4ae6-9b3e-391b3e21abf8-config" (OuterVolumeSpecName: "config") pod "6777175c-7525-4ae6-9b3e-391b3e21abf8" (UID: "6777175c-7525-4ae6-9b3e-391b3e21abf8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.629392 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6777175c-7525-4ae6-9b3e-391b3e21abf8-client-ca" (OuterVolumeSpecName: "client-ca") pod "6777175c-7525-4ae6-9b3e-391b3e21abf8" (UID: "6777175c-7525-4ae6-9b3e-391b3e21abf8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.629590 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6777175c-7525-4ae6-9b3e-391b3e21abf8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6777175c-7525-4ae6-9b3e-391b3e21abf8" (UID: "6777175c-7525-4ae6-9b3e-391b3e21abf8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.687434 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6777175c-7525-4ae6-9b3e-391b3e21abf8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6777175c-7525-4ae6-9b3e-391b3e21abf8" (UID: "6777175c-7525-4ae6-9b3e-391b3e21abf8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.687943 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6777175c-7525-4ae6-9b3e-391b3e21abf8-kube-api-access-ddt5h" (OuterVolumeSpecName: "kube-api-access-ddt5h") pod "6777175c-7525-4ae6-9b3e-391b3e21abf8" (UID: "6777175c-7525-4ae6-9b3e-391b3e21abf8"). InnerVolumeSpecName "kube-api-access-ddt5h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.729419 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6777175c-7525-4ae6-9b3e-391b3e21abf8-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.729453 4789 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6777175c-7525-4ae6-9b3e-391b3e21abf8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.729464 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddt5h\" (UniqueName: \"kubernetes.io/projected/6777175c-7525-4ae6-9b3e-391b3e21abf8-kube-api-access-ddt5h\") on node \"crc\" DevicePath \"\"" Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.729473 4789 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6777175c-7525-4ae6-9b3e-391b3e21abf8-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.729482 4789 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6777175c-7525-4ae6-9b3e-391b3e21abf8-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.909130 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-748d8f84b-lq9tq"] Feb 02 21:25:09 crc kubenswrapper[4789]: E0202 21:25:09.909376 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6777175c-7525-4ae6-9b3e-391b3e21abf8" containerName="controller-manager" Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.909393 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="6777175c-7525-4ae6-9b3e-391b3e21abf8" containerName="controller-manager" Feb 02 21:25:09 crc kubenswrapper[4789]: E0202 21:25:09.909403 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8cad602-dbda-4b61-a2b8-dc9f65726c1c" containerName="installer" Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.909409 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8cad602-dbda-4b61-a2b8-dc9f65726c1c" containerName="installer" Feb 02 21:25:09 crc kubenswrapper[4789]: E0202 21:25:09.909420 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.909427 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 02 21:25:09 crc kubenswrapper[4789]: E0202 21:25:09.909434 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169" containerName="route-controller-manager" Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.909440 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169" containerName="route-controller-manager" Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.909533 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="6777175c-7525-4ae6-9b3e-391b3e21abf8" containerName="controller-manager" Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.909543 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8cad602-dbda-4b61-a2b8-dc9f65726c1c" 
containerName="installer" Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.909554 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.909563 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169" containerName="route-controller-manager" Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.909993 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-748d8f84b-lq9tq" Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.912912 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-694bc5c7c7-4mvff"] Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.913663 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-694bc5c7c7-4mvff" Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.923129 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-748d8f84b-lq9tq"] Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.927192 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-694bc5c7c7-4mvff"] Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.931057 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kwr6\" (UniqueName: \"kubernetes.io/projected/e1476bc6-8e90-4ee5-9fed-15e7715823a4-kube-api-access-4kwr6\") pod \"route-controller-manager-694bc5c7c7-4mvff\" (UID: \"e1476bc6-8e90-4ee5-9fed-15e7715823a4\") " pod="openshift-route-controller-manager/route-controller-manager-694bc5c7c7-4mvff" Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.931109 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vnr8\" (UniqueName: \"kubernetes.io/projected/d794f011-01aa-4ff7-925c-d7b2223bdf0e-kube-api-access-7vnr8\") pod \"controller-manager-748d8f84b-lq9tq\" (UID: \"d794f011-01aa-4ff7-925c-d7b2223bdf0e\") " pod="openshift-controller-manager/controller-manager-748d8f84b-lq9tq" Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.931144 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d794f011-01aa-4ff7-925c-d7b2223bdf0e-config\") pod \"controller-manager-748d8f84b-lq9tq\" (UID: \"d794f011-01aa-4ff7-925c-d7b2223bdf0e\") " pod="openshift-controller-manager/controller-manager-748d8f84b-lq9tq" Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.931169 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1476bc6-8e90-4ee5-9fed-15e7715823a4-client-ca\") pod \"route-controller-manager-694bc5c7c7-4mvff\" (UID: \"e1476bc6-8e90-4ee5-9fed-15e7715823a4\") " pod="openshift-route-controller-manager/route-controller-manager-694bc5c7c7-4mvff" Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.931197 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1476bc6-8e90-4ee5-9fed-15e7715823a4-config\") pod 
\"route-controller-manager-694bc5c7c7-4mvff\" (UID: \"e1476bc6-8e90-4ee5-9fed-15e7715823a4\") " pod="openshift-route-controller-manager/route-controller-manager-694bc5c7c7-4mvff" Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.931218 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d794f011-01aa-4ff7-925c-d7b2223bdf0e-client-ca\") pod \"controller-manager-748d8f84b-lq9tq\" (UID: \"d794f011-01aa-4ff7-925c-d7b2223bdf0e\") " pod="openshift-controller-manager/controller-manager-748d8f84b-lq9tq" Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.931253 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d794f011-01aa-4ff7-925c-d7b2223bdf0e-proxy-ca-bundles\") pod \"controller-manager-748d8f84b-lq9tq\" (UID: \"d794f011-01aa-4ff7-925c-d7b2223bdf0e\") " pod="openshift-controller-manager/controller-manager-748d8f84b-lq9tq" Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.931276 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d794f011-01aa-4ff7-925c-d7b2223bdf0e-serving-cert\") pod \"controller-manager-748d8f84b-lq9tq\" (UID: \"d794f011-01aa-4ff7-925c-d7b2223bdf0e\") " pod="openshift-controller-manager/controller-manager-748d8f84b-lq9tq" Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.931318 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1476bc6-8e90-4ee5-9fed-15e7715823a4-serving-cert\") pod \"route-controller-manager-694bc5c7c7-4mvff\" (UID: \"e1476bc6-8e90-4ee5-9fed-15e7715823a4\") " pod="openshift-route-controller-manager/route-controller-manager-694bc5c7c7-4mvff" Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.961903 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bdqgm" event={"ID":"50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169","Type":"ContainerDied","Data":"c45c09642aad10c01c8b24b0bb37aab8b6119f44acdbf15c737612250557246a"} Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.962162 4789 scope.go:117] "RemoveContainer" containerID="391d0bd791deb4acffb0e3d2d0c7ec607bb6e0f45bbd10001b41863fddbd0104" Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.962318 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bdqgm" Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.966862 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4z5px" event={"ID":"6777175c-7525-4ae6-9b3e-391b3e21abf8","Type":"ContainerDied","Data":"cf13991a19505657ea1f94681cb0c0f157ad67e9729175ec9f35231006e8cf37"} Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.966940 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4z5px" Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.984021 4789 scope.go:117] "RemoveContainer" containerID="c1f504f3ae1b387311e1902cf3465280092480c81575bb7c7622dd0298fc8324" Feb 02 21:25:09 crc kubenswrapper[4789]: I0202 21:25:09.996145 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bdqgm"] Feb 02 21:25:10 crc kubenswrapper[4789]: I0202 21:25:10.002558 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bdqgm"] Feb 02 21:25:10 crc kubenswrapper[4789]: I0202 21:25:10.015035 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4z5px"] Feb 02 21:25:10 crc kubenswrapper[4789]: I0202 21:25:10.020020 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4z5px"] Feb 02 21:25:10 crc kubenswrapper[4789]: I0202 21:25:10.032535 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1476bc6-8e90-4ee5-9fed-15e7715823a4-config\") pod \"route-controller-manager-694bc5c7c7-4mvff\" (UID: \"e1476bc6-8e90-4ee5-9fed-15e7715823a4\") " pod="openshift-route-controller-manager/route-controller-manager-694bc5c7c7-4mvff" Feb 02 21:25:10 crc kubenswrapper[4789]: I0202 21:25:10.032611 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d794f011-01aa-4ff7-925c-d7b2223bdf0e-client-ca\") pod \"controller-manager-748d8f84b-lq9tq\" (UID: \"d794f011-01aa-4ff7-925c-d7b2223bdf0e\") " pod="openshift-controller-manager/controller-manager-748d8f84b-lq9tq" Feb 02 21:25:10 crc kubenswrapper[4789]: I0202 21:25:10.032665 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d794f011-01aa-4ff7-925c-d7b2223bdf0e-proxy-ca-bundles\") pod \"controller-manager-748d8f84b-lq9tq\" (UID: \"d794f011-01aa-4ff7-925c-d7b2223bdf0e\") " pod="openshift-controller-manager/controller-manager-748d8f84b-lq9tq" Feb 02 21:25:10 crc kubenswrapper[4789]: I0202 21:25:10.032686 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d794f011-01aa-4ff7-925c-d7b2223bdf0e-serving-cert\") pod \"controller-manager-748d8f84b-lq9tq\" (UID: \"d794f011-01aa-4ff7-925c-d7b2223bdf0e\") " pod="openshift-controller-manager/controller-manager-748d8f84b-lq9tq" Feb 02 21:25:10 crc kubenswrapper[4789]: I0202 21:25:10.032730 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1476bc6-8e90-4ee5-9fed-15e7715823a4-serving-cert\") pod \"route-controller-manager-694bc5c7c7-4mvff\" (UID: \"e1476bc6-8e90-4ee5-9fed-15e7715823a4\") " pod="openshift-route-controller-manager/route-controller-manager-694bc5c7c7-4mvff" Feb 02 21:25:10 crc kubenswrapper[4789]: I0202 21:25:10.033770 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d794f011-01aa-4ff7-925c-d7b2223bdf0e-client-ca\") pod \"controller-manager-748d8f84b-lq9tq\" (UID: \"d794f011-01aa-4ff7-925c-d7b2223bdf0e\") " 
pod="openshift-controller-manager/controller-manager-748d8f84b-lq9tq" Feb 02 21:25:10 crc kubenswrapper[4789]: I0202 21:25:10.033812 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1476bc6-8e90-4ee5-9fed-15e7715823a4-config\") pod \"route-controller-manager-694bc5c7c7-4mvff\" (UID: \"e1476bc6-8e90-4ee5-9fed-15e7715823a4\") " pod="openshift-route-controller-manager/route-controller-manager-694bc5c7c7-4mvff" Feb 02 21:25:10 crc kubenswrapper[4789]: I0202 21:25:10.034042 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d794f011-01aa-4ff7-925c-d7b2223bdf0e-proxy-ca-bundles\") pod \"controller-manager-748d8f84b-lq9tq\" (UID: \"d794f011-01aa-4ff7-925c-d7b2223bdf0e\") " pod="openshift-controller-manager/controller-manager-748d8f84b-lq9tq" Feb 02 21:25:10 crc kubenswrapper[4789]: I0202 21:25:10.033030 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kwr6\" (UniqueName: \"kubernetes.io/projected/e1476bc6-8e90-4ee5-9fed-15e7715823a4-kube-api-access-4kwr6\") pod \"route-controller-manager-694bc5c7c7-4mvff\" (UID: \"e1476bc6-8e90-4ee5-9fed-15e7715823a4\") " pod="openshift-route-controller-manager/route-controller-manager-694bc5c7c7-4mvff" Feb 02 21:25:10 crc kubenswrapper[4789]: I0202 21:25:10.034195 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vnr8\" (UniqueName: \"kubernetes.io/projected/d794f011-01aa-4ff7-925c-d7b2223bdf0e-kube-api-access-7vnr8\") pod \"controller-manager-748d8f84b-lq9tq\" (UID: \"d794f011-01aa-4ff7-925c-d7b2223bdf0e\") " pod="openshift-controller-manager/controller-manager-748d8f84b-lq9tq" Feb 02 21:25:10 crc kubenswrapper[4789]: I0202 21:25:10.034813 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d794f011-01aa-4ff7-925c-d7b2223bdf0e-config\") pod \"controller-manager-748d8f84b-lq9tq\" (UID: \"d794f011-01aa-4ff7-925c-d7b2223bdf0e\") " pod="openshift-controller-manager/controller-manager-748d8f84b-lq9tq" Feb 02 21:25:10 crc kubenswrapper[4789]: I0202 21:25:10.048877 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d794f011-01aa-4ff7-925c-d7b2223bdf0e-serving-cert\") pod \"controller-manager-748d8f84b-lq9tq\" (UID: \"d794f011-01aa-4ff7-925c-d7b2223bdf0e\") " pod="openshift-controller-manager/controller-manager-748d8f84b-lq9tq" Feb 02 21:25:10 crc kubenswrapper[4789]: I0202 21:25:10.048992 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kwr6\" (UniqueName: \"kubernetes.io/projected/e1476bc6-8e90-4ee5-9fed-15e7715823a4-kube-api-access-4kwr6\") pod \"route-controller-manager-694bc5c7c7-4mvff\" (UID: \"e1476bc6-8e90-4ee5-9fed-15e7715823a4\") " pod="openshift-route-controller-manager/route-controller-manager-694bc5c7c7-4mvff" Feb 02 21:25:10 crc kubenswrapper[4789]: I0202 21:25:10.050369 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vnr8\" (UniqueName: \"kubernetes.io/projected/d794f011-01aa-4ff7-925c-d7b2223bdf0e-kube-api-access-7vnr8\") pod \"controller-manager-748d8f84b-lq9tq\" (UID: \"d794f011-01aa-4ff7-925c-d7b2223bdf0e\") " pod="openshift-controller-manager/controller-manager-748d8f84b-lq9tq" Feb 02 21:25:10 crc kubenswrapper[4789]: I0202 21:25:10.052212 4789 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1476bc6-8e90-4ee5-9fed-15e7715823a4-serving-cert\") pod \"route-controller-manager-694bc5c7c7-4mvff\" (UID: \"e1476bc6-8e90-4ee5-9fed-15e7715823a4\") " pod="openshift-route-controller-manager/route-controller-manager-694bc5c7c7-4mvff" Feb 02 21:25:10 crc kubenswrapper[4789]: I0202 21:25:10.052769 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d794f011-01aa-4ff7-925c-d7b2223bdf0e-config\") pod \"controller-manager-748d8f84b-lq9tq\" (UID: \"d794f011-01aa-4ff7-925c-d7b2223bdf0e\") " pod="openshift-controller-manager/controller-manager-748d8f84b-lq9tq" Feb 02 21:25:10 crc kubenswrapper[4789]: I0202 21:25:10.052862 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1476bc6-8e90-4ee5-9fed-15e7715823a4-client-ca\") pod \"route-controller-manager-694bc5c7c7-4mvff\" (UID: \"e1476bc6-8e90-4ee5-9fed-15e7715823a4\") " pod="openshift-route-controller-manager/route-controller-manager-694bc5c7c7-4mvff" Feb 02 21:25:10 crc kubenswrapper[4789]: I0202 21:25:10.053754 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1476bc6-8e90-4ee5-9fed-15e7715823a4-client-ca\") pod \"route-controller-manager-694bc5c7c7-4mvff\" (UID: \"e1476bc6-8e90-4ee5-9fed-15e7715823a4\") " pod="openshift-route-controller-manager/route-controller-manager-694bc5c7c7-4mvff" Feb 02 21:25:10 crc kubenswrapper[4789]: I0202 21:25:10.227281 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-748d8f84b-lq9tq" Feb 02 21:25:10 crc kubenswrapper[4789]: I0202 21:25:10.233477 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-694bc5c7c7-4mvff" Feb 02 21:25:10 crc kubenswrapper[4789]: I0202 21:25:10.433400 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169" path="/var/lib/kubelet/pods/50fa75e9-6b6f-4dc5-a3be-5d3e2d7f8169/volumes" Feb 02 21:25:10 crc kubenswrapper[4789]: I0202 21:25:10.434288 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6777175c-7525-4ae6-9b3e-391b3e21abf8" path="/var/lib/kubelet/pods/6777175c-7525-4ae6-9b3e-391b3e21abf8/volumes" Feb 02 21:25:10 crc kubenswrapper[4789]: I0202 21:25:10.517508 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-748d8f84b-lq9tq"] Feb 02 21:25:10 crc kubenswrapper[4789]: I0202 21:25:10.606637 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-694bc5c7c7-4mvff"] Feb 02 21:25:10 crc kubenswrapper[4789]: I0202 21:25:10.972537 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-694bc5c7c7-4mvff" event={"ID":"e1476bc6-8e90-4ee5-9fed-15e7715823a4","Type":"ContainerStarted","Data":"09f658a6b96d0aa5e52e14c941e84fdb23f28371085eea16a5b6cdc06b0a0864"} Feb 02 21:25:10 crc kubenswrapper[4789]: I0202 21:25:10.972820 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-694bc5c7c7-4mvff" event={"ID":"e1476bc6-8e90-4ee5-9fed-15e7715823a4","Type":"ContainerStarted","Data":"52b4d445c4aab166c42c0a4ea5606a394d3388d9d61ce7cd08e3d6128c1ec124"} Feb 02 21:25:10 crc kubenswrapper[4789]: I0202 21:25:10.972838 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-694bc5c7c7-4mvff" Feb 02 21:25:10 crc kubenswrapper[4789]: I0202 21:25:10.976875 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-748d8f84b-lq9tq" event={"ID":"d794f011-01aa-4ff7-925c-d7b2223bdf0e","Type":"ContainerStarted","Data":"c3a04c067c0235facda414e4916378264439fb7f34c51f2ce4b0bba38e3637f8"} Feb 02 21:25:10 crc kubenswrapper[4789]: I0202 21:25:10.976903 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-748d8f84b-lq9tq" event={"ID":"d794f011-01aa-4ff7-925c-d7b2223bdf0e","Type":"ContainerStarted","Data":"e03ffa584bec20ce2c509cb48c4eb730389159f9a8a277618608c203d4b0ca52"} Feb 02 21:25:10 crc kubenswrapper[4789]: I0202 21:25:10.977697 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-748d8f84b-lq9tq" Feb 02 21:25:10 crc kubenswrapper[4789]: I0202 21:25:10.981944 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-748d8f84b-lq9tq" Feb 02 21:25:10 crc kubenswrapper[4789]: I0202 21:25:10.994623 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-694bc5c7c7-4mvff" podStartSLOduration=2.994607685 podStartE2EDuration="2.994607685s" podCreationTimestamp="2026-02-02 21:25:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:25:10.992977888 +0000 UTC m=+331.288002907" 
Feb 02 21:25:11 crc kubenswrapper[4789]: I0202 21:25:11.007332 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-748d8f84b-lq9tq" podStartSLOduration=3.007314207 podStartE2EDuration="3.007314207s" podCreationTimestamp="2026-02-02 21:25:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:25:11.003887469 +0000 UTC m=+331.298912488" watchObservedRunningTime="2026-02-02 21:25:11.007314207 +0000 UTC m=+331.302339226"
Feb 02 21:25:11 crc kubenswrapper[4789]: I0202 21:25:11.260341 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-694bc5c7c7-4mvff"
Feb 02 21:25:18 crc kubenswrapper[4789]: I0202 21:25:18.554030 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-748d8f84b-lq9tq"]
Feb 02 21:25:18 crc kubenswrapper[4789]: I0202 21:25:18.555205 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-748d8f84b-lq9tq" podUID="d794f011-01aa-4ff7-925c-d7b2223bdf0e" containerName="controller-manager" containerID="cri-o://c3a04c067c0235facda414e4916378264439fb7f34c51f2ce4b0bba38e3637f8" gracePeriod=30
Feb 02 21:25:19 crc kubenswrapper[4789]: I0202 21:25:19.041520 4789 generic.go:334] "Generic (PLEG): container finished" podID="d794f011-01aa-4ff7-925c-d7b2223bdf0e" containerID="c3a04c067c0235facda414e4916378264439fb7f34c51f2ce4b0bba38e3637f8" exitCode=0
Feb 02 21:25:19 crc kubenswrapper[4789]: I0202 21:25:19.041647 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-748d8f84b-lq9tq" event={"ID":"d794f011-01aa-4ff7-925c-d7b2223bdf0e","Type":"ContainerDied","Data":"c3a04c067c0235facda414e4916378264439fb7f34c51f2ce4b0bba38e3637f8"}
Feb 02 21:25:19 crc kubenswrapper[4789]: I0202 21:25:19.202984 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-748d8f84b-lq9tq"
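[Editor's note] Throughout this log, each "SyncLoop (probe)" line marks a transition rather than an individual probe run: the startup probe flips unhealthy -> started, and the readiness probe flips "" -> ready. A condensed model of that change-only notification follows; the types are illustrative, not the kubelet's prober types.

package main

import "fmt"

// A probe worker caches the last result and only pushes an update into
// the sync loop when it changes, so each probe appears in the log as a
// pair of transitions rather than a stream of identical results.
func main() {
	results := []string{"unhealthy", "unhealthy", "started"}
	last := ""
	for _, r := range results {
		if r == last {
			continue // unchanged results are not re-announced
		}
		fmt.Printf("SyncLoop (probe) probe=%q status=%q\n", "startup", r)
		last = r
	}
}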
Need to start a new one" pod="openshift-controller-manager/controller-manager-748d8f84b-lq9tq" Feb 02 21:25:19 crc kubenswrapper[4789]: I0202 21:25:19.267606 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d794f011-01aa-4ff7-925c-d7b2223bdf0e-serving-cert\") pod \"d794f011-01aa-4ff7-925c-d7b2223bdf0e\" (UID: \"d794f011-01aa-4ff7-925c-d7b2223bdf0e\") " Feb 02 21:25:19 crc kubenswrapper[4789]: I0202 21:25:19.267766 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vnr8\" (UniqueName: \"kubernetes.io/projected/d794f011-01aa-4ff7-925c-d7b2223bdf0e-kube-api-access-7vnr8\") pod \"d794f011-01aa-4ff7-925c-d7b2223bdf0e\" (UID: \"d794f011-01aa-4ff7-925c-d7b2223bdf0e\") " Feb 02 21:25:19 crc kubenswrapper[4789]: I0202 21:25:19.267854 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d794f011-01aa-4ff7-925c-d7b2223bdf0e-config\") pod \"d794f011-01aa-4ff7-925c-d7b2223bdf0e\" (UID: \"d794f011-01aa-4ff7-925c-d7b2223bdf0e\") " Feb 02 21:25:19 crc kubenswrapper[4789]: I0202 21:25:19.267972 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d794f011-01aa-4ff7-925c-d7b2223bdf0e-client-ca\") pod \"d794f011-01aa-4ff7-925c-d7b2223bdf0e\" (UID: \"d794f011-01aa-4ff7-925c-d7b2223bdf0e\") " Feb 02 21:25:19 crc kubenswrapper[4789]: I0202 21:25:19.268044 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d794f011-01aa-4ff7-925c-d7b2223bdf0e-proxy-ca-bundles\") pod \"d794f011-01aa-4ff7-925c-d7b2223bdf0e\" (UID: \"d794f011-01aa-4ff7-925c-d7b2223bdf0e\") " Feb 02 21:25:19 crc kubenswrapper[4789]: I0202 21:25:19.270298 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d794f011-01aa-4ff7-925c-d7b2223bdf0e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d794f011-01aa-4ff7-925c-d7b2223bdf0e" (UID: "d794f011-01aa-4ff7-925c-d7b2223bdf0e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:25:19 crc kubenswrapper[4789]: I0202 21:25:19.270354 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d794f011-01aa-4ff7-925c-d7b2223bdf0e-config" (OuterVolumeSpecName: "config") pod "d794f011-01aa-4ff7-925c-d7b2223bdf0e" (UID: "d794f011-01aa-4ff7-925c-d7b2223bdf0e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:25:19 crc kubenswrapper[4789]: I0202 21:25:19.270918 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d794f011-01aa-4ff7-925c-d7b2223bdf0e-client-ca" (OuterVolumeSpecName: "client-ca") pod "d794f011-01aa-4ff7-925c-d7b2223bdf0e" (UID: "d794f011-01aa-4ff7-925c-d7b2223bdf0e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:25:19 crc kubenswrapper[4789]: I0202 21:25:19.280171 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d794f011-01aa-4ff7-925c-d7b2223bdf0e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d794f011-01aa-4ff7-925c-d7b2223bdf0e" (UID: "d794f011-01aa-4ff7-925c-d7b2223bdf0e"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:25:19 crc kubenswrapper[4789]: I0202 21:25:19.280237 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d794f011-01aa-4ff7-925c-d7b2223bdf0e-kube-api-access-7vnr8" (OuterVolumeSpecName: "kube-api-access-7vnr8") pod "d794f011-01aa-4ff7-925c-d7b2223bdf0e" (UID: "d794f011-01aa-4ff7-925c-d7b2223bdf0e"). InnerVolumeSpecName "kube-api-access-7vnr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:25:19 crc kubenswrapper[4789]: I0202 21:25:19.370286 4789 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d794f011-01aa-4ff7-925c-d7b2223bdf0e-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 21:25:19 crc kubenswrapper[4789]: I0202 21:25:19.370343 4789 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d794f011-01aa-4ff7-925c-d7b2223bdf0e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 21:25:19 crc kubenswrapper[4789]: I0202 21:25:19.370365 4789 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d794f011-01aa-4ff7-925c-d7b2223bdf0e-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 21:25:19 crc kubenswrapper[4789]: I0202 21:25:19.370383 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vnr8\" (UniqueName: \"kubernetes.io/projected/d794f011-01aa-4ff7-925c-d7b2223bdf0e-kube-api-access-7vnr8\") on node \"crc\" DevicePath \"\"" Feb 02 21:25:19 crc kubenswrapper[4789]: I0202 21:25:19.370405 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d794f011-01aa-4ff7-925c-d7b2223bdf0e-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:25:19 crc kubenswrapper[4789]: I0202 21:25:19.924898 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5dc8478dc5-7pn4j"] Feb 02 21:25:19 crc kubenswrapper[4789]: E0202 21:25:19.925245 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d794f011-01aa-4ff7-925c-d7b2223bdf0e" containerName="controller-manager" Feb 02 21:25:19 crc kubenswrapper[4789]: I0202 21:25:19.925267 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="d794f011-01aa-4ff7-925c-d7b2223bdf0e" containerName="controller-manager" Feb 02 21:25:19 crc kubenswrapper[4789]: I0202 21:25:19.925440 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="d794f011-01aa-4ff7-925c-d7b2223bdf0e" containerName="controller-manager" Feb 02 21:25:19 crc kubenswrapper[4789]: I0202 21:25:19.926063 4789 util.go:30] "No sandbox for pod can be found. 
Feb 02 21:25:19 crc kubenswrapper[4789]: I0202 21:25:19.940646 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5dc8478dc5-7pn4j"]
Feb 02 21:25:19 crc kubenswrapper[4789]: I0202 21:25:19.979341 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77982fc2-b527-48a6-ab37-56c9b9b443f8-proxy-ca-bundles\") pod \"controller-manager-5dc8478dc5-7pn4j\" (UID: \"77982fc2-b527-48a6-ab37-56c9b9b443f8\") " pod="openshift-controller-manager/controller-manager-5dc8478dc5-7pn4j"
Feb 02 21:25:19 crc kubenswrapper[4789]: I0202 21:25:19.979404 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77982fc2-b527-48a6-ab37-56c9b9b443f8-serving-cert\") pod \"controller-manager-5dc8478dc5-7pn4j\" (UID: \"77982fc2-b527-48a6-ab37-56c9b9b443f8\") " pod="openshift-controller-manager/controller-manager-5dc8478dc5-7pn4j"
Feb 02 21:25:19 crc kubenswrapper[4789]: I0202 21:25:19.979446 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77982fc2-b527-48a6-ab37-56c9b9b443f8-config\") pod \"controller-manager-5dc8478dc5-7pn4j\" (UID: \"77982fc2-b527-48a6-ab37-56c9b9b443f8\") " pod="openshift-controller-manager/controller-manager-5dc8478dc5-7pn4j"
Feb 02 21:25:19 crc kubenswrapper[4789]: I0202 21:25:19.979479 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhp9f\" (UniqueName: \"kubernetes.io/projected/77982fc2-b527-48a6-ab37-56c9b9b443f8-kube-api-access-rhp9f\") pod \"controller-manager-5dc8478dc5-7pn4j\" (UID: \"77982fc2-b527-48a6-ab37-56c9b9b443f8\") " pod="openshift-controller-manager/controller-manager-5dc8478dc5-7pn4j"
Feb 02 21:25:19 crc kubenswrapper[4789]: I0202 21:25:19.979647 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/77982fc2-b527-48a6-ab37-56c9b9b443f8-client-ca\") pod \"controller-manager-5dc8478dc5-7pn4j\" (UID: \"77982fc2-b527-48a6-ab37-56c9b9b443f8\") " pod="openshift-controller-manager/controller-manager-5dc8478dc5-7pn4j"
Feb 02 21:25:20 crc kubenswrapper[4789]: I0202 21:25:20.051017 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-748d8f84b-lq9tq" event={"ID":"d794f011-01aa-4ff7-925c-d7b2223bdf0e","Type":"ContainerDied","Data":"e03ffa584bec20ce2c509cb48c4eb730389159f9a8a277618608c203d4b0ca52"}
Feb 02 21:25:20 crc kubenswrapper[4789]: I0202 21:25:20.051085 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-748d8f84b-lq9tq"
Feb 02 21:25:20 crc kubenswrapper[4789]: I0202 21:25:20.051098 4789 scope.go:117] "RemoveContainer" containerID="c3a04c067c0235facda414e4916378264439fb7f34c51f2ce4b0bba38e3637f8"
Feb 02 21:25:20 crc kubenswrapper[4789]: I0202 21:25:20.081643 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77982fc2-b527-48a6-ab37-56c9b9b443f8-config\") pod \"controller-manager-5dc8478dc5-7pn4j\" (UID: \"77982fc2-b527-48a6-ab37-56c9b9b443f8\") " pod="openshift-controller-manager/controller-manager-5dc8478dc5-7pn4j"
Feb 02 21:25:20 crc kubenswrapper[4789]: I0202 21:25:20.081714 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhp9f\" (UniqueName: \"kubernetes.io/projected/77982fc2-b527-48a6-ab37-56c9b9b443f8-kube-api-access-rhp9f\") pod \"controller-manager-5dc8478dc5-7pn4j\" (UID: \"77982fc2-b527-48a6-ab37-56c9b9b443f8\") " pod="openshift-controller-manager/controller-manager-5dc8478dc5-7pn4j"
Feb 02 21:25:20 crc kubenswrapper[4789]: I0202 21:25:20.081804 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/77982fc2-b527-48a6-ab37-56c9b9b443f8-client-ca\") pod \"controller-manager-5dc8478dc5-7pn4j\" (UID: \"77982fc2-b527-48a6-ab37-56c9b9b443f8\") " pod="openshift-controller-manager/controller-manager-5dc8478dc5-7pn4j"
Feb 02 21:25:20 crc kubenswrapper[4789]: I0202 21:25:20.081896 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77982fc2-b527-48a6-ab37-56c9b9b443f8-proxy-ca-bundles\") pod \"controller-manager-5dc8478dc5-7pn4j\" (UID: \"77982fc2-b527-48a6-ab37-56c9b9b443f8\") " pod="openshift-controller-manager/controller-manager-5dc8478dc5-7pn4j"
Feb 02 21:25:20 crc kubenswrapper[4789]: I0202 21:25:20.081939 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77982fc2-b527-48a6-ab37-56c9b9b443f8-serving-cert\") pod \"controller-manager-5dc8478dc5-7pn4j\" (UID: \"77982fc2-b527-48a6-ab37-56c9b9b443f8\") " pod="openshift-controller-manager/controller-manager-5dc8478dc5-7pn4j"
Feb 02 21:25:20 crc kubenswrapper[4789]: I0202 21:25:20.089005 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77982fc2-b527-48a6-ab37-56c9b9b443f8-proxy-ca-bundles\") pod \"controller-manager-5dc8478dc5-7pn4j\" (UID: \"77982fc2-b527-48a6-ab37-56c9b9b443f8\") " pod="openshift-controller-manager/controller-manager-5dc8478dc5-7pn4j"
Feb 02 21:25:20 crc kubenswrapper[4789]: I0202 21:25:20.092348 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77982fc2-b527-48a6-ab37-56c9b9b443f8-config\") pod \"controller-manager-5dc8478dc5-7pn4j\" (UID: \"77982fc2-b527-48a6-ab37-56c9b9b443f8\") " pod="openshift-controller-manager/controller-manager-5dc8478dc5-7pn4j"
Feb 02 21:25:20 crc kubenswrapper[4789]: I0202 21:25:20.096027 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77982fc2-b527-48a6-ab37-56c9b9b443f8-serving-cert\") pod \"controller-manager-5dc8478dc5-7pn4j\" (UID: \"77982fc2-b527-48a6-ab37-56c9b9b443f8\") " pod="openshift-controller-manager/controller-manager-5dc8478dc5-7pn4j"
Feb 02 21:25:20 crc kubenswrapper[4789]: I0202 21:25:20.097449 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/77982fc2-b527-48a6-ab37-56c9b9b443f8-client-ca\") pod \"controller-manager-5dc8478dc5-7pn4j\" (UID: \"77982fc2-b527-48a6-ab37-56c9b9b443f8\") " pod="openshift-controller-manager/controller-manager-5dc8478dc5-7pn4j"
Feb 02 21:25:20 crc kubenswrapper[4789]: I0202 21:25:20.097713 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-748d8f84b-lq9tq"]
Feb 02 21:25:20 crc kubenswrapper[4789]: I0202 21:25:20.108570 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-748d8f84b-lq9tq"]
Feb 02 21:25:20 crc kubenswrapper[4789]: I0202 21:25:20.112642 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhp9f\" (UniqueName: \"kubernetes.io/projected/77982fc2-b527-48a6-ab37-56c9b9b443f8-kube-api-access-rhp9f\") pod \"controller-manager-5dc8478dc5-7pn4j\" (UID: \"77982fc2-b527-48a6-ab37-56c9b9b443f8\") " pod="openshift-controller-manager/controller-manager-5dc8478dc5-7pn4j"
Feb 02 21:25:20 crc kubenswrapper[4789]: I0202 21:25:20.253435 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5dc8478dc5-7pn4j"
Feb 02 21:25:20 crc kubenswrapper[4789]: I0202 21:25:20.435004 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d794f011-01aa-4ff7-925c-d7b2223bdf0e" path="/var/lib/kubelet/pods/d794f011-01aa-4ff7-925c-d7b2223bdf0e/volumes"
Feb 02 21:25:20 crc kubenswrapper[4789]: I0202 21:25:20.532257 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5dc8478dc5-7pn4j"]
Feb 02 21:25:20 crc kubenswrapper[4789]: W0202 21:25:20.546332 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77982fc2_b527_48a6_ab37_56c9b9b443f8.slice/crio-e77633fa7627eddf5801b3fbcbf780dc49b803aa71c5595aeeeec2fba3d6b989 WatchSource:0}: Error finding container e77633fa7627eddf5801b3fbcbf780dc49b803aa71c5595aeeeec2fba3d6b989: Status 404 returned error can't find the container with id e77633fa7627eddf5801b3fbcbf780dc49b803aa71c5595aeeeec2fba3d6b989
Feb 02 21:25:21 crc kubenswrapper[4789]: I0202 21:25:21.057502 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5dc8478dc5-7pn4j" event={"ID":"77982fc2-b527-48a6-ab37-56c9b9b443f8","Type":"ContainerStarted","Data":"eb732b90232918b4187952550ac2a532c5708fcc878097186c615b9448471d1f"}
Feb 02 21:25:21 crc kubenswrapper[4789]: I0202 21:25:21.057905 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5dc8478dc5-7pn4j" event={"ID":"77982fc2-b527-48a6-ab37-56c9b9b443f8","Type":"ContainerStarted","Data":"e77633fa7627eddf5801b3fbcbf780dc49b803aa71c5595aeeeec2fba3d6b989"}
Feb 02 21:25:21 crc kubenswrapper[4789]: I0202 21:25:21.057926 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5dc8478dc5-7pn4j"
Feb 02 21:25:21 crc kubenswrapper[4789]: I0202 21:25:21.061605 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5dc8478dc5-7pn4j"
Feb 02 21:25:21 crc kubenswrapper[4789]: I0202 21:25:21.076849 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5dc8478dc5-7pn4j" podStartSLOduration=3.076831026 podStartE2EDuration="3.076831026s" podCreationTimestamp="2026-02-02 21:25:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:25:21.073604844 +0000 UTC m=+341.368629873" watchObservedRunningTime="2026-02-02 21:25:21.076831026 +0000 UTC m=+341.371856055"
Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.054966 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f288q"]
Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.055912 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f288q" podUID="750c480c-359c-47cc-9cc1-72c36bc5c783" containerName="registry-server" containerID="cri-o://8308fd1a7bb9bcaa33faa9a92962b285bc117f943be24e2dbb1701244f146a4b" gracePeriod=30
Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.070993 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ltv7s"]
Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.071347 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ltv7s" podUID="82a7bf20-8db7-4d0f-91d4-a85ae5da91f5" containerName="registry-server" containerID="cri-o://2a4cab794383b216ccbb9f3936ff4f7b6210e83ed15250e69065668dbfed4b6b" gracePeriod=30
Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.083727 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7sn8m"]
Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.084130 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-7sn8m" podUID="6559dcc4-e08f-4c1b-89b4-164673cd2ed0" containerName="marketplace-operator" containerID="cri-o://83d4500eef1a1480016f0a774d8f505793fe5d64c66c533f6875f2d3f0ee8b35" gracePeriod=30
Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.093981 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mv4vf"]
Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.094300 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mv4vf" podUID="59a1480d-000f-481e-b287-78e39812c69b" containerName="registry-server" containerID="cri-o://65a1c1171f5c0fda293b842ce4780d9e22281e43ce2d206959577566ddc7dd1a" gracePeriod=30
Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.106213 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cjz6p"]
Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.107128 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cjz6p" podUID="81c40f7e-bff1-432e-95e4-dfeeba942abc" containerName="registry-server" containerID="cri-o://3d7b48a2741295411897bad63f9bfcec9147766f4a7ea8d8866401ec5c0eed52" gracePeriod=30
Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.109378 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jppg9"]
pods=["openshift-marketplace/marketplace-operator-79b997595-jppg9"] Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.110679 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jppg9" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.113516 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jppg9"] Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.235701 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/baf8f0ee-a9ae-4e65-ad37-b9cea71e0a91-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jppg9\" (UID: \"baf8f0ee-a9ae-4e65-ad37-b9cea71e0a91\") " pod="openshift-marketplace/marketplace-operator-79b997595-jppg9" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.236090 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jps9\" (UniqueName: \"kubernetes.io/projected/baf8f0ee-a9ae-4e65-ad37-b9cea71e0a91-kube-api-access-9jps9\") pod \"marketplace-operator-79b997595-jppg9\" (UID: \"baf8f0ee-a9ae-4e65-ad37-b9cea71e0a91\") " pod="openshift-marketplace/marketplace-operator-79b997595-jppg9" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.236134 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/baf8f0ee-a9ae-4e65-ad37-b9cea71e0a91-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jppg9\" (UID: \"baf8f0ee-a9ae-4e65-ad37-b9cea71e0a91\") " pod="openshift-marketplace/marketplace-operator-79b997595-jppg9" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.259461 4789 generic.go:334] "Generic (PLEG): container finished" podID="59a1480d-000f-481e-b287-78e39812c69b" containerID="65a1c1171f5c0fda293b842ce4780d9e22281e43ce2d206959577566ddc7dd1a" exitCode=0 Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.259533 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mv4vf" event={"ID":"59a1480d-000f-481e-b287-78e39812c69b","Type":"ContainerDied","Data":"65a1c1171f5c0fda293b842ce4780d9e22281e43ce2d206959577566ddc7dd1a"} Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.272931 4789 generic.go:334] "Generic (PLEG): container finished" podID="81c40f7e-bff1-432e-95e4-dfeeba942abc" containerID="3d7b48a2741295411897bad63f9bfcec9147766f4a7ea8d8866401ec5c0eed52" exitCode=0 Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.272986 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cjz6p" event={"ID":"81c40f7e-bff1-432e-95e4-dfeeba942abc","Type":"ContainerDied","Data":"3d7b48a2741295411897bad63f9bfcec9147766f4a7ea8d8866401ec5c0eed52"} Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.276494 4789 generic.go:334] "Generic (PLEG): container finished" podID="750c480c-359c-47cc-9cc1-72c36bc5c783" containerID="8308fd1a7bb9bcaa33faa9a92962b285bc117f943be24e2dbb1701244f146a4b" exitCode=0 Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.276555 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f288q" event={"ID":"750c480c-359c-47cc-9cc1-72c36bc5c783","Type":"ContainerDied","Data":"8308fd1a7bb9bcaa33faa9a92962b285bc117f943be24e2dbb1701244f146a4b"} Feb 02 
21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.277797 4789 generic.go:334] "Generic (PLEG): container finished" podID="6559dcc4-e08f-4c1b-89b4-164673cd2ed0" containerID="83d4500eef1a1480016f0a774d8f505793fe5d64c66c533f6875f2d3f0ee8b35" exitCode=0 Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.277829 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7sn8m" event={"ID":"6559dcc4-e08f-4c1b-89b4-164673cd2ed0","Type":"ContainerDied","Data":"83d4500eef1a1480016f0a774d8f505793fe5d64c66c533f6875f2d3f0ee8b35"} Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.287409 4789 generic.go:334] "Generic (PLEG): container finished" podID="82a7bf20-8db7-4d0f-91d4-a85ae5da91f5" containerID="2a4cab794383b216ccbb9f3936ff4f7b6210e83ed15250e69065668dbfed4b6b" exitCode=0 Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.287438 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ltv7s" event={"ID":"82a7bf20-8db7-4d0f-91d4-a85ae5da91f5","Type":"ContainerDied","Data":"2a4cab794383b216ccbb9f3936ff4f7b6210e83ed15250e69065668dbfed4b6b"} Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.337400 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/baf8f0ee-a9ae-4e65-ad37-b9cea71e0a91-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jppg9\" (UID: \"baf8f0ee-a9ae-4e65-ad37-b9cea71e0a91\") " pod="openshift-marketplace/marketplace-operator-79b997595-jppg9" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.337484 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jps9\" (UniqueName: \"kubernetes.io/projected/baf8f0ee-a9ae-4e65-ad37-b9cea71e0a91-kube-api-access-9jps9\") pod \"marketplace-operator-79b997595-jppg9\" (UID: \"baf8f0ee-a9ae-4e65-ad37-b9cea71e0a91\") " pod="openshift-marketplace/marketplace-operator-79b997595-jppg9" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.337895 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/baf8f0ee-a9ae-4e65-ad37-b9cea71e0a91-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jppg9\" (UID: \"baf8f0ee-a9ae-4e65-ad37-b9cea71e0a91\") " pod="openshift-marketplace/marketplace-operator-79b997595-jppg9" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.339605 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/baf8f0ee-a9ae-4e65-ad37-b9cea71e0a91-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jppg9\" (UID: \"baf8f0ee-a9ae-4e65-ad37-b9cea71e0a91\") " pod="openshift-marketplace/marketplace-operator-79b997595-jppg9" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.352328 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/baf8f0ee-a9ae-4e65-ad37-b9cea71e0a91-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jppg9\" (UID: \"baf8f0ee-a9ae-4e65-ad37-b9cea71e0a91\") " pod="openshift-marketplace/marketplace-operator-79b997595-jppg9" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.354356 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jps9\" (UniqueName: 
\"kubernetes.io/projected/baf8f0ee-a9ae-4e65-ad37-b9cea71e0a91-kube-api-access-9jps9\") pod \"marketplace-operator-79b997595-jppg9\" (UID: \"baf8f0ee-a9ae-4e65-ad37-b9cea71e0a91\") " pod="openshift-marketplace/marketplace-operator-79b997595-jppg9" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.577314 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jppg9" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.591088 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f288q" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.648518 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/750c480c-359c-47cc-9cc1-72c36bc5c783-catalog-content\") pod \"750c480c-359c-47cc-9cc1-72c36bc5c783\" (UID: \"750c480c-359c-47cc-9cc1-72c36bc5c783\") " Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.648607 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/750c480c-359c-47cc-9cc1-72c36bc5c783-utilities\") pod \"750c480c-359c-47cc-9cc1-72c36bc5c783\" (UID: \"750c480c-359c-47cc-9cc1-72c36bc5c783\") " Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.648725 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp5n7\" (UniqueName: \"kubernetes.io/projected/750c480c-359c-47cc-9cc1-72c36bc5c783-kube-api-access-cp5n7\") pod \"750c480c-359c-47cc-9cc1-72c36bc5c783\" (UID: \"750c480c-359c-47cc-9cc1-72c36bc5c783\") " Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.651139 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/750c480c-359c-47cc-9cc1-72c36bc5c783-utilities" (OuterVolumeSpecName: "utilities") pod "750c480c-359c-47cc-9cc1-72c36bc5c783" (UID: "750c480c-359c-47cc-9cc1-72c36bc5c783"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.658597 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/750c480c-359c-47cc-9cc1-72c36bc5c783-kube-api-access-cp5n7" (OuterVolumeSpecName: "kube-api-access-cp5n7") pod "750c480c-359c-47cc-9cc1-72c36bc5c783" (UID: "750c480c-359c-47cc-9cc1-72c36bc5c783"). InnerVolumeSpecName "kube-api-access-cp5n7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.705280 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/750c480c-359c-47cc-9cc1-72c36bc5c783-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "750c480c-359c-47cc-9cc1-72c36bc5c783" (UID: "750c480c-359c-47cc-9cc1-72c36bc5c783"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.751055 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp5n7\" (UniqueName: \"kubernetes.io/projected/750c480c-359c-47cc-9cc1-72c36bc5c783-kube-api-access-cp5n7\") on node \"crc\" DevicePath \"\"" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.751075 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/750c480c-359c-47cc-9cc1-72c36bc5c783-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.751084 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/750c480c-359c-47cc-9cc1-72c36bc5c783-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.763138 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ltv7s" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.815789 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cjz6p" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.816233 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7sn8m" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.818782 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mv4vf" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.851926 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6559dcc4-e08f-4c1b-89b4-164673cd2ed0-marketplace-trusted-ca\") pod \"6559dcc4-e08f-4c1b-89b4-164673cd2ed0\" (UID: \"6559dcc4-e08f-4c1b-89b4-164673cd2ed0\") " Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.852634 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6559dcc4-e08f-4c1b-89b4-164673cd2ed0-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "6559dcc4-e08f-4c1b-89b4-164673cd2ed0" (UID: "6559dcc4-e08f-4c1b-89b4-164673cd2ed0"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.852693 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqm2q\" (UniqueName: \"kubernetes.io/projected/6559dcc4-e08f-4c1b-89b4-164673cd2ed0-kube-api-access-nqm2q\") pod \"6559dcc4-e08f-4c1b-89b4-164673cd2ed0\" (UID: \"6559dcc4-e08f-4c1b-89b4-164673cd2ed0\") " Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.853225 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9kns\" (UniqueName: \"kubernetes.io/projected/59a1480d-000f-481e-b287-78e39812c69b-kube-api-access-b9kns\") pod \"59a1480d-000f-481e-b287-78e39812c69b\" (UID: \"59a1480d-000f-481e-b287-78e39812c69b\") " Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.853282 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82a7bf20-8db7-4d0f-91d4-a85ae5da91f5-utilities\") pod \"82a7bf20-8db7-4d0f-91d4-a85ae5da91f5\" (UID: \"82a7bf20-8db7-4d0f-91d4-a85ae5da91f5\") " Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.853303 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82a7bf20-8db7-4d0f-91d4-a85ae5da91f5-catalog-content\") pod \"82a7bf20-8db7-4d0f-91d4-a85ae5da91f5\" (UID: \"82a7bf20-8db7-4d0f-91d4-a85ae5da91f5\") " Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.853335 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-956jm\" (UniqueName: \"kubernetes.io/projected/82a7bf20-8db7-4d0f-91d4-a85ae5da91f5-kube-api-access-956jm\") pod \"82a7bf20-8db7-4d0f-91d4-a85ae5da91f5\" (UID: \"82a7bf20-8db7-4d0f-91d4-a85ae5da91f5\") " Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.853386 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xngdn\" (UniqueName: \"kubernetes.io/projected/81c40f7e-bff1-432e-95e4-dfeeba942abc-kube-api-access-xngdn\") pod \"81c40f7e-bff1-432e-95e4-dfeeba942abc\" (UID: \"81c40f7e-bff1-432e-95e4-dfeeba942abc\") " Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.853419 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59a1480d-000f-481e-b287-78e39812c69b-utilities\") pod \"59a1480d-000f-481e-b287-78e39812c69b\" (UID: \"59a1480d-000f-481e-b287-78e39812c69b\") " Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.853434 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81c40f7e-bff1-432e-95e4-dfeeba942abc-utilities\") pod \"81c40f7e-bff1-432e-95e4-dfeeba942abc\" (UID: \"81c40f7e-bff1-432e-95e4-dfeeba942abc\") " Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.853464 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59a1480d-000f-481e-b287-78e39812c69b-catalog-content\") pod \"59a1480d-000f-481e-b287-78e39812c69b\" (UID: \"59a1480d-000f-481e-b287-78e39812c69b\") " Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.853512 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81c40f7e-bff1-432e-95e4-dfeeba942abc-catalog-content\") pod 
\"81c40f7e-bff1-432e-95e4-dfeeba942abc\" (UID: \"81c40f7e-bff1-432e-95e4-dfeeba942abc\") " Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.853545 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6559dcc4-e08f-4c1b-89b4-164673cd2ed0-marketplace-operator-metrics\") pod \"6559dcc4-e08f-4c1b-89b4-164673cd2ed0\" (UID: \"6559dcc4-e08f-4c1b-89b4-164673cd2ed0\") " Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.854208 4789 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6559dcc4-e08f-4c1b-89b4-164673cd2ed0-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.854258 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59a1480d-000f-481e-b287-78e39812c69b-utilities" (OuterVolumeSpecName: "utilities") pod "59a1480d-000f-481e-b287-78e39812c69b" (UID: "59a1480d-000f-481e-b287-78e39812c69b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.855189 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82a7bf20-8db7-4d0f-91d4-a85ae5da91f5-utilities" (OuterVolumeSpecName: "utilities") pod "82a7bf20-8db7-4d0f-91d4-a85ae5da91f5" (UID: "82a7bf20-8db7-4d0f-91d4-a85ae5da91f5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.855206 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6559dcc4-e08f-4c1b-89b4-164673cd2ed0-kube-api-access-nqm2q" (OuterVolumeSpecName: "kube-api-access-nqm2q") pod "6559dcc4-e08f-4c1b-89b4-164673cd2ed0" (UID: "6559dcc4-e08f-4c1b-89b4-164673cd2ed0"). InnerVolumeSpecName "kube-api-access-nqm2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.855729 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59a1480d-000f-481e-b287-78e39812c69b-kube-api-access-b9kns" (OuterVolumeSpecName: "kube-api-access-b9kns") pod "59a1480d-000f-481e-b287-78e39812c69b" (UID: "59a1480d-000f-481e-b287-78e39812c69b"). InnerVolumeSpecName "kube-api-access-b9kns". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.856662 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81c40f7e-bff1-432e-95e4-dfeeba942abc-kube-api-access-xngdn" (OuterVolumeSpecName: "kube-api-access-xngdn") pod "81c40f7e-bff1-432e-95e4-dfeeba942abc" (UID: "81c40f7e-bff1-432e-95e4-dfeeba942abc"). InnerVolumeSpecName "kube-api-access-xngdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.856793 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81c40f7e-bff1-432e-95e4-dfeeba942abc-utilities" (OuterVolumeSpecName: "utilities") pod "81c40f7e-bff1-432e-95e4-dfeeba942abc" (UID: "81c40f7e-bff1-432e-95e4-dfeeba942abc"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.863461 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6559dcc4-e08f-4c1b-89b4-164673cd2ed0-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "6559dcc4-e08f-4c1b-89b4-164673cd2ed0" (UID: "6559dcc4-e08f-4c1b-89b4-164673cd2ed0"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.883161 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59a1480d-000f-481e-b287-78e39812c69b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "59a1480d-000f-481e-b287-78e39812c69b" (UID: "59a1480d-000f-481e-b287-78e39812c69b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.883540 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82a7bf20-8db7-4d0f-91d4-a85ae5da91f5-kube-api-access-956jm" (OuterVolumeSpecName: "kube-api-access-956jm") pod "82a7bf20-8db7-4d0f-91d4-a85ae5da91f5" (UID: "82a7bf20-8db7-4d0f-91d4-a85ae5da91f5"). InnerVolumeSpecName "kube-api-access-956jm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.921421 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82a7bf20-8db7-4d0f-91d4-a85ae5da91f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "82a7bf20-8db7-4d0f-91d4-a85ae5da91f5" (UID: "82a7bf20-8db7-4d0f-91d4-a85ae5da91f5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.955533 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59a1480d-000f-481e-b287-78e39812c69b-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.955571 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81c40f7e-bff1-432e-95e4-dfeeba942abc-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.955666 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59a1480d-000f-481e-b287-78e39812c69b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.955682 4789 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6559dcc4-e08f-4c1b-89b4-164673cd2ed0-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.955695 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqm2q\" (UniqueName: \"kubernetes.io/projected/6559dcc4-e08f-4c1b-89b4-164673cd2ed0-kube-api-access-nqm2q\") on node \"crc\" DevicePath \"\"" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.955707 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9kns\" (UniqueName: \"kubernetes.io/projected/59a1480d-000f-481e-b287-78e39812c69b-kube-api-access-b9kns\") on node \"crc\" DevicePath \"\"" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.955744 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82a7bf20-8db7-4d0f-91d4-a85ae5da91f5-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.955755 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82a7bf20-8db7-4d0f-91d4-a85ae5da91f5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.955766 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-956jm\" (UniqueName: \"kubernetes.io/projected/82a7bf20-8db7-4d0f-91d4-a85ae5da91f5-kube-api-access-956jm\") on node \"crc\" DevicePath \"\"" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.955777 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xngdn\" (UniqueName: \"kubernetes.io/projected/81c40f7e-bff1-432e-95e4-dfeeba942abc-kube-api-access-xngdn\") on node \"crc\" DevicePath \"\"" Feb 02 21:25:46 crc kubenswrapper[4789]: I0202 21:25:46.989251 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81c40f7e-bff1-432e-95e4-dfeeba942abc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "81c40f7e-bff1-432e-95e4-dfeeba942abc" (UID: "81c40f7e-bff1-432e-95e4-dfeeba942abc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:25:47 crc kubenswrapper[4789]: I0202 21:25:47.057085 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81c40f7e-bff1-432e-95e4-dfeeba942abc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 21:25:47 crc kubenswrapper[4789]: I0202 21:25:47.135261 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jppg9"] Feb 02 21:25:47 crc kubenswrapper[4789]: W0202 21:25:47.148319 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbaf8f0ee_a9ae_4e65_ad37_b9cea71e0a91.slice/crio-b0da1ed2f14df4a00da6a6a2fd0c6376fcf75e6e4fc96fef641094d49e5ebbe0 WatchSource:0}: Error finding container b0da1ed2f14df4a00da6a6a2fd0c6376fcf75e6e4fc96fef641094d49e5ebbe0: Status 404 returned error can't find the container with id b0da1ed2f14df4a00da6a6a2fd0c6376fcf75e6e4fc96fef641094d49e5ebbe0 Feb 02 21:25:47 crc kubenswrapper[4789]: I0202 21:25:47.296105 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cjz6p" Feb 02 21:25:47 crc kubenswrapper[4789]: I0202 21:25:47.296096 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cjz6p" event={"ID":"81c40f7e-bff1-432e-95e4-dfeeba942abc","Type":"ContainerDied","Data":"aa82d38cd7d5694d19bc10855518627433f21b7a691f629a04542c46888cb279"} Feb 02 21:25:47 crc kubenswrapper[4789]: I0202 21:25:47.296270 4789 scope.go:117] "RemoveContainer" containerID="3d7b48a2741295411897bad63f9bfcec9147766f4a7ea8d8866401ec5c0eed52" Feb 02 21:25:47 crc kubenswrapper[4789]: I0202 21:25:47.302655 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f288q" Feb 02 21:25:47 crc kubenswrapper[4789]: I0202 21:25:47.302691 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f288q" event={"ID":"750c480c-359c-47cc-9cc1-72c36bc5c783","Type":"ContainerDied","Data":"8b5df91df219f7b29539d2e1ff442eb9b9a5aaefb0d16ccec37b5113b8f4eb46"} Feb 02 21:25:47 crc kubenswrapper[4789]: I0202 21:25:47.306237 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7sn8m" Feb 02 21:25:47 crc kubenswrapper[4789]: I0202 21:25:47.306418 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7sn8m" event={"ID":"6559dcc4-e08f-4c1b-89b4-164673cd2ed0","Type":"ContainerDied","Data":"b635fe6fefad0679163d7c23ad1928b33aeecf8c80d7d0c4c0a1d765f5bd38e6"} Feb 02 21:25:47 crc kubenswrapper[4789]: I0202 21:25:47.314359 4789 scope.go:117] "RemoveContainer" containerID="c98eb539a295af5ef6314ed1f2b8c46ff441391f9a308f81c1104760af7dd3c9" Feb 02 21:25:47 crc kubenswrapper[4789]: I0202 21:25:47.316080 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ltv7s" event={"ID":"82a7bf20-8db7-4d0f-91d4-a85ae5da91f5","Type":"ContainerDied","Data":"f4244784b7875e587ec8991b6b1afb274bd5e6687bb0654f26fa6cacdeb20339"} Feb 02 21:25:47 crc kubenswrapper[4789]: I0202 21:25:47.316387 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ltv7s" Feb 02 21:25:47 crc kubenswrapper[4789]: I0202 21:25:47.337280 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jppg9" event={"ID":"baf8f0ee-a9ae-4e65-ad37-b9cea71e0a91","Type":"ContainerStarted","Data":"b0da1ed2f14df4a00da6a6a2fd0c6376fcf75e6e4fc96fef641094d49e5ebbe0"} Feb 02 21:25:47 crc kubenswrapper[4789]: I0202 21:25:47.346104 4789 scope.go:117] "RemoveContainer" containerID="6bd97d95fbba2c3f0134efa321d5fe74ff2fa932015c2419f1c36d699153f8b7" Feb 02 21:25:47 crc kubenswrapper[4789]: I0202 21:25:47.354188 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mv4vf" event={"ID":"59a1480d-000f-481e-b287-78e39812c69b","Type":"ContainerDied","Data":"3674216b6c8bcce1577500b22c8276193896fdc7edea55471197014ec86052f2"} Feb 02 21:25:47 crc kubenswrapper[4789]: I0202 21:25:47.354373 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mv4vf" Feb 02 21:25:47 crc kubenswrapper[4789]: I0202 21:25:47.368528 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cjz6p"] Feb 02 21:25:47 crc kubenswrapper[4789]: I0202 21:25:47.376646 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cjz6p"] Feb 02 21:25:47 crc kubenswrapper[4789]: I0202 21:25:47.381086 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7sn8m"] Feb 02 21:25:47 crc kubenswrapper[4789]: I0202 21:25:47.387472 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7sn8m"] Feb 02 21:25:47 crc kubenswrapper[4789]: I0202 21:25:47.390820 4789 scope.go:117] "RemoveContainer" containerID="8308fd1a7bb9bcaa33faa9a92962b285bc117f943be24e2dbb1701244f146a4b" Feb 02 21:25:47 crc kubenswrapper[4789]: I0202 21:25:47.392821 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ltv7s"] Feb 02 21:25:47 crc kubenswrapper[4789]: I0202 21:25:47.396566 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ltv7s"] Feb 02 21:25:47 crc kubenswrapper[4789]: I0202 21:25:47.404488 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f288q"] Feb 02 21:25:47 crc kubenswrapper[4789]: I0202 21:25:47.408060 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f288q"] Feb 02 21:25:47 crc kubenswrapper[4789]: I0202 21:25:47.408393 4789 scope.go:117] "RemoveContainer" containerID="6e1788092a8d54e865f11e8456b7f7d7a70047411a71b52076a35fa69eb69730" Feb 02 21:25:47 crc kubenswrapper[4789]: I0202 21:25:47.423635 4789 scope.go:117] "RemoveContainer" containerID="70c86c8637f3df270c79ff053b0ebd691278cf835acb12e8e71ff091b0fbb12a" Feb 02 21:25:47 crc kubenswrapper[4789]: I0202 21:25:47.426965 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mv4vf"] Feb 02 21:25:47 crc kubenswrapper[4789]: I0202 21:25:47.430444 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mv4vf"] Feb 02 21:25:47 crc kubenswrapper[4789]: I0202 21:25:47.439443 4789 scope.go:117] "RemoveContainer" 
containerID="83d4500eef1a1480016f0a774d8f505793fe5d64c66c533f6875f2d3f0ee8b35" Feb 02 21:25:47 crc kubenswrapper[4789]: I0202 21:25:47.452908 4789 scope.go:117] "RemoveContainer" containerID="2a4cab794383b216ccbb9f3936ff4f7b6210e83ed15250e69065668dbfed4b6b" Feb 02 21:25:47 crc kubenswrapper[4789]: I0202 21:25:47.466973 4789 scope.go:117] "RemoveContainer" containerID="bbd9b82a8be7a356538c1ecaa75712512788d43e3d8b235627a7ec86676d073d" Feb 02 21:25:47 crc kubenswrapper[4789]: I0202 21:25:47.484358 4789 scope.go:117] "RemoveContainer" containerID="cc7ce86839b24c0822b5912b8064ac65ca86e299b2dec7656e0916bb4c8d7c05" Feb 02 21:25:47 crc kubenswrapper[4789]: I0202 21:25:47.499605 4789 scope.go:117] "RemoveContainer" containerID="65a1c1171f5c0fda293b842ce4780d9e22281e43ce2d206959577566ddc7dd1a" Feb 02 21:25:47 crc kubenswrapper[4789]: I0202 21:25:47.517547 4789 scope.go:117] "RemoveContainer" containerID="68dd3d2a2137f4d53e49382bd30825efd92ef65ce16f0c02a3d69aeb6577eb4b" Feb 02 21:25:47 crc kubenswrapper[4789]: I0202 21:25:47.544117 4789 scope.go:117] "RemoveContainer" containerID="ae5c42fd421d97a919cfd4f5b8b12d5a58d19b41d5cc7fe4134f05a7d4819734" Feb 02 21:25:48 crc kubenswrapper[4789]: I0202 21:25:48.362824 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jppg9" event={"ID":"baf8f0ee-a9ae-4e65-ad37-b9cea71e0a91","Type":"ContainerStarted","Data":"6456a7ba444373b318be84e1471c10f8f0ddaf2dbbaa5343062b6a2c4f3c539e"} Feb 02 21:25:48 crc kubenswrapper[4789]: I0202 21:25:48.364838 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-jppg9" Feb 02 21:25:48 crc kubenswrapper[4789]: I0202 21:25:48.375481 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-jppg9" Feb 02 21:25:48 crc kubenswrapper[4789]: I0202 21:25:48.382724 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-jppg9" podStartSLOduration=2.382708851 podStartE2EDuration="2.382708851s" podCreationTimestamp="2026-02-02 21:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:25:48.38232626 +0000 UTC m=+368.677351329" watchObservedRunningTime="2026-02-02 21:25:48.382708851 +0000 UTC m=+368.677733870" Feb 02 21:25:48 crc kubenswrapper[4789]: I0202 21:25:48.430073 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59a1480d-000f-481e-b287-78e39812c69b" path="/var/lib/kubelet/pods/59a1480d-000f-481e-b287-78e39812c69b/volumes" Feb 02 21:25:48 crc kubenswrapper[4789]: I0202 21:25:48.431890 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6559dcc4-e08f-4c1b-89b4-164673cd2ed0" path="/var/lib/kubelet/pods/6559dcc4-e08f-4c1b-89b4-164673cd2ed0/volumes" Feb 02 21:25:48 crc kubenswrapper[4789]: I0202 21:25:48.433020 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="750c480c-359c-47cc-9cc1-72c36bc5c783" path="/var/lib/kubelet/pods/750c480c-359c-47cc-9cc1-72c36bc5c783/volumes" Feb 02 21:25:48 crc kubenswrapper[4789]: I0202 21:25:48.435429 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81c40f7e-bff1-432e-95e4-dfeeba942abc" path="/var/lib/kubelet/pods/81c40f7e-bff1-432e-95e4-dfeeba942abc/volumes" Feb 02 21:25:48 crc kubenswrapper[4789]: I0202 21:25:48.436850 4789 
Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.590840 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mqzzd"]
Feb 02 21:25:52 crc kubenswrapper[4789]: E0202 21:25:52.591598 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="750c480c-359c-47cc-9cc1-72c36bc5c783" containerName="extract-content"
Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.591615 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="750c480c-359c-47cc-9cc1-72c36bc5c783" containerName="extract-content"
Feb 02 21:25:52 crc kubenswrapper[4789]: E0202 21:25:52.591628 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82a7bf20-8db7-4d0f-91d4-a85ae5da91f5" containerName="extract-content"
Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.591634 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="82a7bf20-8db7-4d0f-91d4-a85ae5da91f5" containerName="extract-content"
Feb 02 21:25:52 crc kubenswrapper[4789]: E0202 21:25:52.591650 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6559dcc4-e08f-4c1b-89b4-164673cd2ed0" containerName="marketplace-operator"
Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.591657 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="6559dcc4-e08f-4c1b-89b4-164673cd2ed0" containerName="marketplace-operator"
Feb 02 21:25:52 crc kubenswrapper[4789]: E0202 21:25:52.591668 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81c40f7e-bff1-432e-95e4-dfeeba942abc" containerName="registry-server"
Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.591675 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="81c40f7e-bff1-432e-95e4-dfeeba942abc" containerName="registry-server"
Feb 02 21:25:52 crc kubenswrapper[4789]: E0202 21:25:52.591687 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81c40f7e-bff1-432e-95e4-dfeeba942abc" containerName="extract-utilities"
Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.591693 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="81c40f7e-bff1-432e-95e4-dfeeba942abc" containerName="extract-utilities"
Feb 02 21:25:52 crc kubenswrapper[4789]: E0202 21:25:52.591703 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82a7bf20-8db7-4d0f-91d4-a85ae5da91f5" containerName="extract-utilities"
Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.591711 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="82a7bf20-8db7-4d0f-91d4-a85ae5da91f5" containerName="extract-utilities"
Feb 02 21:25:52 crc kubenswrapper[4789]: E0202 21:25:52.591721 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59a1480d-000f-481e-b287-78e39812c69b" containerName="extract-content"
Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.591727 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="59a1480d-000f-481e-b287-78e39812c69b" containerName="extract-content"
Feb 02 21:25:52 crc kubenswrapper[4789]: E0202 21:25:52.591737 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82a7bf20-8db7-4d0f-91d4-a85ae5da91f5" containerName="registry-server"
Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.591744 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="82a7bf20-8db7-4d0f-91d4-a85ae5da91f5" containerName="registry-server"
Feb 02 21:25:52 crc kubenswrapper[4789]: E0202 21:25:52.591753 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59a1480d-000f-481e-b287-78e39812c69b" containerName="extract-utilities"
Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.591760 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="59a1480d-000f-481e-b287-78e39812c69b" containerName="extract-utilities"
Feb 02 21:25:52 crc kubenswrapper[4789]: E0202 21:25:52.591767 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="750c480c-359c-47cc-9cc1-72c36bc5c783" containerName="registry-server"
Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.591774 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="750c480c-359c-47cc-9cc1-72c36bc5c783" containerName="registry-server"
Feb 02 21:25:52 crc kubenswrapper[4789]: E0202 21:25:52.591781 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="750c480c-359c-47cc-9cc1-72c36bc5c783" containerName="extract-utilities"
Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.591788 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="750c480c-359c-47cc-9cc1-72c36bc5c783" containerName="extract-utilities"
Feb 02 21:25:52 crc kubenswrapper[4789]: E0202 21:25:52.591799 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81c40f7e-bff1-432e-95e4-dfeeba942abc" containerName="extract-content"
Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.591806 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="81c40f7e-bff1-432e-95e4-dfeeba942abc" containerName="extract-content"
Feb 02 21:25:52 crc kubenswrapper[4789]: E0202 21:25:52.591816 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59a1480d-000f-481e-b287-78e39812c69b" containerName="registry-server"
Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.591823 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="59a1480d-000f-481e-b287-78e39812c69b" containerName="registry-server"
Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.591925 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="6559dcc4-e08f-4c1b-89b4-164673cd2ed0" containerName="marketplace-operator"
Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.591937 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="81c40f7e-bff1-432e-95e4-dfeeba942abc" containerName="registry-server"
Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.591947 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="59a1480d-000f-481e-b287-78e39812c69b" containerName="registry-server"
Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.591957 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="750c480c-359c-47cc-9cc1-72c36bc5c783" containerName="registry-server"
Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.591966 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="82a7bf20-8db7-4d0f-91d4-a85ae5da91f5" containerName="registry-server"
Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.592386 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-mqzzd"
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-mqzzd" Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.610415 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mqzzd"] Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.633516 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/860f799b-d79f-4bd5-a639-9cb8556c8403-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mqzzd\" (UID: \"860f799b-d79f-4bd5-a639-9cb8556c8403\") " pod="openshift-image-registry/image-registry-66df7c8f76-mqzzd" Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.633566 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/860f799b-d79f-4bd5-a639-9cb8556c8403-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mqzzd\" (UID: \"860f799b-d79f-4bd5-a639-9cb8556c8403\") " pod="openshift-image-registry/image-registry-66df7c8f76-mqzzd" Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.633610 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/860f799b-d79f-4bd5-a639-9cb8556c8403-trusted-ca\") pod \"image-registry-66df7c8f76-mqzzd\" (UID: \"860f799b-d79f-4bd5-a639-9cb8556c8403\") " pod="openshift-image-registry/image-registry-66df7c8f76-mqzzd" Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.633628 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c59g\" (UniqueName: \"kubernetes.io/projected/860f799b-d79f-4bd5-a639-9cb8556c8403-kube-api-access-6c59g\") pod \"image-registry-66df7c8f76-mqzzd\" (UID: \"860f799b-d79f-4bd5-a639-9cb8556c8403\") " pod="openshift-image-registry/image-registry-66df7c8f76-mqzzd" Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.633642 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/860f799b-d79f-4bd5-a639-9cb8556c8403-bound-sa-token\") pod \"image-registry-66df7c8f76-mqzzd\" (UID: \"860f799b-d79f-4bd5-a639-9cb8556c8403\") " pod="openshift-image-registry/image-registry-66df7c8f76-mqzzd" Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.633681 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/860f799b-d79f-4bd5-a639-9cb8556c8403-registry-tls\") pod \"image-registry-66df7c8f76-mqzzd\" (UID: \"860f799b-d79f-4bd5-a639-9cb8556c8403\") " pod="openshift-image-registry/image-registry-66df7c8f76-mqzzd" Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.633703 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/860f799b-d79f-4bd5-a639-9cb8556c8403-registry-certificates\") pod \"image-registry-66df7c8f76-mqzzd\" (UID: \"860f799b-d79f-4bd5-a639-9cb8556c8403\") " pod="openshift-image-registry/image-registry-66df7c8f76-mqzzd" Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.633724 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-mqzzd\" (UID: \"860f799b-d79f-4bd5-a639-9cb8556c8403\") " pod="openshift-image-registry/image-registry-66df7c8f76-mqzzd" Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.658323 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-mqzzd\" (UID: \"860f799b-d79f-4bd5-a639-9cb8556c8403\") " pod="openshift-image-registry/image-registry-66df7c8f76-mqzzd" Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.734978 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/860f799b-d79f-4bd5-a639-9cb8556c8403-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mqzzd\" (UID: \"860f799b-d79f-4bd5-a639-9cb8556c8403\") " pod="openshift-image-registry/image-registry-66df7c8f76-mqzzd" Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.735019 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/860f799b-d79f-4bd5-a639-9cb8556c8403-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mqzzd\" (UID: \"860f799b-d79f-4bd5-a639-9cb8556c8403\") " pod="openshift-image-registry/image-registry-66df7c8f76-mqzzd" Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.735046 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/860f799b-d79f-4bd5-a639-9cb8556c8403-trusted-ca\") pod \"image-registry-66df7c8f76-mqzzd\" (UID: \"860f799b-d79f-4bd5-a639-9cb8556c8403\") " pod="openshift-image-registry/image-registry-66df7c8f76-mqzzd" Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.735067 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c59g\" (UniqueName: \"kubernetes.io/projected/860f799b-d79f-4bd5-a639-9cb8556c8403-kube-api-access-6c59g\") pod \"image-registry-66df7c8f76-mqzzd\" (UID: \"860f799b-d79f-4bd5-a639-9cb8556c8403\") " pod="openshift-image-registry/image-registry-66df7c8f76-mqzzd" Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.735083 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/860f799b-d79f-4bd5-a639-9cb8556c8403-bound-sa-token\") pod \"image-registry-66df7c8f76-mqzzd\" (UID: \"860f799b-d79f-4bd5-a639-9cb8556c8403\") " pod="openshift-image-registry/image-registry-66df7c8f76-mqzzd" Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.735126 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/860f799b-d79f-4bd5-a639-9cb8556c8403-registry-tls\") pod \"image-registry-66df7c8f76-mqzzd\" (UID: \"860f799b-d79f-4bd5-a639-9cb8556c8403\") " pod="openshift-image-registry/image-registry-66df7c8f76-mqzzd" Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.735152 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/860f799b-d79f-4bd5-a639-9cb8556c8403-registry-certificates\") pod \"image-registry-66df7c8f76-mqzzd\" (UID: \"860f799b-d79f-4bd5-a639-9cb8556c8403\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-mqzzd" Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.735477 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/860f799b-d79f-4bd5-a639-9cb8556c8403-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mqzzd\" (UID: \"860f799b-d79f-4bd5-a639-9cb8556c8403\") " pod="openshift-image-registry/image-registry-66df7c8f76-mqzzd" Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.736308 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/860f799b-d79f-4bd5-a639-9cb8556c8403-registry-certificates\") pod \"image-registry-66df7c8f76-mqzzd\" (UID: \"860f799b-d79f-4bd5-a639-9cb8556c8403\") " pod="openshift-image-registry/image-registry-66df7c8f76-mqzzd" Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.737348 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/860f799b-d79f-4bd5-a639-9cb8556c8403-trusted-ca\") pod \"image-registry-66df7c8f76-mqzzd\" (UID: \"860f799b-d79f-4bd5-a639-9cb8556c8403\") " pod="openshift-image-registry/image-registry-66df7c8f76-mqzzd" Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.740974 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/860f799b-d79f-4bd5-a639-9cb8556c8403-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mqzzd\" (UID: \"860f799b-d79f-4bd5-a639-9cb8556c8403\") " pod="openshift-image-registry/image-registry-66df7c8f76-mqzzd" Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.742035 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/860f799b-d79f-4bd5-a639-9cb8556c8403-registry-tls\") pod \"image-registry-66df7c8f76-mqzzd\" (UID: \"860f799b-d79f-4bd5-a639-9cb8556c8403\") " pod="openshift-image-registry/image-registry-66df7c8f76-mqzzd" Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.750247 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/860f799b-d79f-4bd5-a639-9cb8556c8403-bound-sa-token\") pod \"image-registry-66df7c8f76-mqzzd\" (UID: \"860f799b-d79f-4bd5-a639-9cb8556c8403\") " pod="openshift-image-registry/image-registry-66df7c8f76-mqzzd" Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.754211 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c59g\" (UniqueName: \"kubernetes.io/projected/860f799b-d79f-4bd5-a639-9cb8556c8403-kube-api-access-6c59g\") pod \"image-registry-66df7c8f76-mqzzd\" (UID: \"860f799b-d79f-4bd5-a639-9cb8556c8403\") " pod="openshift-image-registry/image-registry-66df7c8f76-mqzzd" Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.841321 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.841406 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 21:25:52 crc kubenswrapper[4789]: I0202 21:25:52.911342 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-mqzzd" Feb 02 21:25:53 crc kubenswrapper[4789]: I0202 21:25:53.336291 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mqzzd"] Feb 02 21:25:53 crc kubenswrapper[4789]: I0202 21:25:53.397243 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-mqzzd" event={"ID":"860f799b-d79f-4bd5-a639-9cb8556c8403","Type":"ContainerStarted","Data":"f7a7792fb99b4b6f05a696618b0c26743a43be260d1b7105f922893ef605a037"} Feb 02 21:25:54 crc kubenswrapper[4789]: I0202 21:25:54.411859 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-mqzzd" event={"ID":"860f799b-d79f-4bd5-a639-9cb8556c8403","Type":"ContainerStarted","Data":"3a90a943d945492aa58919dd13e076bad7f7aa5ac7b48c56403dc95736d94612"} Feb 02 21:25:54 crc kubenswrapper[4789]: I0202 21:25:54.414043 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-mqzzd" Feb 02 21:25:54 crc kubenswrapper[4789]: I0202 21:25:54.442477 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-mqzzd" podStartSLOduration=2.442454388 podStartE2EDuration="2.442454388s" podCreationTimestamp="2026-02-02 21:25:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:25:54.437207918 +0000 UTC m=+374.732232977" watchObservedRunningTime="2026-02-02 21:25:54.442454388 +0000 UTC m=+374.737479447" Feb 02 21:26:03 crc kubenswrapper[4789]: I0202 21:26:03.794286 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jm6mv"] Feb 02 21:26:03 crc kubenswrapper[4789]: I0202 21:26:03.797268 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jm6mv" Feb 02 21:26:03 crc kubenswrapper[4789]: I0202 21:26:03.800416 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 02 21:26:03 crc kubenswrapper[4789]: I0202 21:26:03.809139 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jm6mv"] Feb 02 21:26:03 crc kubenswrapper[4789]: I0202 21:26:03.903012 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6abf2c76-f615-4359-b413-545477a9a5c9-catalog-content\") pod \"community-operators-jm6mv\" (UID: \"6abf2c76-f615-4359-b413-545477a9a5c9\") " pod="openshift-marketplace/community-operators-jm6mv" Feb 02 21:26:03 crc kubenswrapper[4789]: I0202 21:26:03.903112 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6abf2c76-f615-4359-b413-545477a9a5c9-utilities\") pod \"community-operators-jm6mv\" (UID: \"6abf2c76-f615-4359-b413-545477a9a5c9\") " pod="openshift-marketplace/community-operators-jm6mv" Feb 02 21:26:03 crc kubenswrapper[4789]: I0202 21:26:03.903176 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n85ph\" (UniqueName: \"kubernetes.io/projected/6abf2c76-f615-4359-b413-545477a9a5c9-kube-api-access-n85ph\") pod \"community-operators-jm6mv\" (UID: \"6abf2c76-f615-4359-b413-545477a9a5c9\") " pod="openshift-marketplace/community-operators-jm6mv" Feb 02 21:26:04 crc kubenswrapper[4789]: I0202 21:26:04.004595 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6abf2c76-f615-4359-b413-545477a9a5c9-catalog-content\") pod \"community-operators-jm6mv\" (UID: \"6abf2c76-f615-4359-b413-545477a9a5c9\") " pod="openshift-marketplace/community-operators-jm6mv" Feb 02 21:26:04 crc kubenswrapper[4789]: I0202 21:26:04.004670 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6abf2c76-f615-4359-b413-545477a9a5c9-utilities\") pod \"community-operators-jm6mv\" (UID: \"6abf2c76-f615-4359-b413-545477a9a5c9\") " pod="openshift-marketplace/community-operators-jm6mv" Feb 02 21:26:04 crc kubenswrapper[4789]: I0202 21:26:04.004714 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n85ph\" (UniqueName: \"kubernetes.io/projected/6abf2c76-f615-4359-b413-545477a9a5c9-kube-api-access-n85ph\") pod \"community-operators-jm6mv\" (UID: \"6abf2c76-f615-4359-b413-545477a9a5c9\") " pod="openshift-marketplace/community-operators-jm6mv" Feb 02 21:26:04 crc kubenswrapper[4789]: I0202 21:26:04.005355 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6abf2c76-f615-4359-b413-545477a9a5c9-catalog-content\") pod \"community-operators-jm6mv\" (UID: \"6abf2c76-f615-4359-b413-545477a9a5c9\") " pod="openshift-marketplace/community-operators-jm6mv" Feb 02 21:26:04 crc kubenswrapper[4789]: I0202 21:26:04.005848 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6abf2c76-f615-4359-b413-545477a9a5c9-utilities\") pod \"community-operators-jm6mv\" (UID: 
\"6abf2c76-f615-4359-b413-545477a9a5c9\") " pod="openshift-marketplace/community-operators-jm6mv" Feb 02 21:26:04 crc kubenswrapper[4789]: I0202 21:26:04.040405 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n85ph\" (UniqueName: \"kubernetes.io/projected/6abf2c76-f615-4359-b413-545477a9a5c9-kube-api-access-n85ph\") pod \"community-operators-jm6mv\" (UID: \"6abf2c76-f615-4359-b413-545477a9a5c9\") " pod="openshift-marketplace/community-operators-jm6mv" Feb 02 21:26:04 crc kubenswrapper[4789]: I0202 21:26:04.122909 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jm6mv" Feb 02 21:26:04 crc kubenswrapper[4789]: I0202 21:26:04.386821 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hh84g"] Feb 02 21:26:04 crc kubenswrapper[4789]: I0202 21:26:04.387760 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hh84g" Feb 02 21:26:04 crc kubenswrapper[4789]: I0202 21:26:04.389236 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 02 21:26:04 crc kubenswrapper[4789]: I0202 21:26:04.406398 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hh84g"] Feb 02 21:26:04 crc kubenswrapper[4789]: I0202 21:26:04.510978 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpbq5\" (UniqueName: \"kubernetes.io/projected/40611cb2-5a59-49f8-905f-ce117f332665-kube-api-access-gpbq5\") pod \"redhat-operators-hh84g\" (UID: \"40611cb2-5a59-49f8-905f-ce117f332665\") " pod="openshift-marketplace/redhat-operators-hh84g" Feb 02 21:26:04 crc kubenswrapper[4789]: I0202 21:26:04.511286 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40611cb2-5a59-49f8-905f-ce117f332665-utilities\") pod \"redhat-operators-hh84g\" (UID: \"40611cb2-5a59-49f8-905f-ce117f332665\") " pod="openshift-marketplace/redhat-operators-hh84g" Feb 02 21:26:04 crc kubenswrapper[4789]: I0202 21:26:04.511376 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40611cb2-5a59-49f8-905f-ce117f332665-catalog-content\") pod \"redhat-operators-hh84g\" (UID: \"40611cb2-5a59-49f8-905f-ce117f332665\") " pod="openshift-marketplace/redhat-operators-hh84g" Feb 02 21:26:04 crc kubenswrapper[4789]: I0202 21:26:04.611976 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jm6mv"] Feb 02 21:26:04 crc kubenswrapper[4789]: I0202 21:26:04.612378 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpbq5\" (UniqueName: \"kubernetes.io/projected/40611cb2-5a59-49f8-905f-ce117f332665-kube-api-access-gpbq5\") pod \"redhat-operators-hh84g\" (UID: \"40611cb2-5a59-49f8-905f-ce117f332665\") " pod="openshift-marketplace/redhat-operators-hh84g" Feb 02 21:26:04 crc kubenswrapper[4789]: I0202 21:26:04.612489 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40611cb2-5a59-49f8-905f-ce117f332665-utilities\") pod \"redhat-operators-hh84g\" (UID: \"40611cb2-5a59-49f8-905f-ce117f332665\") " 
pod="openshift-marketplace/redhat-operators-hh84g" Feb 02 21:26:04 crc kubenswrapper[4789]: I0202 21:26:04.612558 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40611cb2-5a59-49f8-905f-ce117f332665-catalog-content\") pod \"redhat-operators-hh84g\" (UID: \"40611cb2-5a59-49f8-905f-ce117f332665\") " pod="openshift-marketplace/redhat-operators-hh84g" Feb 02 21:26:04 crc kubenswrapper[4789]: I0202 21:26:04.613322 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40611cb2-5a59-49f8-905f-ce117f332665-utilities\") pod \"redhat-operators-hh84g\" (UID: \"40611cb2-5a59-49f8-905f-ce117f332665\") " pod="openshift-marketplace/redhat-operators-hh84g" Feb 02 21:26:04 crc kubenswrapper[4789]: I0202 21:26:04.613340 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40611cb2-5a59-49f8-905f-ce117f332665-catalog-content\") pod \"redhat-operators-hh84g\" (UID: \"40611cb2-5a59-49f8-905f-ce117f332665\") " pod="openshift-marketplace/redhat-operators-hh84g" Feb 02 21:26:04 crc kubenswrapper[4789]: I0202 21:26:04.649769 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpbq5\" (UniqueName: \"kubernetes.io/projected/40611cb2-5a59-49f8-905f-ce117f332665-kube-api-access-gpbq5\") pod \"redhat-operators-hh84g\" (UID: \"40611cb2-5a59-49f8-905f-ce117f332665\") " pod="openshift-marketplace/redhat-operators-hh84g" Feb 02 21:26:04 crc kubenswrapper[4789]: I0202 21:26:04.718158 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hh84g" Feb 02 21:26:05 crc kubenswrapper[4789]: I0202 21:26:05.158009 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hh84g"] Feb 02 21:26:05 crc kubenswrapper[4789]: I0202 21:26:05.479808 4789 generic.go:334] "Generic (PLEG): container finished" podID="6abf2c76-f615-4359-b413-545477a9a5c9" containerID="d0ffeec50a30a5582a7ba2c0bb5b150ffbc05aabccf30e8b895a7935710ac3cf" exitCode=0 Feb 02 21:26:05 crc kubenswrapper[4789]: I0202 21:26:05.479935 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jm6mv" event={"ID":"6abf2c76-f615-4359-b413-545477a9a5c9","Type":"ContainerDied","Data":"d0ffeec50a30a5582a7ba2c0bb5b150ffbc05aabccf30e8b895a7935710ac3cf"} Feb 02 21:26:05 crc kubenswrapper[4789]: I0202 21:26:05.479967 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jm6mv" event={"ID":"6abf2c76-f615-4359-b413-545477a9a5c9","Type":"ContainerStarted","Data":"f745b3f4cef54c01fdff2eb9a760889cf207f72eb1a7f6270e570650629964c9"} Feb 02 21:26:05 crc kubenswrapper[4789]: I0202 21:26:05.488458 4789 generic.go:334] "Generic (PLEG): container finished" podID="40611cb2-5a59-49f8-905f-ce117f332665" containerID="a9341f2461fa8fc528bb513163de0a6b20734a7cd5ad875b4f6d055292eea880" exitCode=0 Feb 02 21:26:05 crc kubenswrapper[4789]: I0202 21:26:05.488525 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hh84g" event={"ID":"40611cb2-5a59-49f8-905f-ce117f332665","Type":"ContainerDied","Data":"a9341f2461fa8fc528bb513163de0a6b20734a7cd5ad875b4f6d055292eea880"} Feb 02 21:26:05 crc kubenswrapper[4789]: I0202 21:26:05.488552 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-hh84g" event={"ID":"40611cb2-5a59-49f8-905f-ce117f332665","Type":"ContainerStarted","Data":"fbc37a33609442b19f375e07fd19b075e7723fa519829a1147b08d7e4663a98d"} Feb 02 21:26:06 crc kubenswrapper[4789]: I0202 21:26:06.189733 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tq6p5"] Feb 02 21:26:06 crc kubenswrapper[4789]: I0202 21:26:06.191332 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tq6p5" Feb 02 21:26:06 crc kubenswrapper[4789]: I0202 21:26:06.193928 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 02 21:26:06 crc kubenswrapper[4789]: I0202 21:26:06.212508 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tq6p5"] Feb 02 21:26:06 crc kubenswrapper[4789]: I0202 21:26:06.336819 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8164a9db-3349-43ca-9927-b326f01ab26d-catalog-content\") pod \"certified-operators-tq6p5\" (UID: \"8164a9db-3349-43ca-9927-b326f01ab26d\") " pod="openshift-marketplace/certified-operators-tq6p5" Feb 02 21:26:06 crc kubenswrapper[4789]: I0202 21:26:06.337156 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhmwt\" (UniqueName: \"kubernetes.io/projected/8164a9db-3349-43ca-9927-b326f01ab26d-kube-api-access-mhmwt\") pod \"certified-operators-tq6p5\" (UID: \"8164a9db-3349-43ca-9927-b326f01ab26d\") " pod="openshift-marketplace/certified-operators-tq6p5" Feb 02 21:26:06 crc kubenswrapper[4789]: I0202 21:26:06.337214 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8164a9db-3349-43ca-9927-b326f01ab26d-utilities\") pod \"certified-operators-tq6p5\" (UID: \"8164a9db-3349-43ca-9927-b326f01ab26d\") " pod="openshift-marketplace/certified-operators-tq6p5" Feb 02 21:26:06 crc kubenswrapper[4789]: I0202 21:26:06.438241 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8164a9db-3349-43ca-9927-b326f01ab26d-utilities\") pod \"certified-operators-tq6p5\" (UID: \"8164a9db-3349-43ca-9927-b326f01ab26d\") " pod="openshift-marketplace/certified-operators-tq6p5" Feb 02 21:26:06 crc kubenswrapper[4789]: I0202 21:26:06.438302 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8164a9db-3349-43ca-9927-b326f01ab26d-catalog-content\") pod \"certified-operators-tq6p5\" (UID: \"8164a9db-3349-43ca-9927-b326f01ab26d\") " pod="openshift-marketplace/certified-operators-tq6p5" Feb 02 21:26:06 crc kubenswrapper[4789]: I0202 21:26:06.438333 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhmwt\" (UniqueName: \"kubernetes.io/projected/8164a9db-3349-43ca-9927-b326f01ab26d-kube-api-access-mhmwt\") pod \"certified-operators-tq6p5\" (UID: \"8164a9db-3349-43ca-9927-b326f01ab26d\") " pod="openshift-marketplace/certified-operators-tq6p5" Feb 02 21:26:06 crc kubenswrapper[4789]: I0202 21:26:06.438953 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/8164a9db-3349-43ca-9927-b326f01ab26d-catalog-content\") pod \"certified-operators-tq6p5\" (UID: \"8164a9db-3349-43ca-9927-b326f01ab26d\") " pod="openshift-marketplace/certified-operators-tq6p5" Feb 02 21:26:06 crc kubenswrapper[4789]: I0202 21:26:06.439381 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8164a9db-3349-43ca-9927-b326f01ab26d-utilities\") pod \"certified-operators-tq6p5\" (UID: \"8164a9db-3349-43ca-9927-b326f01ab26d\") " pod="openshift-marketplace/certified-operators-tq6p5" Feb 02 21:26:06 crc kubenswrapper[4789]: I0202 21:26:06.457877 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhmwt\" (UniqueName: \"kubernetes.io/projected/8164a9db-3349-43ca-9927-b326f01ab26d-kube-api-access-mhmwt\") pod \"certified-operators-tq6p5\" (UID: \"8164a9db-3349-43ca-9927-b326f01ab26d\") " pod="openshift-marketplace/certified-operators-tq6p5" Feb 02 21:26:06 crc kubenswrapper[4789]: I0202 21:26:06.497528 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hh84g" event={"ID":"40611cb2-5a59-49f8-905f-ce117f332665","Type":"ContainerStarted","Data":"47fafbc87bb2595374cc9a835fe209c6e7147f8d3b02ff487522715aa4c90b1d"} Feb 02 21:26:06 crc kubenswrapper[4789]: I0202 21:26:06.499625 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jm6mv" event={"ID":"6abf2c76-f615-4359-b413-545477a9a5c9","Type":"ContainerStarted","Data":"5c2484f5063ecab6568c41fb39496fdcf797cdc7932c41ee4c1803f07da13201"} Feb 02 21:26:06 crc kubenswrapper[4789]: I0202 21:26:06.504199 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tq6p5" Feb 02 21:26:06 crc kubenswrapper[4789]: I0202 21:26:06.790816 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-54dlc"] Feb 02 21:26:06 crc kubenswrapper[4789]: I0202 21:26:06.793880 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-54dlc" Feb 02 21:26:06 crc kubenswrapper[4789]: I0202 21:26:06.799786 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 02 21:26:06 crc kubenswrapper[4789]: I0202 21:26:06.807834 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-54dlc"] Feb 02 21:26:06 crc kubenswrapper[4789]: I0202 21:26:06.922166 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tq6p5"] Feb 02 21:26:06 crc kubenswrapper[4789]: W0202 21:26:06.926306 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8164a9db_3349_43ca_9927_b326f01ab26d.slice/crio-c212466fc2c396ac17e22e0cd09dbabe6feb9041542fa3c9b9e1ddfb96197eef WatchSource:0}: Error finding container c212466fc2c396ac17e22e0cd09dbabe6feb9041542fa3c9b9e1ddfb96197eef: Status 404 returned error can't find the container with id c212466fc2c396ac17e22e0cd09dbabe6feb9041542fa3c9b9e1ddfb96197eef Feb 02 21:26:06 crc kubenswrapper[4789]: I0202 21:26:06.944080 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaae7980-489d-4b4d-ae1f-02949a4f8e12-catalog-content\") pod \"redhat-marketplace-54dlc\" (UID: \"eaae7980-489d-4b4d-ae1f-02949a4f8e12\") " pod="openshift-marketplace/redhat-marketplace-54dlc" Feb 02 21:26:06 crc kubenswrapper[4789]: I0202 21:26:06.944128 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqdh4\" (UniqueName: \"kubernetes.io/projected/eaae7980-489d-4b4d-ae1f-02949a4f8e12-kube-api-access-tqdh4\") pod \"redhat-marketplace-54dlc\" (UID: \"eaae7980-489d-4b4d-ae1f-02949a4f8e12\") " pod="openshift-marketplace/redhat-marketplace-54dlc" Feb 02 21:26:06 crc kubenswrapper[4789]: I0202 21:26:06.944165 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaae7980-489d-4b4d-ae1f-02949a4f8e12-utilities\") pod \"redhat-marketplace-54dlc\" (UID: \"eaae7980-489d-4b4d-ae1f-02949a4f8e12\") " pod="openshift-marketplace/redhat-marketplace-54dlc" Feb 02 21:26:07 crc kubenswrapper[4789]: I0202 21:26:07.045280 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaae7980-489d-4b4d-ae1f-02949a4f8e12-catalog-content\") pod \"redhat-marketplace-54dlc\" (UID: \"eaae7980-489d-4b4d-ae1f-02949a4f8e12\") " pod="openshift-marketplace/redhat-marketplace-54dlc" Feb 02 21:26:07 crc kubenswrapper[4789]: I0202 21:26:07.045371 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqdh4\" (UniqueName: \"kubernetes.io/projected/eaae7980-489d-4b4d-ae1f-02949a4f8e12-kube-api-access-tqdh4\") pod \"redhat-marketplace-54dlc\" (UID: \"eaae7980-489d-4b4d-ae1f-02949a4f8e12\") " pod="openshift-marketplace/redhat-marketplace-54dlc" Feb 02 21:26:07 crc kubenswrapper[4789]: I0202 21:26:07.045430 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaae7980-489d-4b4d-ae1f-02949a4f8e12-utilities\") pod \"redhat-marketplace-54dlc\" (UID: \"eaae7980-489d-4b4d-ae1f-02949a4f8e12\") " 
pod="openshift-marketplace/redhat-marketplace-54dlc" Feb 02 21:26:07 crc kubenswrapper[4789]: I0202 21:26:07.046316 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaae7980-489d-4b4d-ae1f-02949a4f8e12-catalog-content\") pod \"redhat-marketplace-54dlc\" (UID: \"eaae7980-489d-4b4d-ae1f-02949a4f8e12\") " pod="openshift-marketplace/redhat-marketplace-54dlc" Feb 02 21:26:07 crc kubenswrapper[4789]: I0202 21:26:07.046796 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaae7980-489d-4b4d-ae1f-02949a4f8e12-utilities\") pod \"redhat-marketplace-54dlc\" (UID: \"eaae7980-489d-4b4d-ae1f-02949a4f8e12\") " pod="openshift-marketplace/redhat-marketplace-54dlc" Feb 02 21:26:07 crc kubenswrapper[4789]: I0202 21:26:07.065285 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqdh4\" (UniqueName: \"kubernetes.io/projected/eaae7980-489d-4b4d-ae1f-02949a4f8e12-kube-api-access-tqdh4\") pod \"redhat-marketplace-54dlc\" (UID: \"eaae7980-489d-4b4d-ae1f-02949a4f8e12\") " pod="openshift-marketplace/redhat-marketplace-54dlc" Feb 02 21:26:07 crc kubenswrapper[4789]: I0202 21:26:07.111572 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-54dlc" Feb 02 21:26:07 crc kubenswrapper[4789]: I0202 21:26:07.506794 4789 generic.go:334] "Generic (PLEG): container finished" podID="8164a9db-3349-43ca-9927-b326f01ab26d" containerID="b59ed6d91875f30a8887c7b6e9e1c913ef0c07647c22775907549e9a58756aec" exitCode=0 Feb 02 21:26:07 crc kubenswrapper[4789]: I0202 21:26:07.506902 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq6p5" event={"ID":"8164a9db-3349-43ca-9927-b326f01ab26d","Type":"ContainerDied","Data":"b59ed6d91875f30a8887c7b6e9e1c913ef0c07647c22775907549e9a58756aec"} Feb 02 21:26:07 crc kubenswrapper[4789]: I0202 21:26:07.507068 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq6p5" event={"ID":"8164a9db-3349-43ca-9927-b326f01ab26d","Type":"ContainerStarted","Data":"c212466fc2c396ac17e22e0cd09dbabe6feb9041542fa3c9b9e1ddfb96197eef"} Feb 02 21:26:07 crc kubenswrapper[4789]: I0202 21:26:07.510448 4789 generic.go:334] "Generic (PLEG): container finished" podID="40611cb2-5a59-49f8-905f-ce117f332665" containerID="47fafbc87bb2595374cc9a835fe209c6e7147f8d3b02ff487522715aa4c90b1d" exitCode=0 Feb 02 21:26:07 crc kubenswrapper[4789]: I0202 21:26:07.510596 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hh84g" event={"ID":"40611cb2-5a59-49f8-905f-ce117f332665","Type":"ContainerDied","Data":"47fafbc87bb2595374cc9a835fe209c6e7147f8d3b02ff487522715aa4c90b1d"} Feb 02 21:26:07 crc kubenswrapper[4789]: I0202 21:26:07.513464 4789 generic.go:334] "Generic (PLEG): container finished" podID="6abf2c76-f615-4359-b413-545477a9a5c9" containerID="5c2484f5063ecab6568c41fb39496fdcf797cdc7932c41ee4c1803f07da13201" exitCode=0 Feb 02 21:26:07 crc kubenswrapper[4789]: I0202 21:26:07.513489 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jm6mv" event={"ID":"6abf2c76-f615-4359-b413-545477a9a5c9","Type":"ContainerDied","Data":"5c2484f5063ecab6568c41fb39496fdcf797cdc7932c41ee4c1803f07da13201"} Feb 02 21:26:07 crc kubenswrapper[4789]: I0202 21:26:07.523604 4789 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-54dlc"] Feb 02 21:26:07 crc kubenswrapper[4789]: W0202 21:26:07.544479 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeaae7980_489d_4b4d_ae1f_02949a4f8e12.slice/crio-8d143a81d731bbd5d72ed629e71b4de733acf233a0cdbc79fef24872918ebf55 WatchSource:0}: Error finding container 8d143a81d731bbd5d72ed629e71b4de733acf233a0cdbc79fef24872918ebf55: Status 404 returned error can't find the container with id 8d143a81d731bbd5d72ed629e71b4de733acf233a0cdbc79fef24872918ebf55 Feb 02 21:26:08 crc kubenswrapper[4789]: I0202 21:26:08.519961 4789 generic.go:334] "Generic (PLEG): container finished" podID="8164a9db-3349-43ca-9927-b326f01ab26d" containerID="176b6e215893a6dbf398bc52349fc6d716ae6bcf64092db271808470a2018573" exitCode=0 Feb 02 21:26:08 crc kubenswrapper[4789]: I0202 21:26:08.520119 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq6p5" event={"ID":"8164a9db-3349-43ca-9927-b326f01ab26d","Type":"ContainerDied","Data":"176b6e215893a6dbf398bc52349fc6d716ae6bcf64092db271808470a2018573"} Feb 02 21:26:08 crc kubenswrapper[4789]: I0202 21:26:08.522267 4789 generic.go:334] "Generic (PLEG): container finished" podID="eaae7980-489d-4b4d-ae1f-02949a4f8e12" containerID="571021d5d5319a49b918c3258fea8db4e6db2fd99ff03ef59315d0ecb535e6ac" exitCode=0 Feb 02 21:26:08 crc kubenswrapper[4789]: I0202 21:26:08.522317 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-54dlc" event={"ID":"eaae7980-489d-4b4d-ae1f-02949a4f8e12","Type":"ContainerDied","Data":"571021d5d5319a49b918c3258fea8db4e6db2fd99ff03ef59315d0ecb535e6ac"} Feb 02 21:26:08 crc kubenswrapper[4789]: I0202 21:26:08.522338 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-54dlc" event={"ID":"eaae7980-489d-4b4d-ae1f-02949a4f8e12","Type":"ContainerStarted","Data":"8d143a81d731bbd5d72ed629e71b4de733acf233a0cdbc79fef24872918ebf55"} Feb 02 21:26:08 crc kubenswrapper[4789]: I0202 21:26:08.525615 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hh84g" event={"ID":"40611cb2-5a59-49f8-905f-ce117f332665","Type":"ContainerStarted","Data":"a3a64752589b3b57aefcccfc2b5178c1fb7c9add349c9dd8c5a983420ee23ce9"} Feb 02 21:26:08 crc kubenswrapper[4789]: I0202 21:26:08.528294 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jm6mv" event={"ID":"6abf2c76-f615-4359-b413-545477a9a5c9","Type":"ContainerStarted","Data":"87b95b762c6f8fa9f68b02262fb26147d9e3bc99f8a9d2f30d80233406d519bf"} Feb 02 21:26:08 crc kubenswrapper[4789]: I0202 21:26:08.587915 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hh84g" podStartSLOduration=2.174588406 podStartE2EDuration="4.587894301s" podCreationTimestamp="2026-02-02 21:26:04 +0000 UTC" firstStartedPulling="2026-02-02 21:26:05.490015009 +0000 UTC m=+385.785040018" lastFinishedPulling="2026-02-02 21:26:07.903320894 +0000 UTC m=+388.198345913" observedRunningTime="2026-02-02 21:26:08.561829008 +0000 UTC m=+388.856854067" watchObservedRunningTime="2026-02-02 21:26:08.587894301 +0000 UTC m=+388.882919340" Feb 02 21:26:08 crc kubenswrapper[4789]: I0202 21:26:08.602066 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jm6mv" 
podStartSLOduration=3.107423811 podStartE2EDuration="5.602051834s" podCreationTimestamp="2026-02-02 21:26:03 +0000 UTC" firstStartedPulling="2026-02-02 21:26:05.48688336 +0000 UTC m=+385.781908419" lastFinishedPulling="2026-02-02 21:26:07.981511413 +0000 UTC m=+388.276536442" observedRunningTime="2026-02-02 21:26:08.6001257 +0000 UTC m=+388.895150709" watchObservedRunningTime="2026-02-02 21:26:08.602051834 +0000 UTC m=+388.897076853" Feb 02 21:26:09 crc kubenswrapper[4789]: I0202 21:26:09.536452 4789 generic.go:334] "Generic (PLEG): container finished" podID="eaae7980-489d-4b4d-ae1f-02949a4f8e12" containerID="0111d895691685a37514268c794e71ebf84783107bde993958c3959ef217c1f8" exitCode=0 Feb 02 21:26:09 crc kubenswrapper[4789]: I0202 21:26:09.536540 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-54dlc" event={"ID":"eaae7980-489d-4b4d-ae1f-02949a4f8e12","Type":"ContainerDied","Data":"0111d895691685a37514268c794e71ebf84783107bde993958c3959ef217c1f8"} Feb 02 21:26:09 crc kubenswrapper[4789]: I0202 21:26:09.539804 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq6p5" event={"ID":"8164a9db-3349-43ca-9927-b326f01ab26d","Type":"ContainerStarted","Data":"c0a8374bf902640268f7d969362374af94b3bd26ce7801c4b5ba0e1d06ff694c"} Feb 02 21:26:09 crc kubenswrapper[4789]: I0202 21:26:09.596193 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tq6p5" podStartSLOduration=2.170519502 podStartE2EDuration="3.596177708s" podCreationTimestamp="2026-02-02 21:26:06 +0000 UTC" firstStartedPulling="2026-02-02 21:26:07.508494447 +0000 UTC m=+387.803519466" lastFinishedPulling="2026-02-02 21:26:08.934152653 +0000 UTC m=+389.229177672" observedRunningTime="2026-02-02 21:26:09.594933172 +0000 UTC m=+389.889958191" watchObservedRunningTime="2026-02-02 21:26:09.596177708 +0000 UTC m=+389.891202717" Feb 02 21:26:10 crc kubenswrapper[4789]: I0202 21:26:10.546945 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-54dlc" event={"ID":"eaae7980-489d-4b4d-ae1f-02949a4f8e12","Type":"ContainerStarted","Data":"aedd253438704b56fca2f3a65fd877148b233975691f272f26553bc06c63344b"} Feb 02 21:26:10 crc kubenswrapper[4789]: I0202 21:26:10.567010 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-54dlc" podStartSLOduration=3.067649248 podStartE2EDuration="4.566991026s" podCreationTimestamp="2026-02-02 21:26:06 +0000 UTC" firstStartedPulling="2026-02-02 21:26:08.523653699 +0000 UTC m=+388.818678718" lastFinishedPulling="2026-02-02 21:26:10.022995467 +0000 UTC m=+390.318020496" observedRunningTime="2026-02-02 21:26:10.565987818 +0000 UTC m=+390.861012847" watchObservedRunningTime="2026-02-02 21:26:10.566991026 +0000 UTC m=+390.862016045" Feb 02 21:26:12 crc kubenswrapper[4789]: I0202 21:26:12.919756 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-mqzzd" Feb 02 21:26:13 crc kubenswrapper[4789]: I0202 21:26:13.008217 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-x2wtg"] Feb 02 21:26:14 crc kubenswrapper[4789]: I0202 21:26:14.124068 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jm6mv" Feb 02 21:26:14 crc kubenswrapper[4789]: I0202 21:26:14.125540 
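
From the pod_startup_latency_tracker entries above, podStartE2EDuration matches observedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling); for the image-registry pod earlier, which pulled nothing (both pull timestamps at the zero time 0001-01-01), the two values are equal. A worked check against the redhat-operators-hh84g entry (numbers copied from the log; the relationship is inferred from these samples, not from kubelet documentation):

    e2e  = 4.587894301                           # podStartE2EDuration, seconds
    pull = (7 + 0.903320894) - (5 + 0.490015009)
    #      21:26:07.903320894 (lastFinishedPulling) - 21:26:05.490015009 (firstStartedPulling)
    slo  = e2e - pull
    print(round(pull, 9))  # 2.413305885
    print(round(slo, 9))   # 2.174588416 -- the log reports podStartSLOduration=2.174588406,
                           # about 1e-8 away, consistent with float formatting of the tracker
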
4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jm6mv" Feb 02 21:26:14 crc kubenswrapper[4789]: I0202 21:26:14.197452 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jm6mv" Feb 02 21:26:14 crc kubenswrapper[4789]: I0202 21:26:14.646316 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jm6mv" Feb 02 21:26:14 crc kubenswrapper[4789]: I0202 21:26:14.718854 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hh84g" Feb 02 21:26:14 crc kubenswrapper[4789]: I0202 21:26:14.718925 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hh84g" Feb 02 21:26:15 crc kubenswrapper[4789]: I0202 21:26:15.780648 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hh84g" podUID="40611cb2-5a59-49f8-905f-ce117f332665" containerName="registry-server" probeResult="failure" output=< Feb 02 21:26:15 crc kubenswrapper[4789]: timeout: failed to connect service ":50051" within 1s Feb 02 21:26:15 crc kubenswrapper[4789]: > Feb 02 21:26:16 crc kubenswrapper[4789]: I0202 21:26:16.504828 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tq6p5" Feb 02 21:26:16 crc kubenswrapper[4789]: I0202 21:26:16.504916 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tq6p5" Feb 02 21:26:16 crc kubenswrapper[4789]: I0202 21:26:16.573425 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tq6p5" Feb 02 21:26:16 crc kubenswrapper[4789]: I0202 21:26:16.655272 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tq6p5" Feb 02 21:26:17 crc kubenswrapper[4789]: I0202 21:26:17.112864 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-54dlc" Feb 02 21:26:17 crc kubenswrapper[4789]: I0202 21:26:17.112975 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-54dlc" Feb 02 21:26:17 crc kubenswrapper[4789]: I0202 21:26:17.183475 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-54dlc" Feb 02 21:26:17 crc kubenswrapper[4789]: I0202 21:26:17.657243 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-54dlc" Feb 02 21:26:22 crc kubenswrapper[4789]: I0202 21:26:22.842359 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 21:26:22 crc kubenswrapper[4789]: I0202 21:26:22.842764 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Feb 02 21:26:24 crc kubenswrapper[4789]: I0202 21:26:24.783031 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hh84g" Feb 02 21:26:24 crc kubenswrapper[4789]: I0202 21:26:24.850497 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hh84g" Feb 02 21:26:38 crc kubenswrapper[4789]: I0202 21:26:38.066315 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" podUID="a9b60922-75eb-4c97-85d5-12c146fe6cb1" containerName="registry" containerID="cri-o://d44c7e357159ffb561b9fe6df14c08d45dd33a1b4bc58f7e708e7e9c13287d1d" gracePeriod=30 Feb 02 21:26:38 crc kubenswrapper[4789]: I0202 21:26:38.538882 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:26:38 crc kubenswrapper[4789]: I0202 21:26:38.720238 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a9b60922-75eb-4c97-85d5-12c146fe6cb1-trusted-ca\") pod \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " Feb 02 21:26:38 crc kubenswrapper[4789]: I0202 21:26:38.720319 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a9b60922-75eb-4c97-85d5-12c146fe6cb1-bound-sa-token\") pod \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " Feb 02 21:26:38 crc kubenswrapper[4789]: I0202 21:26:38.720356 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a9b60922-75eb-4c97-85d5-12c146fe6cb1-registry-certificates\") pod \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " Feb 02 21:26:38 crc kubenswrapper[4789]: I0202 21:26:38.720547 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " Feb 02 21:26:38 crc kubenswrapper[4789]: I0202 21:26:38.720711 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a9b60922-75eb-4c97-85d5-12c146fe6cb1-installation-pull-secrets\") pod \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " Feb 02 21:26:38 crc kubenswrapper[4789]: I0202 21:26:38.720766 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a9b60922-75eb-4c97-85d5-12c146fe6cb1-ca-trust-extracted\") pod \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " Feb 02 21:26:38 crc kubenswrapper[4789]: I0202 21:26:38.720820 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a9b60922-75eb-4c97-85d5-12c146fe6cb1-registry-tls\") pod \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " Feb 02 21:26:38 crc kubenswrapper[4789]: I0202 
21:26:38.720884 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brj2x\" (UniqueName: \"kubernetes.io/projected/a9b60922-75eb-4c97-85d5-12c146fe6cb1-kube-api-access-brj2x\") pod \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\" (UID: \"a9b60922-75eb-4c97-85d5-12c146fe6cb1\") " Feb 02 21:26:38 crc kubenswrapper[4789]: I0202 21:26:38.721852 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9b60922-75eb-4c97-85d5-12c146fe6cb1-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a9b60922-75eb-4c97-85d5-12c146fe6cb1" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:26:38 crc kubenswrapper[4789]: I0202 21:26:38.723740 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9b60922-75eb-4c97-85d5-12c146fe6cb1-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "a9b60922-75eb-4c97-85d5-12c146fe6cb1" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:26:38 crc kubenswrapper[4789]: I0202 21:26:38.729228 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9b60922-75eb-4c97-85d5-12c146fe6cb1-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "a9b60922-75eb-4c97-85d5-12c146fe6cb1" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:26:38 crc kubenswrapper[4789]: I0202 21:26:38.729360 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9b60922-75eb-4c97-85d5-12c146fe6cb1-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "a9b60922-75eb-4c97-85d5-12c146fe6cb1" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:26:38 crc kubenswrapper[4789]: I0202 21:26:38.736785 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9b60922-75eb-4c97-85d5-12c146fe6cb1-kube-api-access-brj2x" (OuterVolumeSpecName: "kube-api-access-brj2x") pod "a9b60922-75eb-4c97-85d5-12c146fe6cb1" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1"). InnerVolumeSpecName "kube-api-access-brj2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:26:38 crc kubenswrapper[4789]: I0202 21:26:38.737913 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9b60922-75eb-4c97-85d5-12c146fe6cb1-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a9b60922-75eb-4c97-85d5-12c146fe6cb1" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:26:38 crc kubenswrapper[4789]: I0202 21:26:38.739439 4789 generic.go:334] "Generic (PLEG): container finished" podID="a9b60922-75eb-4c97-85d5-12c146fe6cb1" containerID="d44c7e357159ffb561b9fe6df14c08d45dd33a1b4bc58f7e708e7e9c13287d1d" exitCode=0 Feb 02 21:26:38 crc kubenswrapper[4789]: I0202 21:26:38.739514 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" event={"ID":"a9b60922-75eb-4c97-85d5-12c146fe6cb1","Type":"ContainerDied","Data":"d44c7e357159ffb561b9fe6df14c08d45dd33a1b4bc58f7e708e7e9c13287d1d"} Feb 02 21:26:38 crc kubenswrapper[4789]: I0202 21:26:38.739564 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" event={"ID":"a9b60922-75eb-4c97-85d5-12c146fe6cb1","Type":"ContainerDied","Data":"1d48e75fd44d0e14cf9b1fbf9b6851cd3b2329e46d78740c0d4fdcc05ce9dd5c"} Feb 02 21:26:38 crc kubenswrapper[4789]: I0202 21:26:38.739635 4789 scope.go:117] "RemoveContainer" containerID="d44c7e357159ffb561b9fe6df14c08d45dd33a1b4bc58f7e708e7e9c13287d1d" Feb 02 21:26:38 crc kubenswrapper[4789]: I0202 21:26:38.740053 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-x2wtg" Feb 02 21:26:38 crc kubenswrapper[4789]: I0202 21:26:38.742261 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "a9b60922-75eb-4c97-85d5-12c146fe6cb1" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 02 21:26:38 crc kubenswrapper[4789]: I0202 21:26:38.751246 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9b60922-75eb-4c97-85d5-12c146fe6cb1-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "a9b60922-75eb-4c97-85d5-12c146fe6cb1" (UID: "a9b60922-75eb-4c97-85d5-12c146fe6cb1"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:26:38 crc kubenswrapper[4789]: I0202 21:26:38.793279 4789 scope.go:117] "RemoveContainer" containerID="d44c7e357159ffb561b9fe6df14c08d45dd33a1b4bc58f7e708e7e9c13287d1d" Feb 02 21:26:38 crc kubenswrapper[4789]: E0202 21:26:38.794136 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d44c7e357159ffb561b9fe6df14c08d45dd33a1b4bc58f7e708e7e9c13287d1d\": container with ID starting with d44c7e357159ffb561b9fe6df14c08d45dd33a1b4bc58f7e708e7e9c13287d1d not found: ID does not exist" containerID="d44c7e357159ffb561b9fe6df14c08d45dd33a1b4bc58f7e708e7e9c13287d1d" Feb 02 21:26:38 crc kubenswrapper[4789]: I0202 21:26:38.794188 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d44c7e357159ffb561b9fe6df14c08d45dd33a1b4bc58f7e708e7e9c13287d1d"} err="failed to get container status \"d44c7e357159ffb561b9fe6df14c08d45dd33a1b4bc58f7e708e7e9c13287d1d\": rpc error: code = NotFound desc = could not find container \"d44c7e357159ffb561b9fe6df14c08d45dd33a1b4bc58f7e708e7e9c13287d1d\": container with ID starting with d44c7e357159ffb561b9fe6df14c08d45dd33a1b4bc58f7e708e7e9c13287d1d not found: ID does not exist" Feb 02 21:26:38 crc kubenswrapper[4789]: I0202 21:26:38.822502 4789 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a9b60922-75eb-4c97-85d5-12c146fe6cb1-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 21:26:38 crc kubenswrapper[4789]: I0202 21:26:38.822541 4789 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a9b60922-75eb-4c97-85d5-12c146fe6cb1-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 21:26:38 crc kubenswrapper[4789]: I0202 21:26:38.822559 4789 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a9b60922-75eb-4c97-85d5-12c146fe6cb1-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 02 21:26:38 crc kubenswrapper[4789]: I0202 21:26:38.822598 4789 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a9b60922-75eb-4c97-85d5-12c146fe6cb1-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 02 21:26:38 crc kubenswrapper[4789]: I0202 21:26:38.822617 4789 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a9b60922-75eb-4c97-85d5-12c146fe6cb1-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 02 21:26:38 crc kubenswrapper[4789]: I0202 21:26:38.822634 4789 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a9b60922-75eb-4c97-85d5-12c146fe6cb1-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 02 21:26:38 crc kubenswrapper[4789]: I0202 21:26:38.822648 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brj2x\" (UniqueName: \"kubernetes.io/projected/a9b60922-75eb-4c97-85d5-12c146fe6cb1-kube-api-access-brj2x\") on node \"crc\" DevicePath \"\"" Feb 02 21:26:39 crc kubenswrapper[4789]: I0202 21:26:39.088809 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-x2wtg"] Feb 02 21:26:39 crc kubenswrapper[4789]: I0202 21:26:39.097616 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-x2wtg"] Feb 02 21:26:40 crc kubenswrapper[4789]: I0202 21:26:40.432276 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9b60922-75eb-4c97-85d5-12c146fe6cb1" path="/var/lib/kubelet/pods/a9b60922-75eb-4c97-85d5-12c146fe6cb1/volumes" Feb 02 21:26:52 crc kubenswrapper[4789]: I0202 21:26:52.842005 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 21:26:52 crc kubenswrapper[4789]: I0202 21:26:52.842726 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 21:26:52 crc kubenswrapper[4789]: I0202 21:26:52.842806 4789 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" Feb 02 21:26:52 crc kubenswrapper[4789]: I0202 21:26:52.843705 4789 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6a12ca5c9003220282cb93388ad24eb2dec9d907ff7cf49d91e52f983ba6b208"} pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 21:26:52 crc kubenswrapper[4789]: I0202 21:26:52.843799 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" containerID="cri-o://6a12ca5c9003220282cb93388ad24eb2dec9d907ff7cf49d91e52f983ba6b208" gracePeriod=600 Feb 02 21:26:53 crc kubenswrapper[4789]: I0202 21:26:53.845300 4789 generic.go:334] "Generic (PLEG): container finished" podID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerID="6a12ca5c9003220282cb93388ad24eb2dec9d907ff7cf49d91e52f983ba6b208" exitCode=0 Feb 02 21:26:53 crc kubenswrapper[4789]: I0202 21:26:53.845416 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" event={"ID":"bdf018b4-1451-4d37-be6e-05802b67c73e","Type":"ContainerDied","Data":"6a12ca5c9003220282cb93388ad24eb2dec9d907ff7cf49d91e52f983ba6b208"} Feb 02 21:26:53 crc kubenswrapper[4789]: I0202 21:26:53.846230 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" event={"ID":"bdf018b4-1451-4d37-be6e-05802b67c73e","Type":"ContainerStarted","Data":"56513053a4eff1a4ef3d67ad266c32d8d1fc9194a1d2f87f6627abedca5761d0"} Feb 02 21:26:53 crc kubenswrapper[4789]: I0202 21:26:53.846279 4789 scope.go:117] "RemoveContainer" containerID="b108a17d437cce7e94365267d96e99e84944d9dd12174c5acb380bc1a1f9c885" Feb 02 21:29:22 crc kubenswrapper[4789]: I0202 21:29:22.842161 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
Feb 02 21:29:22 crc kubenswrapper[4789]: I0202 21:29:22.842161 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 21:29:22 crc kubenswrapper[4789]: I0202 21:29:22.843010 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 21:29:52 crc kubenswrapper[4789]: I0202 21:29:52.842324 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 21:29:52 crc kubenswrapper[4789]: I0202 21:29:52.843126 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 21:30:00 crc kubenswrapper[4789]: I0202 21:30:00.219837 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501130-2fpc8"]
Feb 02 21:30:00 crc kubenswrapper[4789]: E0202 21:30:00.220553 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9b60922-75eb-4c97-85d5-12c146fe6cb1" containerName="registry"
Feb 02 21:30:00 crc kubenswrapper[4789]: I0202 21:30:00.220577 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9b60922-75eb-4c97-85d5-12c146fe6cb1" containerName="registry"
Feb 02 21:30:00 crc kubenswrapper[4789]: I0202 21:30:00.220864 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9b60922-75eb-4c97-85d5-12c146fe6cb1" containerName="registry"
Feb 02 21:30:00 crc kubenswrapper[4789]: I0202 21:30:00.221723 4789 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501130-2fpc8" Feb 02 21:30:00 crc kubenswrapper[4789]: I0202 21:30:00.223860 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 21:30:00 crc kubenswrapper[4789]: I0202 21:30:00.225980 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 21:30:00 crc kubenswrapper[4789]: I0202 21:30:00.229218 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501130-2fpc8"] Feb 02 21:30:00 crc kubenswrapper[4789]: I0202 21:30:00.229637 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgm6z\" (UniqueName: \"kubernetes.io/projected/69f791f2-1e25-45d3-89bd-5269712d52b2-kube-api-access-fgm6z\") pod \"collect-profiles-29501130-2fpc8\" (UID: \"69f791f2-1e25-45d3-89bd-5269712d52b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501130-2fpc8" Feb 02 21:30:00 crc kubenswrapper[4789]: I0202 21:30:00.229790 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/69f791f2-1e25-45d3-89bd-5269712d52b2-secret-volume\") pod \"collect-profiles-29501130-2fpc8\" (UID: \"69f791f2-1e25-45d3-89bd-5269712d52b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501130-2fpc8" Feb 02 21:30:00 crc kubenswrapper[4789]: I0202 21:30:00.229852 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69f791f2-1e25-45d3-89bd-5269712d52b2-config-volume\") pod \"collect-profiles-29501130-2fpc8\" (UID: \"69f791f2-1e25-45d3-89bd-5269712d52b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501130-2fpc8" Feb 02 21:30:00 crc kubenswrapper[4789]: I0202 21:30:00.330443 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgm6z\" (UniqueName: \"kubernetes.io/projected/69f791f2-1e25-45d3-89bd-5269712d52b2-kube-api-access-fgm6z\") pod \"collect-profiles-29501130-2fpc8\" (UID: \"69f791f2-1e25-45d3-89bd-5269712d52b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501130-2fpc8" Feb 02 21:30:00 crc kubenswrapper[4789]: I0202 21:30:00.330563 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/69f791f2-1e25-45d3-89bd-5269712d52b2-secret-volume\") pod \"collect-profiles-29501130-2fpc8\" (UID: \"69f791f2-1e25-45d3-89bd-5269712d52b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501130-2fpc8" Feb 02 21:30:00 crc kubenswrapper[4789]: I0202 21:30:00.330672 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69f791f2-1e25-45d3-89bd-5269712d52b2-config-volume\") pod \"collect-profiles-29501130-2fpc8\" (UID: \"69f791f2-1e25-45d3-89bd-5269712d52b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501130-2fpc8" Feb 02 21:30:00 crc kubenswrapper[4789]: I0202 21:30:00.332424 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69f791f2-1e25-45d3-89bd-5269712d52b2-config-volume\") pod 
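The VerifyControllerAttachedVolume, MountVolume started, and MountVolume.SetUp succeeded lines above (the remaining confirmations continue just below) are a single reconcile pass moving each volume of the new pod from desired state to actual state. A simplified Go sketch of the pattern, using illustrative types rather than the kubelet's:

    package main

    import "fmt"

    // reconcile mounts every volume that is desired but not yet in the
    // actual state, mirroring the reconciler_common/operation_generator
    // lines above. The types here are illustrative, not the kubelet's.
    func reconcile(desired []string, mounted map[string]bool, setUp func(string) error) {
        for _, vol := range desired {
            if mounted[vol] {
                continue
            }
            fmt.Printf("MountVolume started for volume %q\n", vol)
            if err := setUp(vol); err != nil {
                // Left unmounted; a later reconcile pass retries.
                fmt.Printf("SetUp failed for %q: %v\n", vol, err)
                continue
            }
            mounted[vol] = true
            fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", vol)
        }
    }

    func main() {
        mounted := map[string]bool{}
        reconcile([]string{"kube-api-access-fgm6z", "secret-volume", "config-volume"},
            mounted, func(string) error { return nil })
    }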
\"collect-profiles-29501130-2fpc8\" (UID: \"69f791f2-1e25-45d3-89bd-5269712d52b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501130-2fpc8" Feb 02 21:30:00 crc kubenswrapper[4789]: I0202 21:30:00.338252 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/69f791f2-1e25-45d3-89bd-5269712d52b2-secret-volume\") pod \"collect-profiles-29501130-2fpc8\" (UID: \"69f791f2-1e25-45d3-89bd-5269712d52b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501130-2fpc8" Feb 02 21:30:00 crc kubenswrapper[4789]: I0202 21:30:00.352391 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgm6z\" (UniqueName: \"kubernetes.io/projected/69f791f2-1e25-45d3-89bd-5269712d52b2-kube-api-access-fgm6z\") pod \"collect-profiles-29501130-2fpc8\" (UID: \"69f791f2-1e25-45d3-89bd-5269712d52b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501130-2fpc8" Feb 02 21:30:00 crc kubenswrapper[4789]: I0202 21:30:00.553516 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501130-2fpc8" Feb 02 21:30:00 crc kubenswrapper[4789]: I0202 21:30:00.845824 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501130-2fpc8"] Feb 02 21:30:01 crc kubenswrapper[4789]: I0202 21:30:01.378304 4789 generic.go:334] "Generic (PLEG): container finished" podID="69f791f2-1e25-45d3-89bd-5269712d52b2" containerID="73ccf7635eba9ed44e9de542e269c9d0af37310053e1fba884076dbda477de85" exitCode=0 Feb 02 21:30:01 crc kubenswrapper[4789]: I0202 21:30:01.378394 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501130-2fpc8" event={"ID":"69f791f2-1e25-45d3-89bd-5269712d52b2","Type":"ContainerDied","Data":"73ccf7635eba9ed44e9de542e269c9d0af37310053e1fba884076dbda477de85"} Feb 02 21:30:01 crc kubenswrapper[4789]: I0202 21:30:01.378481 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501130-2fpc8" event={"ID":"69f791f2-1e25-45d3-89bd-5269712d52b2","Type":"ContainerStarted","Data":"ff9dbb0c148bf6609c68c143af461f4ea2bdc1631aee1fd2d314e0ca253bda0f"} Feb 02 21:30:02 crc kubenswrapper[4789]: I0202 21:30:02.629188 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501130-2fpc8" Feb 02 21:30:02 crc kubenswrapper[4789]: I0202 21:30:02.764060 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgm6z\" (UniqueName: \"kubernetes.io/projected/69f791f2-1e25-45d3-89bd-5269712d52b2-kube-api-access-fgm6z\") pod \"69f791f2-1e25-45d3-89bd-5269712d52b2\" (UID: \"69f791f2-1e25-45d3-89bd-5269712d52b2\") " Feb 02 21:30:02 crc kubenswrapper[4789]: I0202 21:30:02.764160 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69f791f2-1e25-45d3-89bd-5269712d52b2-config-volume\") pod \"69f791f2-1e25-45d3-89bd-5269712d52b2\" (UID: \"69f791f2-1e25-45d3-89bd-5269712d52b2\") " Feb 02 21:30:02 crc kubenswrapper[4789]: I0202 21:30:02.764241 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/69f791f2-1e25-45d3-89bd-5269712d52b2-secret-volume\") pod \"69f791f2-1e25-45d3-89bd-5269712d52b2\" (UID: \"69f791f2-1e25-45d3-89bd-5269712d52b2\") " Feb 02 21:30:02 crc kubenswrapper[4789]: I0202 21:30:02.765010 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69f791f2-1e25-45d3-89bd-5269712d52b2-config-volume" (OuterVolumeSpecName: "config-volume") pod "69f791f2-1e25-45d3-89bd-5269712d52b2" (UID: "69f791f2-1e25-45d3-89bd-5269712d52b2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:30:02 crc kubenswrapper[4789]: I0202 21:30:02.772314 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69f791f2-1e25-45d3-89bd-5269712d52b2-kube-api-access-fgm6z" (OuterVolumeSpecName: "kube-api-access-fgm6z") pod "69f791f2-1e25-45d3-89bd-5269712d52b2" (UID: "69f791f2-1e25-45d3-89bd-5269712d52b2"). InnerVolumeSpecName "kube-api-access-fgm6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:30:02 crc kubenswrapper[4789]: I0202 21:30:02.773364 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69f791f2-1e25-45d3-89bd-5269712d52b2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "69f791f2-1e25-45d3-89bd-5269712d52b2" (UID: "69f791f2-1e25-45d3-89bd-5269712d52b2"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:30:02 crc kubenswrapper[4789]: I0202 21:30:02.866193 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgm6z\" (UniqueName: \"kubernetes.io/projected/69f791f2-1e25-45d3-89bd-5269712d52b2-kube-api-access-fgm6z\") on node \"crc\" DevicePath \"\"" Feb 02 21:30:02 crc kubenswrapper[4789]: I0202 21:30:02.866244 4789 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69f791f2-1e25-45d3-89bd-5269712d52b2-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 21:30:02 crc kubenswrapper[4789]: I0202 21:30:02.866265 4789 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/69f791f2-1e25-45d3-89bd-5269712d52b2-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 21:30:03 crc kubenswrapper[4789]: I0202 21:30:03.394861 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501130-2fpc8" event={"ID":"69f791f2-1e25-45d3-89bd-5269712d52b2","Type":"ContainerDied","Data":"ff9dbb0c148bf6609c68c143af461f4ea2bdc1631aee1fd2d314e0ca253bda0f"} Feb 02 21:30:03 crc kubenswrapper[4789]: I0202 21:30:03.395423 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff9dbb0c148bf6609c68c143af461f4ea2bdc1631aee1fd2d314e0ca253bda0f" Feb 02 21:30:03 crc kubenswrapper[4789]: I0202 21:30:03.394993 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501130-2fpc8" Feb 02 21:30:22 crc kubenswrapper[4789]: I0202 21:30:22.842220 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 21:30:22 crc kubenswrapper[4789]: I0202 21:30:22.843805 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 21:30:22 crc kubenswrapper[4789]: I0202 21:30:22.843881 4789 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" Feb 02 21:30:22 crc kubenswrapper[4789]: I0202 21:30:22.844697 4789 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"56513053a4eff1a4ef3d67ad266c32d8d1fc9194a1d2f87f6627abedca5761d0"} pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 21:30:22 crc kubenswrapper[4789]: I0202 21:30:22.844823 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" containerID="cri-o://56513053a4eff1a4ef3d67ad266c32d8d1fc9194a1d2f87f6627abedca5761d0" gracePeriod=600 Feb 02 21:30:23 crc kubenswrapper[4789]: I0202 21:30:23.602990 4789 generic.go:334] "Generic (PLEG): container finished" 
podID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerID="56513053a4eff1a4ef3d67ad266c32d8d1fc9194a1d2f87f6627abedca5761d0" exitCode=0 Feb 02 21:30:23 crc kubenswrapper[4789]: I0202 21:30:23.603394 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" event={"ID":"bdf018b4-1451-4d37-be6e-05802b67c73e","Type":"ContainerDied","Data":"56513053a4eff1a4ef3d67ad266c32d8d1fc9194a1d2f87f6627abedca5761d0"} Feb 02 21:30:23 crc kubenswrapper[4789]: I0202 21:30:23.603432 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" event={"ID":"bdf018b4-1451-4d37-be6e-05802b67c73e","Type":"ContainerStarted","Data":"1ec54d6d2f9dd12ba4581ba8d6bcba6253f115c225597c28969e0527a84fb4af"} Feb 02 21:30:23 crc kubenswrapper[4789]: I0202 21:30:23.603456 4789 scope.go:117] "RemoveContainer" containerID="6a12ca5c9003220282cb93388ad24eb2dec9d907ff7cf49d91e52f983ba6b208" Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.663319 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-w8vkt"] Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.664209 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="ovn-controller" containerID="cri-o://021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583" gracePeriod=30 Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.664323 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="kube-rbac-proxy-node" containerID="cri-o://29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103" gracePeriod=30 Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.664290 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="nbdb" containerID="cri-o://05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461" gracePeriod=30 Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.664376 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="northd" containerID="cri-o://9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea" gracePeriod=30 Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.664306 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364" gracePeriod=30 Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.664362 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="ovn-acl-logging" containerID="cri-o://fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601" gracePeriod=30 Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.664584 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" 
podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="sbdb" containerID="cri-o://f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e" gracePeriod=30 Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.716495 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="ovnkube-controller" containerID="cri-o://7129ba83a6eb1565ad3687f3eceafd29c68b5e8d14512ae91de8f36b552900a7" gracePeriod=30 Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.927685 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w8vkt_2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6/ovnkube-controller/3.log" Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.929747 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w8vkt_2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6/ovn-acl-logging/0.log" Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.930157 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w8vkt_2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6/ovn-controller/0.log" Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.930499 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.983577 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8svlp"] Feb 02 21:32:16 crc kubenswrapper[4789]: E0202 21:32:16.984068 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="ovnkube-controller" Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.984087 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="ovnkube-controller" Feb 02 21:32:16 crc kubenswrapper[4789]: E0202 21:32:16.984100 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="sbdb" Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.984108 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="sbdb" Feb 02 21:32:16 crc kubenswrapper[4789]: E0202 21:32:16.984120 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="nbdb" Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.984128 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="nbdb" Feb 02 21:32:16 crc kubenswrapper[4789]: E0202 21:32:16.984143 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="ovnkube-controller" Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.984151 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="ovnkube-controller" Feb 02 21:32:16 crc kubenswrapper[4789]: E0202 21:32:16.984163 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="kube-rbac-proxy-ovn-metrics" Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.984171 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" 
containerName="kube-rbac-proxy-ovn-metrics" Feb 02 21:32:16 crc kubenswrapper[4789]: E0202 21:32:16.984182 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="kubecfg-setup" Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.984190 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="kubecfg-setup" Feb 02 21:32:16 crc kubenswrapper[4789]: E0202 21:32:16.984202 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="kube-rbac-proxy-node" Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.984209 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="kube-rbac-proxy-node" Feb 02 21:32:16 crc kubenswrapper[4789]: E0202 21:32:16.984221 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="ovnkube-controller" Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.984228 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="ovnkube-controller" Feb 02 21:32:16 crc kubenswrapper[4789]: E0202 21:32:16.984239 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="ovn-controller" Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.984247 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="ovn-controller" Feb 02 21:32:16 crc kubenswrapper[4789]: E0202 21:32:16.984259 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="ovnkube-controller" Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.984267 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="ovnkube-controller" Feb 02 21:32:16 crc kubenswrapper[4789]: E0202 21:32:16.984281 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69f791f2-1e25-45d3-89bd-5269712d52b2" containerName="collect-profiles" Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.984290 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="69f791f2-1e25-45d3-89bd-5269712d52b2" containerName="collect-profiles" Feb 02 21:32:16 crc kubenswrapper[4789]: E0202 21:32:16.984300 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="ovn-acl-logging" Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.984309 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="ovn-acl-logging" Feb 02 21:32:16 crc kubenswrapper[4789]: E0202 21:32:16.984317 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="northd" Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.984325 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="northd" Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.984434 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="ovnkube-controller" Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.984446 4789 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="ovn-acl-logging" Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.984457 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="69f791f2-1e25-45d3-89bd-5269712d52b2" containerName="collect-profiles" Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.984473 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="ovnkube-controller" Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.984484 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="kube-rbac-proxy-node" Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.984495 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="ovn-controller" Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.984504 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="nbdb" Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.984516 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="northd" Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.984525 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="ovnkube-controller" Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.984534 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="sbdb" Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.984543 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="kube-rbac-proxy-ovn-metrics" Feb 02 21:32:16 crc kubenswrapper[4789]: E0202 21:32:16.984676 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="ovnkube-controller" Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.984688 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="ovnkube-controller" Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.984792 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="ovnkube-controller" Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.985024 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerName="ovnkube-controller" Feb 02 21:32:16 crc kubenswrapper[4789]: I0202 21:32:16.986898 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.061484 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-run-openvswitch\") pod \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.061551 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-cni-bin\") pod \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.061627 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" (UID: "2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.061644 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-var-lib-openvswitch\") pod \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.061715 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-env-overrides\") pod \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.061724 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" (UID: "2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.061759 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-ovn-node-metrics-cert\") pod \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.061783 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-run-systemd\") pod \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.061798 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-kubelet\") pod \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.061792 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" (UID: "2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.061828 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-ovnkube-script-lib\") pod \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.061849 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-cni-netd\") pod \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.061865 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-etc-openvswitch\") pod \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.061885 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-slash\") pod \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.061927 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-ovnkube-config\") pod \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.061941 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-node-log\") pod \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.061984 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-run-ovn\") pod \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.062006 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-run-ovn-kubernetes\") pod \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.062034 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnmmf\" (UniqueName: \"kubernetes.io/projected/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-kube-api-access-bnmmf\") pod \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.062053 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.062076 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-log-socket\") pod \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.062097 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-run-netns\") pod \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.062119 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-systemd-units\") pod \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\" (UID: \"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6\") " Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.062483 4789 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.062499 4789 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.062513 4789 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 02 21:32:17 crc 
kubenswrapper[4789]: I0202 21:32:17.062545 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" (UID: "2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.062570 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" (UID: "2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.063075 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" (UID: "2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.063110 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" (UID: "2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.063134 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" (UID: "2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.063151 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-slash" (OuterVolumeSpecName: "host-slash") pod "2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" (UID: "2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.063492 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" (UID: "2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.063518 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-node-log" (OuterVolumeSpecName: "node-log") pod "2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" (UID: "2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.063535 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" (UID: "2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.063553 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" (UID: "2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.064476 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-log-socket" (OuterVolumeSpecName: "log-socket") pod "2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" (UID: "2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.064496 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" (UID: "2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.064540 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" (UID: "2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.064794 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" (UID: "2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.068541 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-kube-api-access-bnmmf" (OuterVolumeSpecName: "kube-api-access-bnmmf") pod "2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" (UID: "2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6"). InnerVolumeSpecName "kube-api-access-bnmmf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.069034 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" (UID: "2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.084319 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" (UID: "2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.165672 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-host-run-netns\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.165730 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-host-kubelet\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.165750 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-run-openvswitch\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.165773 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-node-log\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.165800 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-var-lib-openvswitch\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.165828 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/947c802f-a386-4041-8752-44f83a964cd1-ovn-node-metrics-cert\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.165851 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.165878 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/947c802f-a386-4041-8752-44f83a964cd1-ovnkube-script-lib\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.165908 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-host-slash\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.165927 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-etc-openvswitch\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.165946 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-log-socket\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.165964 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqd9b\" (UniqueName: \"kubernetes.io/projected/947c802f-a386-4041-8752-44f83a964cd1-kube-api-access-xqd9b\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.165988 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/947c802f-a386-4041-8752-44f83a964cd1-ovnkube-config\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.166033 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-host-cni-bin\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.166054 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-run-ovn\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 
21:32:17.166073 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-run-systemd\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.166108 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-host-cni-netd\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.166173 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-systemd-units\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.166211 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-host-run-ovn-kubernetes\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.166232 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/947c802f-a386-4041-8752-44f83a964cd1-env-overrides\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.166274 4789 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.166289 4789 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.166301 4789 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.166313 4789 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.166324 4789 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.166335 4789 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-cni-netd\") 
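Unmounts for pod 2e38c22e-... and mounts for its replacement 947c802f-... interleave safely above because the reconciler tracks every volume under a unique name that embeds the pod UID, so the two generations of, say, host-run-netns never collide. A Go sketch of that keying; the exact string format below is illustrative and only loosely mirrors the UniqueName fields in the log:

    package main

    import "fmt"

    // uniqueVolumeName shows why the two pod generations never collide:
    // volumes are keyed by plugin + pod UID + volume name. The format is
    // an assumption for illustration, not the kubelet's actual scheme.
    func uniqueVolumeName(plugin, podUID, volume string) string {
        return fmt.Sprintf("%s/%s-%s", plugin, podUID, volume)
    }

    func main() {
        oldGen := uniqueVolumeName("kubernetes.io/host-path",
            "2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6", "host-run-netns")
        newGen := uniqueVolumeName("kubernetes.io/host-path",
            "947c802f-a386-4041-8752-44f83a964cd1", "host-run-netns")
        fmt.Println(oldGen != newGen)
        fmt.Println(oldGen)
        fmt.Println(newGen)
    }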
on node \"crc\" DevicePath \"\"" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.166347 4789 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.166358 4789 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-slash\") on node \"crc\" DevicePath \"\"" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.166370 4789 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.166381 4789 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-node-log\") on node \"crc\" DevicePath \"\"" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.166392 4789 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.166403 4789 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.166415 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnmmf\" (UniqueName: \"kubernetes.io/projected/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-kube-api-access-bnmmf\") on node \"crc\" DevicePath \"\"" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.166428 4789 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.166439 4789 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-log-socket\") on node \"crc\" DevicePath \"\"" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.166452 4789 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.166462 4789 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.268013 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-host-run-ovn-kubernetes\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.268063 4789 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/947c802f-a386-4041-8752-44f83a964cd1-env-overrides\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.268094 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-host-run-netns\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.268120 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-host-kubelet\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.268143 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-run-openvswitch\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.268173 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-node-log\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.268213 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-var-lib-openvswitch\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.268236 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-host-kubelet\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.268256 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-run-openvswitch\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.268254 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/947c802f-a386-4041-8752-44f83a964cd1-ovn-node-metrics-cert\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.268320 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.268341 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-host-run-netns\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.268359 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/947c802f-a386-4041-8752-44f83a964cd1-ovnkube-script-lib\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.268391 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-var-lib-openvswitch\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.268409 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-host-slash\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.268433 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-node-log\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.268443 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-etc-openvswitch\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.268197 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-host-run-ovn-kubernetes\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.268474 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-log-socket\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.268501 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqd9b\" (UniqueName: \"kubernetes.io/projected/947c802f-a386-4041-8752-44f83a964cd1-kube-api-access-xqd9b\") 
pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.268538 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/947c802f-a386-4041-8752-44f83a964cd1-ovnkube-config\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.268621 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-host-cni-bin\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.268655 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-run-ovn\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.268686 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-run-systemd\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.268686 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.268728 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-host-cni-netd\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.268763 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-systemd-units\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.268849 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-systemd-units\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.269182 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/947c802f-a386-4041-8752-44f83a964cd1-env-overrides\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.269266 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-etc-openvswitch\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.269313 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-host-slash\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.269357 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-run-ovn\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.269399 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-host-cni-bin\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.269441 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-log-socket\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.269485 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-run-systemd\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.269489 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/947c802f-a386-4041-8752-44f83a964cd1-ovnkube-script-lib\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.269510 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/947c802f-a386-4041-8752-44f83a964cd1-host-cni-netd\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.269662 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/947c802f-a386-4041-8752-44f83a964cd1-ovnkube-config\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.274758 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/947c802f-a386-4041-8752-44f83a964cd1-ovn-node-metrics-cert\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.294297 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqd9b\" (UniqueName: \"kubernetes.io/projected/947c802f-a386-4041-8752-44f83a964cd1-kube-api-access-xqd9b\") pod \"ovnkube-node-8svlp\" (UID: \"947c802f-a386-4041-8752-44f83a964cd1\") " pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.304672 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.371917 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2x5ws_70a32268-2a2d-47f3-9fc6-4281b8dc6a02/kube-multus/2.log" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.373205 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2x5ws_70a32268-2a2d-47f3-9fc6-4281b8dc6a02/kube-multus/1.log" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.373270 4789 generic.go:334] "Generic (PLEG): container finished" podID="70a32268-2a2d-47f3-9fc6-4281b8dc6a02" containerID="9d3648a8bdabecf0fe7e95880b046a5f5b8a91912f23059a00680ec150976c5f" exitCode=2 Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.373359 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2x5ws" event={"ID":"70a32268-2a2d-47f3-9fc6-4281b8dc6a02","Type":"ContainerDied","Data":"9d3648a8bdabecf0fe7e95880b046a5f5b8a91912f23059a00680ec150976c5f"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.373403 4789 scope.go:117] "RemoveContainer" containerID="75cf318c3d63c5316cbeba8abb93919973f88b415ed7116b55333813b8a889fa" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.374221 4789 scope.go:117] "RemoveContainer" containerID="9d3648a8bdabecf0fe7e95880b046a5f5b8a91912f23059a00680ec150976c5f" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.376292 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" event={"ID":"947c802f-a386-4041-8752-44f83a964cd1","Type":"ContainerStarted","Data":"066137ad47ee55e9a4bf990b513b9ad04cd2e446c97dd0d1616537b95a288226"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.380522 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w8vkt_2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6/ovnkube-controller/3.log" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.384860 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w8vkt_2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6/ovn-acl-logging/0.log" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.385819 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w8vkt_2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6/ovn-controller/0.log" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386403 4789 generic.go:334] "Generic (PLEG): container finished" podID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerID="7129ba83a6eb1565ad3687f3eceafd29c68b5e8d14512ae91de8f36b552900a7" exitCode=0 Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386439 4789 generic.go:334] "Generic (PLEG): container 
finished" podID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerID="f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e" exitCode=0 Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386451 4789 generic.go:334] "Generic (PLEG): container finished" podID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerID="05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461" exitCode=0 Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386463 4789 generic.go:334] "Generic (PLEG): container finished" podID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerID="9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea" exitCode=0 Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386477 4789 generic.go:334] "Generic (PLEG): container finished" podID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerID="047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364" exitCode=0 Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386488 4789 generic.go:334] "Generic (PLEG): container finished" podID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerID="29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103" exitCode=0 Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386499 4789 generic.go:334] "Generic (PLEG): container finished" podID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerID="fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601" exitCode=143 Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386510 4789 generic.go:334] "Generic (PLEG): container finished" podID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" containerID="021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583" exitCode=143 Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386538 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" event={"ID":"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6","Type":"ContainerDied","Data":"7129ba83a6eb1565ad3687f3eceafd29c68b5e8d14512ae91de8f36b552900a7"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386580 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" event={"ID":"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6","Type":"ContainerDied","Data":"f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386619 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" event={"ID":"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6","Type":"ContainerDied","Data":"05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386639 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" event={"ID":"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6","Type":"ContainerDied","Data":"9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386654 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" event={"ID":"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6","Type":"ContainerDied","Data":"047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386669 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" 
event={"ID":"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6","Type":"ContainerDied","Data":"29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386684 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7129ba83a6eb1565ad3687f3eceafd29c68b5e8d14512ae91de8f36b552900a7"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386698 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386707 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386716 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386724 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386733 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386741 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386749 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386758 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386766 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386777 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" event={"ID":"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6","Type":"ContainerDied","Data":"fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386791 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7129ba83a6eb1565ad3687f3eceafd29c68b5e8d14512ae91de8f36b552900a7"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386801 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386810 4789 pod_container_deletor.go:114] 
"Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386819 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386828 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386837 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386846 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386856 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386865 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386873 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386884 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" event={"ID":"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6","Type":"ContainerDied","Data":"021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386896 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7129ba83a6eb1565ad3687f3eceafd29c68b5e8d14512ae91de8f36b552900a7"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386905 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386914 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386923 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386932 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386942 4789 pod_container_deletor.go:114] 
"Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386952 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386961 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386970 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386979 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.386991 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" event={"ID":"2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6","Type":"ContainerDied","Data":"86a6091cd023dfaf81ce0bb5e71d75ae4bf89cf422d491ec479ccaacb3d2e3bf"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.387004 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7129ba83a6eb1565ad3687f3eceafd29c68b5e8d14512ae91de8f36b552900a7"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.387014 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.387023 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.387032 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.387040 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.387049 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.387057 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.387065 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.387074 4789 pod_container_deletor.go:114] 
"Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.387082 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355"} Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.387216 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-w8vkt" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.416058 4789 scope.go:117] "RemoveContainer" containerID="7129ba83a6eb1565ad3687f3eceafd29c68b5e8d14512ae91de8f36b552900a7" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.451788 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-w8vkt"] Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.453457 4789 scope.go:117] "RemoveContainer" containerID="877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.457328 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-w8vkt"] Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.470299 4789 scope.go:117] "RemoveContainer" containerID="f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.486481 4789 scope.go:117] "RemoveContainer" containerID="05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.553626 4789 scope.go:117] "RemoveContainer" containerID="9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.566579 4789 scope.go:117] "RemoveContainer" containerID="047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.578377 4789 scope.go:117] "RemoveContainer" containerID="29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.590297 4789 scope.go:117] "RemoveContainer" containerID="fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.602350 4789 scope.go:117] "RemoveContainer" containerID="021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.615430 4789 scope.go:117] "RemoveContainer" containerID="e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.632163 4789 scope.go:117] "RemoveContainer" containerID="7129ba83a6eb1565ad3687f3eceafd29c68b5e8d14512ae91de8f36b552900a7" Feb 02 21:32:17 crc kubenswrapper[4789]: E0202 21:32:17.633892 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7129ba83a6eb1565ad3687f3eceafd29c68b5e8d14512ae91de8f36b552900a7\": container with ID starting with 7129ba83a6eb1565ad3687f3eceafd29c68b5e8d14512ae91de8f36b552900a7 not found: ID does not exist" containerID="7129ba83a6eb1565ad3687f3eceafd29c68b5e8d14512ae91de8f36b552900a7" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.633951 4789 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7129ba83a6eb1565ad3687f3eceafd29c68b5e8d14512ae91de8f36b552900a7"} err="failed to get container status \"7129ba83a6eb1565ad3687f3eceafd29c68b5e8d14512ae91de8f36b552900a7\": rpc error: code = NotFound desc = could not find container \"7129ba83a6eb1565ad3687f3eceafd29c68b5e8d14512ae91de8f36b552900a7\": container with ID starting with 7129ba83a6eb1565ad3687f3eceafd29c68b5e8d14512ae91de8f36b552900a7 not found: ID does not exist" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.633985 4789 scope.go:117] "RemoveContainer" containerID="877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc" Feb 02 21:32:17 crc kubenswrapper[4789]: E0202 21:32:17.634890 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc\": container with ID starting with 877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc not found: ID does not exist" containerID="877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.634942 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc"} err="failed to get container status \"877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc\": rpc error: code = NotFound desc = could not find container \"877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc\": container with ID starting with 877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc not found: ID does not exist" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.634970 4789 scope.go:117] "RemoveContainer" containerID="f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e" Feb 02 21:32:17 crc kubenswrapper[4789]: E0202 21:32:17.635280 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e\": container with ID starting with f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e not found: ID does not exist" containerID="f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.635335 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e"} err="failed to get container status \"f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e\": rpc error: code = NotFound desc = could not find container \"f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e\": container with ID starting with f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e not found: ID does not exist" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.635366 4789 scope.go:117] "RemoveContainer" containerID="05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461" Feb 02 21:32:17 crc kubenswrapper[4789]: E0202 21:32:17.635794 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461\": container with ID starting with 05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461 not found: ID does not exist" 
containerID="05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.635826 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461"} err="failed to get container status \"05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461\": rpc error: code = NotFound desc = could not find container \"05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461\": container with ID starting with 05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461 not found: ID does not exist" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.635851 4789 scope.go:117] "RemoveContainer" containerID="9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea" Feb 02 21:32:17 crc kubenswrapper[4789]: E0202 21:32:17.636150 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea\": container with ID starting with 9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea not found: ID does not exist" containerID="9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.636171 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea"} err="failed to get container status \"9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea\": rpc error: code = NotFound desc = could not find container \"9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea\": container with ID starting with 9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea not found: ID does not exist" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.636187 4789 scope.go:117] "RemoveContainer" containerID="047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364" Feb 02 21:32:17 crc kubenswrapper[4789]: E0202 21:32:17.637149 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364\": container with ID starting with 047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364 not found: ID does not exist" containerID="047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.637178 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364"} err="failed to get container status \"047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364\": rpc error: code = NotFound desc = could not find container \"047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364\": container with ID starting with 047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364 not found: ID does not exist" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.637196 4789 scope.go:117] "RemoveContainer" containerID="29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103" Feb 02 21:32:17 crc kubenswrapper[4789]: E0202 21:32:17.643332 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103\": container with ID starting with 29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103 not found: ID does not exist" containerID="29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.643385 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103"} err="failed to get container status \"29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103\": rpc error: code = NotFound desc = could not find container \"29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103\": container with ID starting with 29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103 not found: ID does not exist" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.643415 4789 scope.go:117] "RemoveContainer" containerID="fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601" Feb 02 21:32:17 crc kubenswrapper[4789]: E0202 21:32:17.643910 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601\": container with ID starting with fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601 not found: ID does not exist" containerID="fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.643942 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601"} err="failed to get container status \"fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601\": rpc error: code = NotFound desc = could not find container \"fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601\": container with ID starting with fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601 not found: ID does not exist" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.643967 4789 scope.go:117] "RemoveContainer" containerID="021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583" Feb 02 21:32:17 crc kubenswrapper[4789]: E0202 21:32:17.644281 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583\": container with ID starting with 021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583 not found: ID does not exist" containerID="021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.644308 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583"} err="failed to get container status \"021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583\": rpc error: code = NotFound desc = could not find container \"021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583\": container with ID starting with 021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583 not found: ID does not exist" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.644329 4789 scope.go:117] "RemoveContainer" containerID="e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355" Feb 02 21:32:17 crc 
kubenswrapper[4789]: E0202 21:32:17.644656 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\": container with ID starting with e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355 not found: ID does not exist" containerID="e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.644690 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355"} err="failed to get container status \"e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\": rpc error: code = NotFound desc = could not find container \"e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\": container with ID starting with e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355 not found: ID does not exist" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.644712 4789 scope.go:117] "RemoveContainer" containerID="7129ba83a6eb1565ad3687f3eceafd29c68b5e8d14512ae91de8f36b552900a7" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.645046 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7129ba83a6eb1565ad3687f3eceafd29c68b5e8d14512ae91de8f36b552900a7"} err="failed to get container status \"7129ba83a6eb1565ad3687f3eceafd29c68b5e8d14512ae91de8f36b552900a7\": rpc error: code = NotFound desc = could not find container \"7129ba83a6eb1565ad3687f3eceafd29c68b5e8d14512ae91de8f36b552900a7\": container with ID starting with 7129ba83a6eb1565ad3687f3eceafd29c68b5e8d14512ae91de8f36b552900a7 not found: ID does not exist" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.645070 4789 scope.go:117] "RemoveContainer" containerID="877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.645423 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc"} err="failed to get container status \"877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc\": rpc error: code = NotFound desc = could not find container \"877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc\": container with ID starting with 877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc not found: ID does not exist" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.645449 4789 scope.go:117] "RemoveContainer" containerID="f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.645788 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e"} err="failed to get container status \"f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e\": rpc error: code = NotFound desc = could not find container \"f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e\": container with ID starting with f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e not found: ID does not exist" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.645811 4789 scope.go:117] "RemoveContainer" containerID="05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461" Feb 02 21:32:17 crc 
kubenswrapper[4789]: I0202 21:32:17.646410 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461"} err="failed to get container status \"05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461\": rpc error: code = NotFound desc = could not find container \"05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461\": container with ID starting with 05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461 not found: ID does not exist" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.646437 4789 scope.go:117] "RemoveContainer" containerID="9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.646784 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea"} err="failed to get container status \"9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea\": rpc error: code = NotFound desc = could not find container \"9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea\": container with ID starting with 9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea not found: ID does not exist" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.646813 4789 scope.go:117] "RemoveContainer" containerID="047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.647129 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364"} err="failed to get container status \"047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364\": rpc error: code = NotFound desc = could not find container \"047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364\": container with ID starting with 047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364 not found: ID does not exist" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.647154 4789 scope.go:117] "RemoveContainer" containerID="29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.647458 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103"} err="failed to get container status \"29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103\": rpc error: code = NotFound desc = could not find container \"29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103\": container with ID starting with 29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103 not found: ID does not exist" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.647483 4789 scope.go:117] "RemoveContainer" containerID="fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.647932 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601"} err="failed to get container status \"fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601\": rpc error: code = NotFound desc = could not find container \"fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601\": container with ID 
starting with fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601 not found: ID does not exist" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.647962 4789 scope.go:117] "RemoveContainer" containerID="021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.648403 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583"} err="failed to get container status \"021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583\": rpc error: code = NotFound desc = could not find container \"021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583\": container with ID starting with 021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583 not found: ID does not exist" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.648430 4789 scope.go:117] "RemoveContainer" containerID="e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.648810 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355"} err="failed to get container status \"e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\": rpc error: code = NotFound desc = could not find container \"e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\": container with ID starting with e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355 not found: ID does not exist" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.648837 4789 scope.go:117] "RemoveContainer" containerID="7129ba83a6eb1565ad3687f3eceafd29c68b5e8d14512ae91de8f36b552900a7" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.649233 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7129ba83a6eb1565ad3687f3eceafd29c68b5e8d14512ae91de8f36b552900a7"} err="failed to get container status \"7129ba83a6eb1565ad3687f3eceafd29c68b5e8d14512ae91de8f36b552900a7\": rpc error: code = NotFound desc = could not find container \"7129ba83a6eb1565ad3687f3eceafd29c68b5e8d14512ae91de8f36b552900a7\": container with ID starting with 7129ba83a6eb1565ad3687f3eceafd29c68b5e8d14512ae91de8f36b552900a7 not found: ID does not exist" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.649259 4789 scope.go:117] "RemoveContainer" containerID="877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.649661 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc"} err="failed to get container status \"877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc\": rpc error: code = NotFound desc = could not find container \"877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc\": container with ID starting with 877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc not found: ID does not exist" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.649686 4789 scope.go:117] "RemoveContainer" containerID="f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.649926 4789 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e"} err="failed to get container status \"f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e\": rpc error: code = NotFound desc = could not find container \"f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e\": container with ID starting with f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e not found: ID does not exist" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.649946 4789 scope.go:117] "RemoveContainer" containerID="05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.650155 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461"} err="failed to get container status \"05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461\": rpc error: code = NotFound desc = could not find container \"05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461\": container with ID starting with 05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461 not found: ID does not exist" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.650177 4789 scope.go:117] "RemoveContainer" containerID="9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.650431 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea"} err="failed to get container status \"9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea\": rpc error: code = NotFound desc = could not find container \"9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea\": container with ID starting with 9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea not found: ID does not exist" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.650455 4789 scope.go:117] "RemoveContainer" containerID="047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.650672 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364"} err="failed to get container status \"047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364\": rpc error: code = NotFound desc = could not find container \"047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364\": container with ID starting with 047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364 not found: ID does not exist" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.650712 4789 scope.go:117] "RemoveContainer" containerID="29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.650946 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103"} err="failed to get container status \"29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103\": rpc error: code = NotFound desc = could not find container \"29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103\": container with ID starting with 29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103 not found: ID does not exist" Feb 
Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.650980 4789 scope.go:117] "RemoveContainer" containerID="fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601"
Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.651253 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601"} err="failed to get container status \"fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601\": rpc error: code = NotFound desc = could not find container \"fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601\": container with ID starting with fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601 not found: ID does not exist"
Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.651298 4789 scope.go:117] "RemoveContainer" containerID="021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583"
Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.651742 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583"} err="failed to get container status \"021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583\": rpc error: code = NotFound desc = could not find container \"021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583\": container with ID starting with 021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583 not found: ID does not exist"
Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.651768 4789 scope.go:117] "RemoveContainer" containerID="e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355"
Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.652014 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355"} err="failed to get container status \"e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\": rpc error: code = NotFound desc = could not find container \"e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\": container with ID starting with e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355 not found: ID does not exist"
Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.652046 4789 scope.go:117] "RemoveContainer" containerID="7129ba83a6eb1565ad3687f3eceafd29c68b5e8d14512ae91de8f36b552900a7"
Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.652441 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7129ba83a6eb1565ad3687f3eceafd29c68b5e8d14512ae91de8f36b552900a7"} err="failed to get container status \"7129ba83a6eb1565ad3687f3eceafd29c68b5e8d14512ae91de8f36b552900a7\": rpc error: code = NotFound desc = could not find container \"7129ba83a6eb1565ad3687f3eceafd29c68b5e8d14512ae91de8f36b552900a7\": container with ID starting with 7129ba83a6eb1565ad3687f3eceafd29c68b5e8d14512ae91de8f36b552900a7 not found: ID does not exist"
Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.652462 4789 scope.go:117] "RemoveContainer" containerID="877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc"
\"877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc\": rpc error: code = NotFound desc = could not find container \"877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc\": container with ID starting with 877f8c63dfc6ee1e69f7f0bc64e5d2896384f737678efb01ad9d27b9030a5fcc not found: ID does not exist" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.652741 4789 scope.go:117] "RemoveContainer" containerID="f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.653248 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e"} err="failed to get container status \"f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e\": rpc error: code = NotFound desc = could not find container \"f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e\": container with ID starting with f30344b8366cf5ef712d971dc4d8d552e7b50c1ecea8d2ace88bb80fce62231e not found: ID does not exist" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.653275 4789 scope.go:117] "RemoveContainer" containerID="05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.653554 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461"} err="failed to get container status \"05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461\": rpc error: code = NotFound desc = could not find container \"05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461\": container with ID starting with 05637515d4c19e054ce43cc23bb6d8e32c4752282dab08786fff3062eb976461 not found: ID does not exist" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.653583 4789 scope.go:117] "RemoveContainer" containerID="9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.653846 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea"} err="failed to get container status \"9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea\": rpc error: code = NotFound desc = could not find container \"9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea\": container with ID starting with 9f05a85f44fd6a7b9efa0f591ec0afebd7818a80b132ec3dddd0faa22c380eea not found: ID does not exist" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.653863 4789 scope.go:117] "RemoveContainer" containerID="047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.654064 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364"} err="failed to get container status \"047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364\": rpc error: code = NotFound desc = could not find container \"047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364\": container with ID starting with 047a98525fa1b06be5e6f3bab6b417973a13bb32df7fad26ca90b4558ee4e364 not found: ID does not exist" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.654083 4789 scope.go:117] "RemoveContainer" 
containerID="29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.654289 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103"} err="failed to get container status \"29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103\": rpc error: code = NotFound desc = could not find container \"29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103\": container with ID starting with 29dad69c19f9411676fab81684dd613d866c494df1e333b46bfff35b98430103 not found: ID does not exist" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.654305 4789 scope.go:117] "RemoveContainer" containerID="fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.655274 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601"} err="failed to get container status \"fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601\": rpc error: code = NotFound desc = could not find container \"fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601\": container with ID starting with fd508dcd4022163f662ad27cf3ffbc4cca551d4020809ca5fac60cee8da80601 not found: ID does not exist" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.655304 4789 scope.go:117] "RemoveContainer" containerID="021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.655662 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583"} err="failed to get container status \"021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583\": rpc error: code = NotFound desc = could not find container \"021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583\": container with ID starting with 021453f1c74b708ad3f39c2defa2664f098a552f3e22315479c483d24110e583 not found: ID does not exist" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.655690 4789 scope.go:117] "RemoveContainer" containerID="e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355" Feb 02 21:32:17 crc kubenswrapper[4789]: I0202 21:32:17.656007 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355"} err="failed to get container status \"e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\": rpc error: code = NotFound desc = could not find container \"e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355\": container with ID starting with e90a8e698f2e9f30a596864d0d038bfad686cca1a56b1a8bc72fab5fe594f355 not found: ID does not exist" Feb 02 21:32:18 crc kubenswrapper[4789]: I0202 21:32:18.399025 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2x5ws_70a32268-2a2d-47f3-9fc6-4281b8dc6a02/kube-multus/2.log" Feb 02 21:32:18 crc kubenswrapper[4789]: I0202 21:32:18.401373 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2x5ws" event={"ID":"70a32268-2a2d-47f3-9fc6-4281b8dc6a02","Type":"ContainerStarted","Data":"96ef7654d7134d1d32fdca7f20c77bb2c0cfc2b4902e8745447f5a98ccab2dbf"} Feb 02 21:32:18 crc kubenswrapper[4789]: I0202 
Feb 02 21:32:18 crc kubenswrapper[4789]: I0202 21:32:18.404630 4789 generic.go:334] "Generic (PLEG): container finished" podID="947c802f-a386-4041-8752-44f83a964cd1" containerID="71b4e1cf827d31ee49f76ab28122ad2950d6a9c7ebdeb7486372d566fd3a0696" exitCode=0
Feb 02 21:32:18 crc kubenswrapper[4789]: I0202 21:32:18.404762 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" event={"ID":"947c802f-a386-4041-8752-44f83a964cd1","Type":"ContainerDied","Data":"71b4e1cf827d31ee49f76ab28122ad2950d6a9c7ebdeb7486372d566fd3a0696"}
Feb 02 21:32:18 crc kubenswrapper[4789]: I0202 21:32:18.442923 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6" path="/var/lib/kubelet/pods/2e38c22e-bcd6-4aa8-89e3-b02b691c8fd6/volumes"
Feb 02 21:32:19 crc kubenswrapper[4789]: I0202 21:32:19.416298 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" event={"ID":"947c802f-a386-4041-8752-44f83a964cd1","Type":"ContainerStarted","Data":"88665b81cef95d9394f9516914c19493626fcce04780da770d6525af507a5275"}
Feb 02 21:32:19 crc kubenswrapper[4789]: I0202 21:32:19.416903 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" event={"ID":"947c802f-a386-4041-8752-44f83a964cd1","Type":"ContainerStarted","Data":"1620e2246d8e4f666c246e07f0d70d007b5044e695452f9e912c8255c245d7c8"}
Feb 02 21:32:19 crc kubenswrapper[4789]: I0202 21:32:19.416918 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" event={"ID":"947c802f-a386-4041-8752-44f83a964cd1","Type":"ContainerStarted","Data":"1c69aa026ac166a63c49777e92fb6e1d8acbf0938f2067164fb5064b16168e74"}
Feb 02 21:32:19 crc kubenswrapper[4789]: I0202 21:32:19.416930 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" event={"ID":"947c802f-a386-4041-8752-44f83a964cd1","Type":"ContainerStarted","Data":"602954c8af1390b1d8d3d09a1be0b5bdf1b5380467779eb13f4757195d41bb24"}
Feb 02 21:32:19 crc kubenswrapper[4789]: I0202 21:32:19.416941 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" event={"ID":"947c802f-a386-4041-8752-44f83a964cd1","Type":"ContainerStarted","Data":"0f96cd9d04286ae0e2c06a83b0c4d42906862968129f9e67e6b62376597d4372"}
Feb 02 21:32:19 crc kubenswrapper[4789]: I0202 21:32:19.416952 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" event={"ID":"947c802f-a386-4041-8752-44f83a964cd1","Type":"ContainerStarted","Data":"5cfb0ff9bbf5a7ac37512edf20857ca14aa1eaef0b9e15c212f5286d5da7d90a"}
Feb 02 21:32:19 crc kubenswrapper[4789]: I0202 21:32:19.805048 4789 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 02 21:32:21 crc kubenswrapper[4789]: I0202 21:32:21.441459 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" event={"ID":"947c802f-a386-4041-8752-44f83a964cd1","Type":"ContainerStarted","Data":"f4f7082bf3b11b200af51576f20cd62a6177ddd1c82c5ca2f7778b9607fca9c1"}
Feb 02 21:32:24 crc kubenswrapper[4789]: I0202 21:32:24.117689 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-ggq8b"]
Feb 02 21:32:24 crc kubenswrapper[4789]: I0202 21:32:24.119141 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-ggq8b"
Feb 02 21:32:24 crc kubenswrapper[4789]: I0202 21:32:24.121111 4789 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-xrcmg"
Feb 02 21:32:24 crc kubenswrapper[4789]: I0202 21:32:24.121112 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt"
Feb 02 21:32:24 crc kubenswrapper[4789]: I0202 21:32:24.121195 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage"
Feb 02 21:32:24 crc kubenswrapper[4789]: I0202 21:32:24.121238 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt"
Feb 02 21:32:24 crc kubenswrapper[4789]: I0202 21:32:24.226788 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/52782065-ac4f-47c2-9c7e-fd883fba3d65-node-mnt\") pod \"crc-storage-crc-ggq8b\" (UID: \"52782065-ac4f-47c2-9c7e-fd883fba3d65\") " pod="crc-storage/crc-storage-crc-ggq8b"
Feb 02 21:32:24 crc kubenswrapper[4789]: I0202 21:32:24.226843 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh8nf\" (UniqueName: \"kubernetes.io/projected/52782065-ac4f-47c2-9c7e-fd883fba3d65-kube-api-access-hh8nf\") pod \"crc-storage-crc-ggq8b\" (UID: \"52782065-ac4f-47c2-9c7e-fd883fba3d65\") " pod="crc-storage/crc-storage-crc-ggq8b"
Feb 02 21:32:24 crc kubenswrapper[4789]: I0202 21:32:24.226957 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/52782065-ac4f-47c2-9c7e-fd883fba3d65-crc-storage\") pod \"crc-storage-crc-ggq8b\" (UID: \"52782065-ac4f-47c2-9c7e-fd883fba3d65\") " pod="crc-storage/crc-storage-crc-ggq8b"
Feb 02 21:32:24 crc kubenswrapper[4789]: I0202 21:32:24.328249 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/52782065-ac4f-47c2-9c7e-fd883fba3d65-crc-storage\") pod \"crc-storage-crc-ggq8b\" (UID: \"52782065-ac4f-47c2-9c7e-fd883fba3d65\") " pod="crc-storage/crc-storage-crc-ggq8b"
Feb 02 21:32:24 crc kubenswrapper[4789]: I0202 21:32:24.328399 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/52782065-ac4f-47c2-9c7e-fd883fba3d65-node-mnt\") pod \"crc-storage-crc-ggq8b\" (UID: \"52782065-ac4f-47c2-9c7e-fd883fba3d65\") " pod="crc-storage/crc-storage-crc-ggq8b"
Feb 02 21:32:24 crc kubenswrapper[4789]: I0202 21:32:24.328453 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh8nf\" (UniqueName: \"kubernetes.io/projected/52782065-ac4f-47c2-9c7e-fd883fba3d65-kube-api-access-hh8nf\") pod \"crc-storage-crc-ggq8b\" (UID: \"52782065-ac4f-47c2-9c7e-fd883fba3d65\") " pod="crc-storage/crc-storage-crc-ggq8b"
Feb 02 21:32:24 crc kubenswrapper[4789]: I0202 21:32:24.329286 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/52782065-ac4f-47c2-9c7e-fd883fba3d65-node-mnt\") pod \"crc-storage-crc-ggq8b\" (UID: \"52782065-ac4f-47c2-9c7e-fd883fba3d65\") " pod="crc-storage/crc-storage-crc-ggq8b"
\"crc-storage\" (UniqueName: \"kubernetes.io/configmap/52782065-ac4f-47c2-9c7e-fd883fba3d65-crc-storage\") pod \"crc-storage-crc-ggq8b\" (UID: \"52782065-ac4f-47c2-9c7e-fd883fba3d65\") " pod="crc-storage/crc-storage-crc-ggq8b" Feb 02 21:32:24 crc kubenswrapper[4789]: I0202 21:32:24.349827 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh8nf\" (UniqueName: \"kubernetes.io/projected/52782065-ac4f-47c2-9c7e-fd883fba3d65-kube-api-access-hh8nf\") pod \"crc-storage-crc-ggq8b\" (UID: \"52782065-ac4f-47c2-9c7e-fd883fba3d65\") " pod="crc-storage/crc-storage-crc-ggq8b" Feb 02 21:32:24 crc kubenswrapper[4789]: I0202 21:32:24.435227 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-ggq8b" Feb 02 21:32:24 crc kubenswrapper[4789]: E0202 21:32:24.463543 4789 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-ggq8b_crc-storage_52782065-ac4f-47c2-9c7e-fd883fba3d65_0(063a9e327551ed2d350e0d08982128c7d2b943096c325845db5b2ae94697e597): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 21:32:24 crc kubenswrapper[4789]: E0202 21:32:24.463607 4789 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-ggq8b_crc-storage_52782065-ac4f-47c2-9c7e-fd883fba3d65_0(063a9e327551ed2d350e0d08982128c7d2b943096c325845db5b2ae94697e597): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-ggq8b" Feb 02 21:32:24 crc kubenswrapper[4789]: E0202 21:32:24.463625 4789 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-ggq8b_crc-storage_52782065-ac4f-47c2-9c7e-fd883fba3d65_0(063a9e327551ed2d350e0d08982128c7d2b943096c325845db5b2ae94697e597): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-ggq8b" Feb 02 21:32:24 crc kubenswrapper[4789]: E0202 21:32:24.463662 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-ggq8b_crc-storage(52782065-ac4f-47c2-9c7e-fd883fba3d65)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-ggq8b_crc-storage(52782065-ac4f-47c2-9c7e-fd883fba3d65)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-ggq8b_crc-storage_52782065-ac4f-47c2-9c7e-fd883fba3d65_0(063a9e327551ed2d350e0d08982128c7d2b943096c325845db5b2ae94697e597): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 21:32:24 crc kubenswrapper[4789]: E0202 21:32:24.463662 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-ggq8b_crc-storage(52782065-ac4f-47c2-9c7e-fd883fba3d65)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-ggq8b_crc-storage(52782065-ac4f-47c2-9c7e-fd883fba3d65)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-ggq8b_crc-storage_52782065-ac4f-47c2-9c7e-fd883fba3d65_0(063a9e327551ed2d350e0d08982128c7d2b943096c325845db5b2ae94697e597): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-ggq8b" podUID="52782065-ac4f-47c2-9c7e-fd883fba3d65"
Feb 02 21:32:24 crc kubenswrapper[4789]: I0202 21:32:24.464572 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" event={"ID":"947c802f-a386-4041-8752-44f83a964cd1","Type":"ContainerStarted","Data":"6570e98d8feac982e9c683c20040b4efdbe5d176ee27511bd354729b95d8c933"}
Feb 02 21:32:24 crc kubenswrapper[4789]: I0202 21:32:24.465289 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8svlp"
Feb 02 21:32:24 crc kubenswrapper[4789]: I0202 21:32:24.465315 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8svlp"
Feb 02 21:32:24 crc kubenswrapper[4789]: I0202 21:32:24.499127 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8svlp"
Feb 02 21:32:24 crc kubenswrapper[4789]: I0202 21:32:24.531821 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" podStartSLOduration=8.531807926 podStartE2EDuration="8.531807926s" podCreationTimestamp="2026-02-02 21:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:32:24.490763465 +0000 UTC m=+764.785788484" watchObservedRunningTime="2026-02-02 21:32:24.531807926 +0000 UTC m=+764.826832945"
Feb 02 21:32:25 crc kubenswrapper[4789]: I0202 21:32:25.031691 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-ggq8b"]
Feb 02 21:32:25 crc kubenswrapper[4789]: I0202 21:32:25.477361 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-ggq8b"
Feb 02 21:32:25 crc kubenswrapper[4789]: I0202 21:32:25.477754 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8svlp"
Feb 02 21:32:25 crc kubenswrapper[4789]: I0202 21:32:25.479257 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-ggq8b"
Feb 02 21:32:25 crc kubenswrapper[4789]: E0202 21:32:25.535684 4789 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-ggq8b_crc-storage_52782065-ac4f-47c2-9c7e-fd883fba3d65_0(b095cbf1dc79fcfdeb3f7a9f0537c132feab43a7e550512bd8669c412bc5e4fa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="crc-storage/crc-storage-crc-ggq8b" Feb 02 21:32:25 crc kubenswrapper[4789]: E0202 21:32:25.535785 4789 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-ggq8b_crc-storage_52782065-ac4f-47c2-9c7e-fd883fba3d65_0(b095cbf1dc79fcfdeb3f7a9f0537c132feab43a7e550512bd8669c412bc5e4fa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-ggq8b" Feb 02 21:32:25 crc kubenswrapper[4789]: E0202 21:32:25.535844 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-ggq8b_crc-storage(52782065-ac4f-47c2-9c7e-fd883fba3d65)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-ggq8b_crc-storage(52782065-ac4f-47c2-9c7e-fd883fba3d65)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-ggq8b_crc-storage_52782065-ac4f-47c2-9c7e-fd883fba3d65_0(b095cbf1dc79fcfdeb3f7a9f0537c132feab43a7e550512bd8669c412bc5e4fa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-ggq8b" podUID="52782065-ac4f-47c2-9c7e-fd883fba3d65" Feb 02 21:32:25 crc kubenswrapper[4789]: I0202 21:32:25.542882 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:39 crc kubenswrapper[4789]: I0202 21:32:39.419299 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-ggq8b" Feb 02 21:32:39 crc kubenswrapper[4789]: I0202 21:32:39.420864 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-ggq8b" Feb 02 21:32:39 crc kubenswrapper[4789]: I0202 21:32:39.693297 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-ggq8b"] Feb 02 21:32:39 crc kubenswrapper[4789]: W0202 21:32:39.702925 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52782065_ac4f_47c2_9c7e_fd883fba3d65.slice/crio-e34f424d6f3f4fe05c4f728cc13be3fbed31a80fd4f321a3b1b36b985b5eba79 WatchSource:0}: Error finding container e34f424d6f3f4fe05c4f728cc13be3fbed31a80fd4f321a3b1b36b985b5eba79: Status 404 returned error can't find the container with id e34f424d6f3f4fe05c4f728cc13be3fbed31a80fd4f321a3b1b36b985b5eba79 Feb 02 21:32:39 crc kubenswrapper[4789]: I0202 21:32:39.706253 4789 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 21:32:40 crc kubenswrapper[4789]: I0202 21:32:40.591000 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-ggq8b" event={"ID":"52782065-ac4f-47c2-9c7e-fd883fba3d65","Type":"ContainerStarted","Data":"e34f424d6f3f4fe05c4f728cc13be3fbed31a80fd4f321a3b1b36b985b5eba79"} Feb 02 21:32:42 crc kubenswrapper[4789]: I0202 21:32:42.609802 4789 generic.go:334] "Generic (PLEG): container finished" podID="52782065-ac4f-47c2-9c7e-fd883fba3d65" containerID="b9abf8df79433fbb9c146f9fb3a16666d748aadc6550e5f4e2dae2aeacb72a0a" exitCode=0 Feb 02 21:32:42 crc kubenswrapper[4789]: I0202 21:32:42.609890 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-ggq8b" 
event={"ID":"52782065-ac4f-47c2-9c7e-fd883fba3d65","Type":"ContainerDied","Data":"b9abf8df79433fbb9c146f9fb3a16666d748aadc6550e5f4e2dae2aeacb72a0a"} Feb 02 21:32:43 crc kubenswrapper[4789]: I0202 21:32:43.934176 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-ggq8b" Feb 02 21:32:44 crc kubenswrapper[4789]: I0202 21:32:44.014667 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/52782065-ac4f-47c2-9c7e-fd883fba3d65-node-mnt\") pod \"52782065-ac4f-47c2-9c7e-fd883fba3d65\" (UID: \"52782065-ac4f-47c2-9c7e-fd883fba3d65\") " Feb 02 21:32:44 crc kubenswrapper[4789]: I0202 21:32:44.014845 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/52782065-ac4f-47c2-9c7e-fd883fba3d65-crc-storage\") pod \"52782065-ac4f-47c2-9c7e-fd883fba3d65\" (UID: \"52782065-ac4f-47c2-9c7e-fd883fba3d65\") " Feb 02 21:32:44 crc kubenswrapper[4789]: I0202 21:32:44.015043 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hh8nf\" (UniqueName: \"kubernetes.io/projected/52782065-ac4f-47c2-9c7e-fd883fba3d65-kube-api-access-hh8nf\") pod \"52782065-ac4f-47c2-9c7e-fd883fba3d65\" (UID: \"52782065-ac4f-47c2-9c7e-fd883fba3d65\") " Feb 02 21:32:44 crc kubenswrapper[4789]: I0202 21:32:44.015149 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52782065-ac4f-47c2-9c7e-fd883fba3d65-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "52782065-ac4f-47c2-9c7e-fd883fba3d65" (UID: "52782065-ac4f-47c2-9c7e-fd883fba3d65"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:32:44 crc kubenswrapper[4789]: I0202 21:32:44.015872 4789 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/52782065-ac4f-47c2-9c7e-fd883fba3d65-node-mnt\") on node \"crc\" DevicePath \"\"" Feb 02 21:32:44 crc kubenswrapper[4789]: I0202 21:32:44.026858 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52782065-ac4f-47c2-9c7e-fd883fba3d65-kube-api-access-hh8nf" (OuterVolumeSpecName: "kube-api-access-hh8nf") pod "52782065-ac4f-47c2-9c7e-fd883fba3d65" (UID: "52782065-ac4f-47c2-9c7e-fd883fba3d65"). InnerVolumeSpecName "kube-api-access-hh8nf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:32:44 crc kubenswrapper[4789]: I0202 21:32:44.045052 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52782065-ac4f-47c2-9c7e-fd883fba3d65-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "52782065-ac4f-47c2-9c7e-fd883fba3d65" (UID: "52782065-ac4f-47c2-9c7e-fd883fba3d65"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:32:44 crc kubenswrapper[4789]: I0202 21:32:44.118016 4789 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/52782065-ac4f-47c2-9c7e-fd883fba3d65-crc-storage\") on node \"crc\" DevicePath \"\"" Feb 02 21:32:44 crc kubenswrapper[4789]: I0202 21:32:44.118089 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hh8nf\" (UniqueName: \"kubernetes.io/projected/52782065-ac4f-47c2-9c7e-fd883fba3d65-kube-api-access-hh8nf\") on node \"crc\" DevicePath \"\"" Feb 02 21:32:44 crc kubenswrapper[4789]: I0202 21:32:44.623224 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-ggq8b" event={"ID":"52782065-ac4f-47c2-9c7e-fd883fba3d65","Type":"ContainerDied","Data":"e34f424d6f3f4fe05c4f728cc13be3fbed31a80fd4f321a3b1b36b985b5eba79"} Feb 02 21:32:44 crc kubenswrapper[4789]: I0202 21:32:44.623257 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e34f424d6f3f4fe05c4f728cc13be3fbed31a80fd4f321a3b1b36b985b5eba79" Feb 02 21:32:44 crc kubenswrapper[4789]: I0202 21:32:44.623278 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-ggq8b" Feb 02 21:32:47 crc kubenswrapper[4789]: I0202 21:32:47.342573 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8svlp" Feb 02 21:32:52 crc kubenswrapper[4789]: I0202 21:32:52.188070 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138c54z"] Feb 02 21:32:52 crc kubenswrapper[4789]: E0202 21:32:52.188711 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52782065-ac4f-47c2-9c7e-fd883fba3d65" containerName="storage" Feb 02 21:32:52 crc kubenswrapper[4789]: I0202 21:32:52.188735 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="52782065-ac4f-47c2-9c7e-fd883fba3d65" containerName="storage" Feb 02 21:32:52 crc kubenswrapper[4789]: I0202 21:32:52.189025 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="52782065-ac4f-47c2-9c7e-fd883fba3d65" containerName="storage" Feb 02 21:32:52 crc kubenswrapper[4789]: I0202 21:32:52.190453 4789 util.go:30] "No sandbox for pod can be found. 
Feb 02 21:32:52 crc kubenswrapper[4789]: I0202 21:32:52.190453 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138c54z"
Feb 02 21:32:52 crc kubenswrapper[4789]: I0202 21:32:52.192576 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 02 21:32:52 crc kubenswrapper[4789]: I0202 21:32:52.202135 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138c54z"]
Feb 02 21:32:52 crc kubenswrapper[4789]: I0202 21:32:52.327554 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmhg9\" (UniqueName: \"kubernetes.io/projected/645a95dc-67cd-4eb7-9273-70ff5fea3a01-kube-api-access-lmhg9\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138c54z\" (UID: \"645a95dc-67cd-4eb7-9273-70ff5fea3a01\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138c54z"
Feb 02 21:32:52 crc kubenswrapper[4789]: I0202 21:32:52.327764 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/645a95dc-67cd-4eb7-9273-70ff5fea3a01-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138c54z\" (UID: \"645a95dc-67cd-4eb7-9273-70ff5fea3a01\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138c54z"
Feb 02 21:32:52 crc kubenswrapper[4789]: I0202 21:32:52.327829 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/645a95dc-67cd-4eb7-9273-70ff5fea3a01-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138c54z\" (UID: \"645a95dc-67cd-4eb7-9273-70ff5fea3a01\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138c54z"
Feb 02 21:32:52 crc kubenswrapper[4789]: I0202 21:32:52.429318 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/645a95dc-67cd-4eb7-9273-70ff5fea3a01-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138c54z\" (UID: \"645a95dc-67cd-4eb7-9273-70ff5fea3a01\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138c54z"
Feb 02 21:32:52 crc kubenswrapper[4789]: I0202 21:32:52.429868 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmhg9\" (UniqueName: \"kubernetes.io/projected/645a95dc-67cd-4eb7-9273-70ff5fea3a01-kube-api-access-lmhg9\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138c54z\" (UID: \"645a95dc-67cd-4eb7-9273-70ff5fea3a01\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138c54z"
Feb 02 21:32:52 crc kubenswrapper[4789]: I0202 21:32:52.430207 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/645a95dc-67cd-4eb7-9273-70ff5fea3a01-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138c54z\" (UID: \"645a95dc-67cd-4eb7-9273-70ff5fea3a01\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138c54z"
\"kubernetes.io/empty-dir/645a95dc-67cd-4eb7-9273-70ff5fea3a01-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138c54z\" (UID: \"645a95dc-67cd-4eb7-9273-70ff5fea3a01\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138c54z" Feb 02 21:32:52 crc kubenswrapper[4789]: I0202 21:32:52.431011 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/645a95dc-67cd-4eb7-9273-70ff5fea3a01-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138c54z\" (UID: \"645a95dc-67cd-4eb7-9273-70ff5fea3a01\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138c54z" Feb 02 21:32:52 crc kubenswrapper[4789]: I0202 21:32:52.464951 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmhg9\" (UniqueName: \"kubernetes.io/projected/645a95dc-67cd-4eb7-9273-70ff5fea3a01-kube-api-access-lmhg9\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138c54z\" (UID: \"645a95dc-67cd-4eb7-9273-70ff5fea3a01\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138c54z" Feb 02 21:32:52 crc kubenswrapper[4789]: I0202 21:32:52.527352 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138c54z" Feb 02 21:32:52 crc kubenswrapper[4789]: I0202 21:32:52.841512 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 21:32:52 crc kubenswrapper[4789]: I0202 21:32:52.841611 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 21:32:53 crc kubenswrapper[4789]: I0202 21:32:53.015902 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138c54z"] Feb 02 21:32:53 crc kubenswrapper[4789]: W0202 21:32:53.031183 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod645a95dc_67cd_4eb7_9273_70ff5fea3a01.slice/crio-473b8c36c07ddac6b27c99f856390b18c97292034e8c696fe6cc852236a2771b WatchSource:0}: Error finding container 473b8c36c07ddac6b27c99f856390b18c97292034e8c696fe6cc852236a2771b: Status 404 returned error can't find the container with id 473b8c36c07ddac6b27c99f856390b18c97292034e8c696fe6cc852236a2771b Feb 02 21:32:53 crc kubenswrapper[4789]: I0202 21:32:53.683682 4789 generic.go:334] "Generic (PLEG): container finished" podID="645a95dc-67cd-4eb7-9273-70ff5fea3a01" containerID="2cebd16ed9e0c37dc72ad54b62657739ee9679ae645d4e04df6c7a975f7e405c" exitCode=0 Feb 02 21:32:53 crc kubenswrapper[4789]: I0202 21:32:53.683813 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138c54z" event={"ID":"645a95dc-67cd-4eb7-9273-70ff5fea3a01","Type":"ContainerDied","Data":"2cebd16ed9e0c37dc72ad54b62657739ee9679ae645d4e04df6c7a975f7e405c"} Feb 02 
Feb 02 21:32:53 crc kubenswrapper[4789]: I0202 21:32:53.684116 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138c54z" event={"ID":"645a95dc-67cd-4eb7-9273-70ff5fea3a01","Type":"ContainerStarted","Data":"473b8c36c07ddac6b27c99f856390b18c97292034e8c696fe6cc852236a2771b"}
Feb 02 21:32:54 crc kubenswrapper[4789]: I0202 21:32:54.525261 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xl65p"]
Feb 02 21:32:54 crc kubenswrapper[4789]: I0202 21:32:54.527012 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xl65p"
Feb 02 21:32:54 crc kubenswrapper[4789]: I0202 21:32:54.549530 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xl65p"]
Feb 02 21:32:54 crc kubenswrapper[4789]: I0202 21:32:54.657322 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc2qs\" (UniqueName: \"kubernetes.io/projected/cb2feeb5-33bb-403a-8e99-a4c544c69c0c-kube-api-access-pc2qs\") pod \"redhat-operators-xl65p\" (UID: \"cb2feeb5-33bb-403a-8e99-a4c544c69c0c\") " pod="openshift-marketplace/redhat-operators-xl65p"
Feb 02 21:32:54 crc kubenswrapper[4789]: I0202 21:32:54.658287 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb2feeb5-33bb-403a-8e99-a4c544c69c0c-utilities\") pod \"redhat-operators-xl65p\" (UID: \"cb2feeb5-33bb-403a-8e99-a4c544c69c0c\") " pod="openshift-marketplace/redhat-operators-xl65p"
Feb 02 21:32:54 crc kubenswrapper[4789]: I0202 21:32:54.658483 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb2feeb5-33bb-403a-8e99-a4c544c69c0c-catalog-content\") pod \"redhat-operators-xl65p\" (UID: \"cb2feeb5-33bb-403a-8e99-a4c544c69c0c\") " pod="openshift-marketplace/redhat-operators-xl65p"
Feb 02 21:32:54 crc kubenswrapper[4789]: I0202 21:32:54.759295 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc2qs\" (UniqueName: \"kubernetes.io/projected/cb2feeb5-33bb-403a-8e99-a4c544c69c0c-kube-api-access-pc2qs\") pod \"redhat-operators-xl65p\" (UID: \"cb2feeb5-33bb-403a-8e99-a4c544c69c0c\") " pod="openshift-marketplace/redhat-operators-xl65p"
Feb 02 21:32:54 crc kubenswrapper[4789]: I0202 21:32:54.759380 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb2feeb5-33bb-403a-8e99-a4c544c69c0c-utilities\") pod \"redhat-operators-xl65p\" (UID: \"cb2feeb5-33bb-403a-8e99-a4c544c69c0c\") " pod="openshift-marketplace/redhat-operators-xl65p"
Feb 02 21:32:54 crc kubenswrapper[4789]: I0202 21:32:54.759411 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb2feeb5-33bb-403a-8e99-a4c544c69c0c-catalog-content\") pod \"redhat-operators-xl65p\" (UID: \"cb2feeb5-33bb-403a-8e99-a4c544c69c0c\") " pod="openshift-marketplace/redhat-operators-xl65p"
\"redhat-operators-xl65p\" (UID: \"cb2feeb5-33bb-403a-8e99-a4c544c69c0c\") " pod="openshift-marketplace/redhat-operators-xl65p" Feb 02 21:32:54 crc kubenswrapper[4789]: I0202 21:32:54.760311 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb2feeb5-33bb-403a-8e99-a4c544c69c0c-utilities\") pod \"redhat-operators-xl65p\" (UID: \"cb2feeb5-33bb-403a-8e99-a4c544c69c0c\") " pod="openshift-marketplace/redhat-operators-xl65p" Feb 02 21:32:54 crc kubenswrapper[4789]: I0202 21:32:54.790477 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc2qs\" (UniqueName: \"kubernetes.io/projected/cb2feeb5-33bb-403a-8e99-a4c544c69c0c-kube-api-access-pc2qs\") pod \"redhat-operators-xl65p\" (UID: \"cb2feeb5-33bb-403a-8e99-a4c544c69c0c\") " pod="openshift-marketplace/redhat-operators-xl65p" Feb 02 21:32:54 crc kubenswrapper[4789]: I0202 21:32:54.860818 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xl65p" Feb 02 21:32:55 crc kubenswrapper[4789]: I0202 21:32:55.043234 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xl65p"] Feb 02 21:32:55 crc kubenswrapper[4789]: W0202 21:32:55.050765 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb2feeb5_33bb_403a_8e99_a4c544c69c0c.slice/crio-bed6a7dd77e4dfe506c26f480034e4ce379afc7098e062127422f0fc5f948553 WatchSource:0}: Error finding container bed6a7dd77e4dfe506c26f480034e4ce379afc7098e062127422f0fc5f948553: Status 404 returned error can't find the container with id bed6a7dd77e4dfe506c26f480034e4ce379afc7098e062127422f0fc5f948553 Feb 02 21:32:55 crc kubenswrapper[4789]: I0202 21:32:55.697537 4789 generic.go:334] "Generic (PLEG): container finished" podID="645a95dc-67cd-4eb7-9273-70ff5fea3a01" containerID="9d5edd9f7d8bb763e808023fc16edeaf382e53891fba0914bb493860a9aaaab7" exitCode=0 Feb 02 21:32:55 crc kubenswrapper[4789]: I0202 21:32:55.697590 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138c54z" event={"ID":"645a95dc-67cd-4eb7-9273-70ff5fea3a01","Type":"ContainerDied","Data":"9d5edd9f7d8bb763e808023fc16edeaf382e53891fba0914bb493860a9aaaab7"} Feb 02 21:32:55 crc kubenswrapper[4789]: I0202 21:32:55.699261 4789 generic.go:334] "Generic (PLEG): container finished" podID="cb2feeb5-33bb-403a-8e99-a4c544c69c0c" containerID="490e6b4df9005c91e7ff649ebc631e3a1b17ee301e59abe796d7babf37767e9d" exitCode=0 Feb 02 21:32:55 crc kubenswrapper[4789]: I0202 21:32:55.699283 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xl65p" event={"ID":"cb2feeb5-33bb-403a-8e99-a4c544c69c0c","Type":"ContainerDied","Data":"490e6b4df9005c91e7ff649ebc631e3a1b17ee301e59abe796d7babf37767e9d"} Feb 02 21:32:55 crc kubenswrapper[4789]: I0202 21:32:55.699298 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xl65p" event={"ID":"cb2feeb5-33bb-403a-8e99-a4c544c69c0c","Type":"ContainerStarted","Data":"bed6a7dd77e4dfe506c26f480034e4ce379afc7098e062127422f0fc5f948553"} Feb 02 21:32:56 crc kubenswrapper[4789]: I0202 21:32:56.710355 4789 generic.go:334] "Generic (PLEG): container finished" podID="645a95dc-67cd-4eb7-9273-70ff5fea3a01" containerID="aa2f9cc3e24f03b109a34b03da5871f84f76f306e282cf03e20b2b37625c6d9e" 
Feb 02 21:32:56 crc kubenswrapper[4789]: I0202 21:32:56.710355 4789 generic.go:334] "Generic (PLEG): container finished" podID="645a95dc-67cd-4eb7-9273-70ff5fea3a01" containerID="aa2f9cc3e24f03b109a34b03da5871f84f76f306e282cf03e20b2b37625c6d9e" exitCode=0
Feb 02 21:32:56 crc kubenswrapper[4789]: I0202 21:32:56.710465 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138c54z" event={"ID":"645a95dc-67cd-4eb7-9273-70ff5fea3a01","Type":"ContainerDied","Data":"aa2f9cc3e24f03b109a34b03da5871f84f76f306e282cf03e20b2b37625c6d9e"}
Feb 02 21:32:56 crc kubenswrapper[4789]: I0202 21:32:56.716690 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xl65p" event={"ID":"cb2feeb5-33bb-403a-8e99-a4c544c69c0c","Type":"ContainerStarted","Data":"3de2a8b97d927d476253c99e4a75643107b378363e08c4a5f368b5848cd00f41"}
Feb 02 21:32:57 crc kubenswrapper[4789]: I0202 21:32:57.725549 4789 generic.go:334] "Generic (PLEG): container finished" podID="cb2feeb5-33bb-403a-8e99-a4c544c69c0c" containerID="3de2a8b97d927d476253c99e4a75643107b378363e08c4a5f368b5848cd00f41" exitCode=0
Feb 02 21:32:57 crc kubenswrapper[4789]: I0202 21:32:57.725639 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xl65p" event={"ID":"cb2feeb5-33bb-403a-8e99-a4c544c69c0c","Type":"ContainerDied","Data":"3de2a8b97d927d476253c99e4a75643107b378363e08c4a5f368b5848cd00f41"}
Feb 02 21:32:58 crc kubenswrapper[4789]: I0202 21:32:58.032401 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138c54z"
Feb 02 21:32:58 crc kubenswrapper[4789]: I0202 21:32:58.205040 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/645a95dc-67cd-4eb7-9273-70ff5fea3a01-util\") pod \"645a95dc-67cd-4eb7-9273-70ff5fea3a01\" (UID: \"645a95dc-67cd-4eb7-9273-70ff5fea3a01\") "
Feb 02 21:32:58 crc kubenswrapper[4789]: I0202 21:32:58.205132 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/645a95dc-67cd-4eb7-9273-70ff5fea3a01-bundle\") pod \"645a95dc-67cd-4eb7-9273-70ff5fea3a01\" (UID: \"645a95dc-67cd-4eb7-9273-70ff5fea3a01\") "
Feb 02 21:32:58 crc kubenswrapper[4789]: I0202 21:32:58.205214 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmhg9\" (UniqueName: \"kubernetes.io/projected/645a95dc-67cd-4eb7-9273-70ff5fea3a01-kube-api-access-lmhg9\") pod \"645a95dc-67cd-4eb7-9273-70ff5fea3a01\" (UID: \"645a95dc-67cd-4eb7-9273-70ff5fea3a01\") "
Feb 02 21:32:58 crc kubenswrapper[4789]: I0202 21:32:58.206197 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/645a95dc-67cd-4eb7-9273-70ff5fea3a01-bundle" (OuterVolumeSpecName: "bundle") pod "645a95dc-67cd-4eb7-9273-70ff5fea3a01" (UID: "645a95dc-67cd-4eb7-9273-70ff5fea3a01"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:32:58 crc kubenswrapper[4789]: I0202 21:32:58.219522 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/645a95dc-67cd-4eb7-9273-70ff5fea3a01-util" (OuterVolumeSpecName: "util") pod "645a95dc-67cd-4eb7-9273-70ff5fea3a01" (UID: "645a95dc-67cd-4eb7-9273-70ff5fea3a01"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:32:58 crc kubenswrapper[4789]: I0202 21:32:58.307026 4789 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/645a95dc-67cd-4eb7-9273-70ff5fea3a01-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:32:58 crc kubenswrapper[4789]: I0202 21:32:58.307156 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmhg9\" (UniqueName: \"kubernetes.io/projected/645a95dc-67cd-4eb7-9273-70ff5fea3a01-kube-api-access-lmhg9\") on node \"crc\" DevicePath \"\"" Feb 02 21:32:58 crc kubenswrapper[4789]: I0202 21:32:58.307198 4789 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/645a95dc-67cd-4eb7-9273-70ff5fea3a01-util\") on node \"crc\" DevicePath \"\"" Feb 02 21:32:58 crc kubenswrapper[4789]: I0202 21:32:58.734638 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138c54z" Feb 02 21:32:58 crc kubenswrapper[4789]: I0202 21:32:58.734630 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138c54z" event={"ID":"645a95dc-67cd-4eb7-9273-70ff5fea3a01","Type":"ContainerDied","Data":"473b8c36c07ddac6b27c99f856390b18c97292034e8c696fe6cc852236a2771b"} Feb 02 21:32:58 crc kubenswrapper[4789]: I0202 21:32:58.735100 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="473b8c36c07ddac6b27c99f856390b18c97292034e8c696fe6cc852236a2771b" Feb 02 21:32:58 crc kubenswrapper[4789]: I0202 21:32:58.736973 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xl65p" event={"ID":"cb2feeb5-33bb-403a-8e99-a4c544c69c0c","Type":"ContainerStarted","Data":"fa4be96ee253446c2d3f2f8fdeffdc2381a10636278ac40831554612b9602cda"} Feb 02 21:32:58 crc kubenswrapper[4789]: I0202 21:32:58.755567 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xl65p" podStartSLOduration=2.121154263 podStartE2EDuration="4.755549451s" podCreationTimestamp="2026-02-02 21:32:54 +0000 UTC" firstStartedPulling="2026-02-02 21:32:55.700314379 +0000 UTC m=+795.995339398" lastFinishedPulling="2026-02-02 21:32:58.334709567 +0000 UTC m=+798.629734586" observedRunningTime="2026-02-02 21:32:58.754255785 +0000 UTC m=+799.049280814" watchObservedRunningTime="2026-02-02 21:32:58.755549451 +0000 UTC m=+799.050574470" Feb 02 21:33:02 crc kubenswrapper[4789]: I0202 21:33:02.484257 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-l5mpq"] Feb 02 21:33:02 crc kubenswrapper[4789]: E0202 21:33:02.484776 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="645a95dc-67cd-4eb7-9273-70ff5fea3a01" containerName="util" Feb 02 21:33:02 crc kubenswrapper[4789]: I0202 21:33:02.484790 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="645a95dc-67cd-4eb7-9273-70ff5fea3a01" 
containerName="util" Feb 02 21:33:02 crc kubenswrapper[4789]: E0202 21:33:02.484802 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="645a95dc-67cd-4eb7-9273-70ff5fea3a01" containerName="extract" Feb 02 21:33:02 crc kubenswrapper[4789]: I0202 21:33:02.484808 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="645a95dc-67cd-4eb7-9273-70ff5fea3a01" containerName="extract" Feb 02 21:33:02 crc kubenswrapper[4789]: E0202 21:33:02.484824 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="645a95dc-67cd-4eb7-9273-70ff5fea3a01" containerName="pull" Feb 02 21:33:02 crc kubenswrapper[4789]: I0202 21:33:02.484831 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="645a95dc-67cd-4eb7-9273-70ff5fea3a01" containerName="pull" Feb 02 21:33:02 crc kubenswrapper[4789]: I0202 21:33:02.484918 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="645a95dc-67cd-4eb7-9273-70ff5fea3a01" containerName="extract" Feb 02 21:33:02 crc kubenswrapper[4789]: I0202 21:33:02.485243 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-l5mpq" Feb 02 21:33:02 crc kubenswrapper[4789]: I0202 21:33:02.487035 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-7nnwp" Feb 02 21:33:02 crc kubenswrapper[4789]: I0202 21:33:02.487517 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 02 21:33:02 crc kubenswrapper[4789]: I0202 21:33:02.487860 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 02 21:33:02 crc kubenswrapper[4789]: I0202 21:33:02.498319 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-l5mpq"] Feb 02 21:33:02 crc kubenswrapper[4789]: I0202 21:33:02.568949 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rktsk\" (UniqueName: \"kubernetes.io/projected/a45d19af-9616-4f0b-99d2-9c250bb43694-kube-api-access-rktsk\") pod \"nmstate-operator-646758c888-l5mpq\" (UID: \"a45d19af-9616-4f0b-99d2-9c250bb43694\") " pod="openshift-nmstate/nmstate-operator-646758c888-l5mpq" Feb 02 21:33:02 crc kubenswrapper[4789]: I0202 21:33:02.669670 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rktsk\" (UniqueName: \"kubernetes.io/projected/a45d19af-9616-4f0b-99d2-9c250bb43694-kube-api-access-rktsk\") pod \"nmstate-operator-646758c888-l5mpq\" (UID: \"a45d19af-9616-4f0b-99d2-9c250bb43694\") " pod="openshift-nmstate/nmstate-operator-646758c888-l5mpq" Feb 02 21:33:02 crc kubenswrapper[4789]: I0202 21:33:02.691279 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rktsk\" (UniqueName: \"kubernetes.io/projected/a45d19af-9616-4f0b-99d2-9c250bb43694-kube-api-access-rktsk\") pod \"nmstate-operator-646758c888-l5mpq\" (UID: \"a45d19af-9616-4f0b-99d2-9c250bb43694\") " pod="openshift-nmstate/nmstate-operator-646758c888-l5mpq" Feb 02 21:33:02 crc kubenswrapper[4789]: I0202 21:33:02.866915 4789 util.go:30] "No sandbox for pod can be found. 
Feb 02 21:33:02 crc kubenswrapper[4789]: I0202 21:33:02.866915 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-l5mpq"
Feb 02 21:33:03 crc kubenswrapper[4789]: I0202 21:33:03.128971 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-l5mpq"]
Feb 02 21:33:03 crc kubenswrapper[4789]: I0202 21:33:03.765948 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-l5mpq" event={"ID":"a45d19af-9616-4f0b-99d2-9c250bb43694","Type":"ContainerStarted","Data":"87fe17febe8ff8721196044d915ddaba57a56ba0ac834173b1f6db4a405cf1a3"}
Feb 02 21:33:04 crc kubenswrapper[4789]: I0202 21:33:04.861775 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xl65p"
Feb 02 21:33:04 crc kubenswrapper[4789]: I0202 21:33:04.861848 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xl65p"
Feb 02 21:33:05 crc kubenswrapper[4789]: I0202 21:33:05.781951 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-l5mpq" event={"ID":"a45d19af-9616-4f0b-99d2-9c250bb43694","Type":"ContainerStarted","Data":"679db01c12c9a91e3708ff56ec2810850f7e7b54e3271f8fdcf25bea8bdb828e"}
Feb 02 21:33:05 crc kubenswrapper[4789]: I0202 21:33:05.813951 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-l5mpq" podStartSLOduration=1.7182737609999998 podStartE2EDuration="3.813931929s" podCreationTimestamp="2026-02-02 21:33:02 +0000 UTC" firstStartedPulling="2026-02-02 21:33:03.145172865 +0000 UTC m=+803.440197884" lastFinishedPulling="2026-02-02 21:33:05.240831003 +0000 UTC m=+805.535856052" observedRunningTime="2026-02-02 21:33:05.813297661 +0000 UTC m=+806.108322680" watchObservedRunningTime="2026-02-02 21:33:05.813931929 +0000 UTC m=+806.108956958"
Feb 02 21:33:05 crc kubenswrapper[4789]: I0202 21:33:05.898955 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xl65p" podUID="cb2feeb5-33bb-403a-8e99-a4c544c69c0c" containerName="registry-server" probeResult="failure" output=<
Feb 02 21:33:05 crc kubenswrapper[4789]: 	timeout: failed to connect service ":50051" within 1s
Feb 02 21:33:05 crc kubenswrapper[4789]: >
Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.327236 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-mwlcp"]
Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.328822 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-mwlcp"
Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.331173 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-hbxzd"
Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.341356 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-4t8ms"]
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-4t8ms" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.343746 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.350791 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-56sxm"] Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.351424 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-56sxm" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.359755 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-mwlcp"] Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.362983 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-4t8ms"] Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.514617 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/8ecd0e71-adc8-4435-98cb-58de4b376820-ovs-socket\") pod \"nmstate-handler-56sxm\" (UID: \"8ecd0e71-adc8-4435-98cb-58de4b376820\") " pod="openshift-nmstate/nmstate-handler-56sxm" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.514661 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn74d\" (UniqueName: \"kubernetes.io/projected/8ecd0e71-adc8-4435-98cb-58de4b376820-kube-api-access-xn74d\") pod \"nmstate-handler-56sxm\" (UID: \"8ecd0e71-adc8-4435-98cb-58de4b376820\") " pod="openshift-nmstate/nmstate-handler-56sxm" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.514769 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/238b79fd-f8fd-4667-b350-6369490157c5-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-4t8ms\" (UID: \"238b79fd-f8fd-4667-b350-6369490157c5\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-4t8ms" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.514825 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/8ecd0e71-adc8-4435-98cb-58de4b376820-nmstate-lock\") pod \"nmstate-handler-56sxm\" (UID: \"8ecd0e71-adc8-4435-98cb-58de4b376820\") " pod="openshift-nmstate/nmstate-handler-56sxm" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.514851 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j4x5\" (UniqueName: \"kubernetes.io/projected/238b79fd-f8fd-4667-b350-6369490157c5-kube-api-access-6j4x5\") pod \"nmstate-webhook-8474b5b9d8-4t8ms\" (UID: \"238b79fd-f8fd-4667-b350-6369490157c5\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-4t8ms" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.514888 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j774b\" (UniqueName: \"kubernetes.io/projected/31964bc6-453e-4fc8-a06c-cfa7336e0b0f-kube-api-access-j774b\") pod \"nmstate-metrics-54757c584b-mwlcp\" (UID: \"31964bc6-453e-4fc8-a06c-cfa7336e0b0f\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-mwlcp" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.514946 4789 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/8ecd0e71-adc8-4435-98cb-58de4b376820-dbus-socket\") pod \"nmstate-handler-56sxm\" (UID: \"8ecd0e71-adc8-4435-98cb-58de4b376820\") " pod="openshift-nmstate/nmstate-handler-56sxm" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.519864 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-tgbb7"] Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.520455 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-tgbb7" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.521958 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.521960 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-rhdpg" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.522103 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.558494 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-tgbb7"] Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.616007 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/8ecd0e71-adc8-4435-98cb-58de4b376820-ovs-socket\") pod \"nmstate-handler-56sxm\" (UID: \"8ecd0e71-adc8-4435-98cb-58de4b376820\") " pod="openshift-nmstate/nmstate-handler-56sxm" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.616062 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn74d\" (UniqueName: \"kubernetes.io/projected/8ecd0e71-adc8-4435-98cb-58de4b376820-kube-api-access-xn74d\") pod \"nmstate-handler-56sxm\" (UID: \"8ecd0e71-adc8-4435-98cb-58de4b376820\") " pod="openshift-nmstate/nmstate-handler-56sxm" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.616095 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/8ecd0e71-adc8-4435-98cb-58de4b376820-ovs-socket\") pod \"nmstate-handler-56sxm\" (UID: \"8ecd0e71-adc8-4435-98cb-58de4b376820\") " pod="openshift-nmstate/nmstate-handler-56sxm" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.616230 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/238b79fd-f8fd-4667-b350-6369490157c5-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-4t8ms\" (UID: \"238b79fd-f8fd-4667-b350-6369490157c5\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-4t8ms" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.616269 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/8ecd0e71-adc8-4435-98cb-58de4b376820-nmstate-lock\") pod \"nmstate-handler-56sxm\" (UID: \"8ecd0e71-adc8-4435-98cb-58de4b376820\") " pod="openshift-nmstate/nmstate-handler-56sxm" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.616304 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j4x5\" (UniqueName: 
\"kubernetes.io/projected/238b79fd-f8fd-4667-b350-6369490157c5-kube-api-access-6j4x5\") pod \"nmstate-webhook-8474b5b9d8-4t8ms\" (UID: \"238b79fd-f8fd-4667-b350-6369490157c5\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-4t8ms" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.616346 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j774b\" (UniqueName: \"kubernetes.io/projected/31964bc6-453e-4fc8-a06c-cfa7336e0b0f-kube-api-access-j774b\") pod \"nmstate-metrics-54757c584b-mwlcp\" (UID: \"31964bc6-453e-4fc8-a06c-cfa7336e0b0f\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-mwlcp" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.616369 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/8ecd0e71-adc8-4435-98cb-58de4b376820-dbus-socket\") pod \"nmstate-handler-56sxm\" (UID: \"8ecd0e71-adc8-4435-98cb-58de4b376820\") " pod="openshift-nmstate/nmstate-handler-56sxm" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.616384 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/8ecd0e71-adc8-4435-98cb-58de4b376820-nmstate-lock\") pod \"nmstate-handler-56sxm\" (UID: \"8ecd0e71-adc8-4435-98cb-58de4b376820\") " pod="openshift-nmstate/nmstate-handler-56sxm" Feb 02 21:33:12 crc kubenswrapper[4789]: E0202 21:33:12.616431 4789 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 02 21:33:12 crc kubenswrapper[4789]: E0202 21:33:12.616475 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/238b79fd-f8fd-4667-b350-6369490157c5-tls-key-pair podName:238b79fd-f8fd-4667-b350-6369490157c5 nodeName:}" failed. No retries permitted until 2026-02-02 21:33:13.116460651 +0000 UTC m=+813.411485670 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/238b79fd-f8fd-4667-b350-6369490157c5-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-4t8ms" (UID: "238b79fd-f8fd-4667-b350-6369490157c5") : secret "openshift-nmstate-webhook" not found Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.616788 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/8ecd0e71-adc8-4435-98cb-58de4b376820-dbus-socket\") pod \"nmstate-handler-56sxm\" (UID: \"8ecd0e71-adc8-4435-98cb-58de4b376820\") " pod="openshift-nmstate/nmstate-handler-56sxm" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.636311 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn74d\" (UniqueName: \"kubernetes.io/projected/8ecd0e71-adc8-4435-98cb-58de4b376820-kube-api-access-xn74d\") pod \"nmstate-handler-56sxm\" (UID: \"8ecd0e71-adc8-4435-98cb-58de4b376820\") " pod="openshift-nmstate/nmstate-handler-56sxm" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.637342 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j4x5\" (UniqueName: \"kubernetes.io/projected/238b79fd-f8fd-4667-b350-6369490157c5-kube-api-access-6j4x5\") pod \"nmstate-webhook-8474b5b9d8-4t8ms\" (UID: \"238b79fd-f8fd-4667-b350-6369490157c5\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-4t8ms" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.642160 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j774b\" (UniqueName: \"kubernetes.io/projected/31964bc6-453e-4fc8-a06c-cfa7336e0b0f-kube-api-access-j774b\") pod \"nmstate-metrics-54757c584b-mwlcp\" (UID: \"31964bc6-453e-4fc8-a06c-cfa7336e0b0f\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-mwlcp" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.651571 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-mwlcp" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.663480 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-8c49b7cb8-vkdlg"] Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.664245 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-8c49b7cb8-vkdlg" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.692445 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-56sxm" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.718259 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5bd8e822-58d8-41a4-85fa-229fe3662cf7-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-tgbb7\" (UID: \"5bd8e822-58d8-41a4-85fa-229fe3662cf7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-tgbb7" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.718320 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5bd8e822-58d8-41a4-85fa-229fe3662cf7-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-tgbb7\" (UID: \"5bd8e822-58d8-41a4-85fa-229fe3662cf7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-tgbb7" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.718370 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmcxg\" (UniqueName: \"kubernetes.io/projected/5bd8e822-58d8-41a4-85fa-229fe3662cf7-kube-api-access-cmcxg\") pod \"nmstate-console-plugin-7754f76f8b-tgbb7\" (UID: \"5bd8e822-58d8-41a4-85fa-229fe3662cf7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-tgbb7" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.718457 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-8c49b7cb8-vkdlg"] Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.819251 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2a10f1bf-f132-40b0-b2fb-752fd044955f-service-ca\") pod \"console-8c49b7cb8-vkdlg\" (UID: \"2a10f1bf-f132-40b0-b2fb-752fd044955f\") " pod="openshift-console/console-8c49b7cb8-vkdlg" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.819659 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2a10f1bf-f132-40b0-b2fb-752fd044955f-console-config\") pod \"console-8c49b7cb8-vkdlg\" (UID: \"2a10f1bf-f132-40b0-b2fb-752fd044955f\") " pod="openshift-console/console-8c49b7cb8-vkdlg" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.819686 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5bd8e822-58d8-41a4-85fa-229fe3662cf7-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-tgbb7\" (UID: \"5bd8e822-58d8-41a4-85fa-229fe3662cf7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-tgbb7" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.819808 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2a10f1bf-f132-40b0-b2fb-752fd044955f-console-serving-cert\") pod \"console-8c49b7cb8-vkdlg\" (UID: \"2a10f1bf-f132-40b0-b2fb-752fd044955f\") " pod="openshift-console/console-8c49b7cb8-vkdlg" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.819829 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a10f1bf-f132-40b0-b2fb-752fd044955f-trusted-ca-bundle\") pod \"console-8c49b7cb8-vkdlg\" (UID: 
\"2a10f1bf-f132-40b0-b2fb-752fd044955f\") " pod="openshift-console/console-8c49b7cb8-vkdlg" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.819852 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5bd8e822-58d8-41a4-85fa-229fe3662cf7-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-tgbb7\" (UID: \"5bd8e822-58d8-41a4-85fa-229fe3662cf7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-tgbb7" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.820076 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmcxg\" (UniqueName: \"kubernetes.io/projected/5bd8e822-58d8-41a4-85fa-229fe3662cf7-kube-api-access-cmcxg\") pod \"nmstate-console-plugin-7754f76f8b-tgbb7\" (UID: \"5bd8e822-58d8-41a4-85fa-229fe3662cf7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-tgbb7" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.820105 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2a10f1bf-f132-40b0-b2fb-752fd044955f-console-oauth-config\") pod \"console-8c49b7cb8-vkdlg\" (UID: \"2a10f1bf-f132-40b0-b2fb-752fd044955f\") " pod="openshift-console/console-8c49b7cb8-vkdlg" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.820243 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhx75\" (UniqueName: \"kubernetes.io/projected/2a10f1bf-f132-40b0-b2fb-752fd044955f-kube-api-access-zhx75\") pod \"console-8c49b7cb8-vkdlg\" (UID: \"2a10f1bf-f132-40b0-b2fb-752fd044955f\") " pod="openshift-console/console-8c49b7cb8-vkdlg" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.820279 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2a10f1bf-f132-40b0-b2fb-752fd044955f-oauth-serving-cert\") pod \"console-8c49b7cb8-vkdlg\" (UID: \"2a10f1bf-f132-40b0-b2fb-752fd044955f\") " pod="openshift-console/console-8c49b7cb8-vkdlg" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.821432 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5bd8e822-58d8-41a4-85fa-229fe3662cf7-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-tgbb7\" (UID: \"5bd8e822-58d8-41a4-85fa-229fe3662cf7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-tgbb7" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.830140 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5bd8e822-58d8-41a4-85fa-229fe3662cf7-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-tgbb7\" (UID: \"5bd8e822-58d8-41a4-85fa-229fe3662cf7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-tgbb7" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.840268 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-56sxm" event={"ID":"8ecd0e71-adc8-4435-98cb-58de4b376820","Type":"ContainerStarted","Data":"eaad9b8f79e5f9ea8aeb330e97d049c39ef1c610e0f7e5802c5e77a28224d766"} Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.843334 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmcxg\" (UniqueName: 
\"kubernetes.io/projected/5bd8e822-58d8-41a4-85fa-229fe3662cf7-kube-api-access-cmcxg\") pod \"nmstate-console-plugin-7754f76f8b-tgbb7\" (UID: \"5bd8e822-58d8-41a4-85fa-229fe3662cf7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-tgbb7" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.920900 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2a10f1bf-f132-40b0-b2fb-752fd044955f-oauth-serving-cert\") pod \"console-8c49b7cb8-vkdlg\" (UID: \"2a10f1bf-f132-40b0-b2fb-752fd044955f\") " pod="openshift-console/console-8c49b7cb8-vkdlg" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.920978 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2a10f1bf-f132-40b0-b2fb-752fd044955f-service-ca\") pod \"console-8c49b7cb8-vkdlg\" (UID: \"2a10f1bf-f132-40b0-b2fb-752fd044955f\") " pod="openshift-console/console-8c49b7cb8-vkdlg" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.921006 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2a10f1bf-f132-40b0-b2fb-752fd044955f-console-serving-cert\") pod \"console-8c49b7cb8-vkdlg\" (UID: \"2a10f1bf-f132-40b0-b2fb-752fd044955f\") " pod="openshift-console/console-8c49b7cb8-vkdlg" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.921020 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2a10f1bf-f132-40b0-b2fb-752fd044955f-console-config\") pod \"console-8c49b7cb8-vkdlg\" (UID: \"2a10f1bf-f132-40b0-b2fb-752fd044955f\") " pod="openshift-console/console-8c49b7cb8-vkdlg" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.921040 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a10f1bf-f132-40b0-b2fb-752fd044955f-trusted-ca-bundle\") pod \"console-8c49b7cb8-vkdlg\" (UID: \"2a10f1bf-f132-40b0-b2fb-752fd044955f\") " pod="openshift-console/console-8c49b7cb8-vkdlg" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.921100 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2a10f1bf-f132-40b0-b2fb-752fd044955f-console-oauth-config\") pod \"console-8c49b7cb8-vkdlg\" (UID: \"2a10f1bf-f132-40b0-b2fb-752fd044955f\") " pod="openshift-console/console-8c49b7cb8-vkdlg" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.921163 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhx75\" (UniqueName: \"kubernetes.io/projected/2a10f1bf-f132-40b0-b2fb-752fd044955f-kube-api-access-zhx75\") pod \"console-8c49b7cb8-vkdlg\" (UID: \"2a10f1bf-f132-40b0-b2fb-752fd044955f\") " pod="openshift-console/console-8c49b7cb8-vkdlg" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.922168 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2a10f1bf-f132-40b0-b2fb-752fd044955f-service-ca\") pod \"console-8c49b7cb8-vkdlg\" (UID: \"2a10f1bf-f132-40b0-b2fb-752fd044955f\") " pod="openshift-console/console-8c49b7cb8-vkdlg" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.922855 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/2a10f1bf-f132-40b0-b2fb-752fd044955f-console-config\") pod \"console-8c49b7cb8-vkdlg\" (UID: \"2a10f1bf-f132-40b0-b2fb-752fd044955f\") " pod="openshift-console/console-8c49b7cb8-vkdlg" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.922905 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a10f1bf-f132-40b0-b2fb-752fd044955f-trusted-ca-bundle\") pod \"console-8c49b7cb8-vkdlg\" (UID: \"2a10f1bf-f132-40b0-b2fb-752fd044955f\") " pod="openshift-console/console-8c49b7cb8-vkdlg" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.923852 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2a10f1bf-f132-40b0-b2fb-752fd044955f-oauth-serving-cert\") pod \"console-8c49b7cb8-vkdlg\" (UID: \"2a10f1bf-f132-40b0-b2fb-752fd044955f\") " pod="openshift-console/console-8c49b7cb8-vkdlg" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.924817 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2a10f1bf-f132-40b0-b2fb-752fd044955f-console-serving-cert\") pod \"console-8c49b7cb8-vkdlg\" (UID: \"2a10f1bf-f132-40b0-b2fb-752fd044955f\") " pod="openshift-console/console-8c49b7cb8-vkdlg" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.925145 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2a10f1bf-f132-40b0-b2fb-752fd044955f-console-oauth-config\") pod \"console-8c49b7cb8-vkdlg\" (UID: \"2a10f1bf-f132-40b0-b2fb-752fd044955f\") " pod="openshift-console/console-8c49b7cb8-vkdlg" Feb 02 21:33:12 crc kubenswrapper[4789]: I0202 21:33:12.941635 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhx75\" (UniqueName: \"kubernetes.io/projected/2a10f1bf-f132-40b0-b2fb-752fd044955f-kube-api-access-zhx75\") pod \"console-8c49b7cb8-vkdlg\" (UID: \"2a10f1bf-f132-40b0-b2fb-752fd044955f\") " pod="openshift-console/console-8c49b7cb8-vkdlg" Feb 02 21:33:13 crc kubenswrapper[4789]: I0202 21:33:13.002230 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-8c49b7cb8-vkdlg" Feb 02 21:33:13 crc kubenswrapper[4789]: I0202 21:33:13.082972 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-mwlcp"] Feb 02 21:33:13 crc kubenswrapper[4789]: I0202 21:33:13.124221 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/238b79fd-f8fd-4667-b350-6369490157c5-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-4t8ms\" (UID: \"238b79fd-f8fd-4667-b350-6369490157c5\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-4t8ms" Feb 02 21:33:13 crc kubenswrapper[4789]: I0202 21:33:13.129146 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/238b79fd-f8fd-4667-b350-6369490157c5-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-4t8ms\" (UID: \"238b79fd-f8fd-4667-b350-6369490157c5\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-4t8ms" Feb 02 21:33:13 crc kubenswrapper[4789]: I0202 21:33:13.132916 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-tgbb7" Feb 02 21:33:13 crc kubenswrapper[4789]: I0202 21:33:13.246387 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-8c49b7cb8-vkdlg"] Feb 02 21:33:13 crc kubenswrapper[4789]: W0202 21:33:13.249933 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a10f1bf_f132_40b0_b2fb_752fd044955f.slice/crio-3e6246adfe020fbc692b348926d8b916c3e5c1f14e07a8dd235a9f88c5a01257 WatchSource:0}: Error finding container 3e6246adfe020fbc692b348926d8b916c3e5c1f14e07a8dd235a9f88c5a01257: Status 404 returned error can't find the container with id 3e6246adfe020fbc692b348926d8b916c3e5c1f14e07a8dd235a9f88c5a01257 Feb 02 21:33:13 crc kubenswrapper[4789]: I0202 21:33:13.284351 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-4t8ms" Feb 02 21:33:13 crc kubenswrapper[4789]: I0202 21:33:13.353524 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-tgbb7"] Feb 02 21:33:13 crc kubenswrapper[4789]: W0202 21:33:13.363052 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bd8e822_58d8_41a4_85fa_229fe3662cf7.slice/crio-729947605b2e13fee2b54abee715781c7a43180f1667a123d2a069113a1e3ecf WatchSource:0}: Error finding container 729947605b2e13fee2b54abee715781c7a43180f1667a123d2a069113a1e3ecf: Status 404 returned error can't find the container with id 729947605b2e13fee2b54abee715781c7a43180f1667a123d2a069113a1e3ecf Feb 02 21:33:13 crc kubenswrapper[4789]: I0202 21:33:13.750791 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-4t8ms"] Feb 02 21:33:13 crc kubenswrapper[4789]: W0202 21:33:13.766285 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod238b79fd_f8fd_4667_b350_6369490157c5.slice/crio-448fe921ca05590666103e1d776ea2e5d0c252330950741054572ed688a331f4 WatchSource:0}: Error finding container 448fe921ca05590666103e1d776ea2e5d0c252330950741054572ed688a331f4: Status 404 returned error can't find the container with id 448fe921ca05590666103e1d776ea2e5d0c252330950741054572ed688a331f4 Feb 02 21:33:13 crc kubenswrapper[4789]: I0202 21:33:13.849845 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-mwlcp" event={"ID":"31964bc6-453e-4fc8-a06c-cfa7336e0b0f","Type":"ContainerStarted","Data":"2e0d6225a328c9cb2418929d2692cde69534ba3059533b762bdf98a8671c186f"} Feb 02 21:33:13 crc kubenswrapper[4789]: I0202 21:33:13.851524 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-4t8ms" event={"ID":"238b79fd-f8fd-4667-b350-6369490157c5","Type":"ContainerStarted","Data":"448fe921ca05590666103e1d776ea2e5d0c252330950741054572ed688a331f4"} Feb 02 21:33:13 crc kubenswrapper[4789]: I0202 21:33:13.853174 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-tgbb7" event={"ID":"5bd8e822-58d8-41a4-85fa-229fe3662cf7","Type":"ContainerStarted","Data":"729947605b2e13fee2b54abee715781c7a43180f1667a123d2a069113a1e3ecf"} Feb 02 21:33:13 crc kubenswrapper[4789]: I0202 21:33:13.854465 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-8c49b7cb8-vkdlg" event={"ID":"2a10f1bf-f132-40b0-b2fb-752fd044955f","Type":"ContainerStarted","Data":"52ae189c0e04f4b2169c3f33d6d4a974a97a1d9bf3d5390f88eafedc591bc5b1"} Feb 02 21:33:13 crc kubenswrapper[4789]: I0202 21:33:13.854492 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-8c49b7cb8-vkdlg" event={"ID":"2a10f1bf-f132-40b0-b2fb-752fd044955f","Type":"ContainerStarted","Data":"3e6246adfe020fbc692b348926d8b916c3e5c1f14e07a8dd235a9f88c5a01257"} Feb 02 21:33:13 crc kubenswrapper[4789]: I0202 21:33:13.872030 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-8c49b7cb8-vkdlg" podStartSLOduration=1.871998637 podStartE2EDuration="1.871998637s" podCreationTimestamp="2026-02-02 21:33:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:33:13.869857266 +0000 UTC m=+814.164882335" watchObservedRunningTime="2026-02-02 21:33:13.871998637 +0000 UTC m=+814.167023666" Feb 02 21:33:14 crc kubenswrapper[4789]: I0202 21:33:14.912416 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xl65p" Feb 02 21:33:14 crc kubenswrapper[4789]: I0202 21:33:14.954260 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xl65p" Feb 02 21:33:15 crc kubenswrapper[4789]: I0202 21:33:15.138968 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xl65p"] Feb 02 21:33:15 crc kubenswrapper[4789]: I0202 21:33:15.867392 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-mwlcp" event={"ID":"31964bc6-453e-4fc8-a06c-cfa7336e0b0f","Type":"ContainerStarted","Data":"4397dc477a521195d7dda27804b831bd75bbea0cc41cc1f9ad24afdad9edcccc"} Feb 02 21:33:15 crc kubenswrapper[4789]: I0202 21:33:15.869488 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-4t8ms" event={"ID":"238b79fd-f8fd-4667-b350-6369490157c5","Type":"ContainerStarted","Data":"1fbb55a4648e3cc37639092601d70a7d55c9ea427df570129557c9be918f23e6"} Feb 02 21:33:15 crc kubenswrapper[4789]: I0202 21:33:15.869565 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-4t8ms" Feb 02 21:33:15 crc kubenswrapper[4789]: I0202 21:33:15.872681 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-tgbb7" event={"ID":"5bd8e822-58d8-41a4-85fa-229fe3662cf7","Type":"ContainerStarted","Data":"42a5c506dae84f85792e57c9b12af272ff55c16d4f1c3f013175945d373d8ed4"} Feb 02 21:33:15 crc kubenswrapper[4789]: I0202 21:33:15.886615 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-4t8ms" podStartSLOduration=2.052045789 podStartE2EDuration="3.886595565s" podCreationTimestamp="2026-02-02 21:33:12 +0000 UTC" firstStartedPulling="2026-02-02 21:33:13.780988987 +0000 UTC m=+814.076014036" lastFinishedPulling="2026-02-02 21:33:15.615538793 +0000 UTC m=+815.910563812" observedRunningTime="2026-02-02 21:33:15.886460992 +0000 UTC m=+816.181486021" watchObservedRunningTime="2026-02-02 21:33:15.886595565 +0000 UTC m=+816.181620594" Feb 02 21:33:15 crc kubenswrapper[4789]: I0202 21:33:15.902308 4789 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-tgbb7" podStartSLOduration=1.655153849 podStartE2EDuration="3.902288747s" podCreationTimestamp="2026-02-02 21:33:12 +0000 UTC" firstStartedPulling="2026-02-02 21:33:13.366185303 +0000 UTC m=+813.661210322" lastFinishedPulling="2026-02-02 21:33:15.613320201 +0000 UTC m=+815.908345220" observedRunningTime="2026-02-02 21:33:15.897979926 +0000 UTC m=+816.193004955" watchObservedRunningTime="2026-02-02 21:33:15.902288747 +0000 UTC m=+816.197313776" Feb 02 21:33:16 crc kubenswrapper[4789]: I0202 21:33:16.881391 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-56sxm" event={"ID":"8ecd0e71-adc8-4435-98cb-58de4b376820","Type":"ContainerStarted","Data":"2a84ec1b82eca2bd76fec28076977f19b2ba880daa1666a119df0934dedd6074"} Feb 02 21:33:16 crc kubenswrapper[4789]: I0202 21:33:16.881777 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xl65p" podUID="cb2feeb5-33bb-403a-8e99-a4c544c69c0c" containerName="registry-server" containerID="cri-o://fa4be96ee253446c2d3f2f8fdeffdc2381a10636278ac40831554612b9602cda" gracePeriod=2 Feb 02 21:33:16 crc kubenswrapper[4789]: I0202 21:33:16.918748 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-56sxm" podStartSLOduration=2.052951356 podStartE2EDuration="4.918721989s" podCreationTimestamp="2026-02-02 21:33:12 +0000 UTC" firstStartedPulling="2026-02-02 21:33:12.74976957 +0000 UTC m=+813.044794589" lastFinishedPulling="2026-02-02 21:33:15.615540203 +0000 UTC m=+815.910565222" observedRunningTime="2026-02-02 21:33:16.910476727 +0000 UTC m=+817.205501836" watchObservedRunningTime="2026-02-02 21:33:16.918721989 +0000 UTC m=+817.213747088" Feb 02 21:33:17 crc kubenswrapper[4789]: I0202 21:33:17.303245 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xl65p" Feb 02 21:33:17 crc kubenswrapper[4789]: I0202 21:33:17.389748 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb2feeb5-33bb-403a-8e99-a4c544c69c0c-utilities\") pod \"cb2feeb5-33bb-403a-8e99-a4c544c69c0c\" (UID: \"cb2feeb5-33bb-403a-8e99-a4c544c69c0c\") " Feb 02 21:33:17 crc kubenswrapper[4789]: I0202 21:33:17.390608 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pc2qs\" (UniqueName: \"kubernetes.io/projected/cb2feeb5-33bb-403a-8e99-a4c544c69c0c-kube-api-access-pc2qs\") pod \"cb2feeb5-33bb-403a-8e99-a4c544c69c0c\" (UID: \"cb2feeb5-33bb-403a-8e99-a4c544c69c0c\") " Feb 02 21:33:17 crc kubenswrapper[4789]: I0202 21:33:17.390699 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb2feeb5-33bb-403a-8e99-a4c544c69c0c-catalog-content\") pod \"cb2feeb5-33bb-403a-8e99-a4c544c69c0c\" (UID: \"cb2feeb5-33bb-403a-8e99-a4c544c69c0c\") " Feb 02 21:33:17 crc kubenswrapper[4789]: I0202 21:33:17.390631 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb2feeb5-33bb-403a-8e99-a4c544c69c0c-utilities" (OuterVolumeSpecName: "utilities") pod "cb2feeb5-33bb-403a-8e99-a4c544c69c0c" (UID: "cb2feeb5-33bb-403a-8e99-a4c544c69c0c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:33:17 crc kubenswrapper[4789]: I0202 21:33:17.396433 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb2feeb5-33bb-403a-8e99-a4c544c69c0c-kube-api-access-pc2qs" (OuterVolumeSpecName: "kube-api-access-pc2qs") pod "cb2feeb5-33bb-403a-8e99-a4c544c69c0c" (UID: "cb2feeb5-33bb-403a-8e99-a4c544c69c0c"). InnerVolumeSpecName "kube-api-access-pc2qs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:33:17 crc kubenswrapper[4789]: I0202 21:33:17.491953 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb2feeb5-33bb-403a-8e99-a4c544c69c0c-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 21:33:17 crc kubenswrapper[4789]: I0202 21:33:17.492038 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pc2qs\" (UniqueName: \"kubernetes.io/projected/cb2feeb5-33bb-403a-8e99-a4c544c69c0c-kube-api-access-pc2qs\") on node \"crc\" DevicePath \"\"" Feb 02 21:33:17 crc kubenswrapper[4789]: I0202 21:33:17.498782 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb2feeb5-33bb-403a-8e99-a4c544c69c0c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb2feeb5-33bb-403a-8e99-a4c544c69c0c" (UID: "cb2feeb5-33bb-403a-8e99-a4c544c69c0c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:33:17 crc kubenswrapper[4789]: I0202 21:33:17.592481 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb2feeb5-33bb-403a-8e99-a4c544c69c0c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 21:33:17 crc kubenswrapper[4789]: I0202 21:33:17.693212 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-56sxm" Feb 02 21:33:17 crc kubenswrapper[4789]: I0202 21:33:17.891972 4789 generic.go:334] "Generic (PLEG): container finished" podID="cb2feeb5-33bb-403a-8e99-a4c544c69c0c" containerID="fa4be96ee253446c2d3f2f8fdeffdc2381a10636278ac40831554612b9602cda" exitCode=0 Feb 02 21:33:17 crc kubenswrapper[4789]: I0202 21:33:17.892663 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xl65p" Feb 02 21:33:17 crc kubenswrapper[4789]: I0202 21:33:17.893699 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xl65p" event={"ID":"cb2feeb5-33bb-403a-8e99-a4c544c69c0c","Type":"ContainerDied","Data":"fa4be96ee253446c2d3f2f8fdeffdc2381a10636278ac40831554612b9602cda"} Feb 02 21:33:17 crc kubenswrapper[4789]: I0202 21:33:17.894394 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xl65p" event={"ID":"cb2feeb5-33bb-403a-8e99-a4c544c69c0c","Type":"ContainerDied","Data":"bed6a7dd77e4dfe506c26f480034e4ce379afc7098e062127422f0fc5f948553"} Feb 02 21:33:17 crc kubenswrapper[4789]: I0202 21:33:17.894411 4789 scope.go:117] "RemoveContainer" containerID="fa4be96ee253446c2d3f2f8fdeffdc2381a10636278ac40831554612b9602cda" Feb 02 21:33:17 crc kubenswrapper[4789]: I0202 21:33:17.925377 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xl65p"] Feb 02 21:33:17 crc kubenswrapper[4789]: I0202 21:33:17.929232 4789 scope.go:117] "RemoveContainer" containerID="3de2a8b97d927d476253c99e4a75643107b378363e08c4a5f368b5848cd00f41" Feb 02 21:33:17 crc kubenswrapper[4789]: I0202 21:33:17.929507 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xl65p"] Feb 02 21:33:17 crc kubenswrapper[4789]: I0202 21:33:17.977153 4789 scope.go:117] "RemoveContainer" containerID="490e6b4df9005c91e7ff649ebc631e3a1b17ee301e59abe796d7babf37767e9d" Feb 02 21:33:17 crc kubenswrapper[4789]: I0202 21:33:17.996494 4789 scope.go:117] "RemoveContainer" containerID="fa4be96ee253446c2d3f2f8fdeffdc2381a10636278ac40831554612b9602cda" Feb 02 21:33:17 crc kubenswrapper[4789]: E0202 21:33:17.996845 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa4be96ee253446c2d3f2f8fdeffdc2381a10636278ac40831554612b9602cda\": container with ID starting with fa4be96ee253446c2d3f2f8fdeffdc2381a10636278ac40831554612b9602cda not found: ID does not exist" containerID="fa4be96ee253446c2d3f2f8fdeffdc2381a10636278ac40831554612b9602cda" Feb 02 21:33:17 crc kubenswrapper[4789]: I0202 21:33:17.996894 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa4be96ee253446c2d3f2f8fdeffdc2381a10636278ac40831554612b9602cda"} err="failed to get container status \"fa4be96ee253446c2d3f2f8fdeffdc2381a10636278ac40831554612b9602cda\": rpc error: code = NotFound desc = could not find container \"fa4be96ee253446c2d3f2f8fdeffdc2381a10636278ac40831554612b9602cda\": container with ID starting with fa4be96ee253446c2d3f2f8fdeffdc2381a10636278ac40831554612b9602cda not found: ID does not exist" Feb 02 21:33:17 crc kubenswrapper[4789]: I0202 21:33:17.996924 4789 scope.go:117] "RemoveContainer" containerID="3de2a8b97d927d476253c99e4a75643107b378363e08c4a5f368b5848cd00f41" Feb 02 21:33:17 crc kubenswrapper[4789]: E0202 21:33:17.997218 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3de2a8b97d927d476253c99e4a75643107b378363e08c4a5f368b5848cd00f41\": container with ID starting with 3de2a8b97d927d476253c99e4a75643107b378363e08c4a5f368b5848cd00f41 not found: ID does not exist" containerID="3de2a8b97d927d476253c99e4a75643107b378363e08c4a5f368b5848cd00f41" Feb 02 21:33:17 crc kubenswrapper[4789]: I0202 21:33:17.997248 4789 
Feb 02 21:33:17 crc kubenswrapper[4789]: I0202 21:33:17.997248 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3de2a8b97d927d476253c99e4a75643107b378363e08c4a5f368b5848cd00f41"} err="failed to get container status \"3de2a8b97d927d476253c99e4a75643107b378363e08c4a5f368b5848cd00f41\": rpc error: code = NotFound desc = could not find container \"3de2a8b97d927d476253c99e4a75643107b378363e08c4a5f368b5848cd00f41\": container with ID starting with 3de2a8b97d927d476253c99e4a75643107b378363e08c4a5f368b5848cd00f41 not found: ID does not exist" Feb 02 21:33:17 crc kubenswrapper[4789]: I0202 21:33:17.997265 4789 scope.go:117] "RemoveContainer" containerID="490e6b4df9005c91e7ff649ebc631e3a1b17ee301e59abe796d7babf37767e9d" Feb 02 21:33:17 crc kubenswrapper[4789]: E0202 21:33:17.997521 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"490e6b4df9005c91e7ff649ebc631e3a1b17ee301e59abe796d7babf37767e9d\": container with ID starting with 490e6b4df9005c91e7ff649ebc631e3a1b17ee301e59abe796d7babf37767e9d not found: ID does not exist" containerID="490e6b4df9005c91e7ff649ebc631e3a1b17ee301e59abe796d7babf37767e9d" Feb 02 21:33:17 crc kubenswrapper[4789]: I0202 21:33:17.997549 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"490e6b4df9005c91e7ff649ebc631e3a1b17ee301e59abe796d7babf37767e9d"} err="failed to get container status \"490e6b4df9005c91e7ff649ebc631e3a1b17ee301e59abe796d7babf37767e9d\": rpc error: code = NotFound desc = could not find container \"490e6b4df9005c91e7ff649ebc631e3a1b17ee301e59abe796d7babf37767e9d\": container with ID starting with 490e6b4df9005c91e7ff649ebc631e3a1b17ee301e59abe796d7babf37767e9d not found: ID does not exist" Feb 02 21:33:18 crc kubenswrapper[4789]: I0202 21:33:18.429911 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb2feeb5-33bb-403a-8e99-a4c544c69c0c" path="/var/lib/kubelet/pods/cb2feeb5-33bb-403a-8e99-a4c544c69c0c/volumes" Feb 02 21:33:18 crc kubenswrapper[4789]: I0202 21:33:18.900850 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-mwlcp" event={"ID":"31964bc6-453e-4fc8-a06c-cfa7336e0b0f","Type":"ContainerStarted","Data":"bd353dceb3105c6e16ac3c65fbeb4c6b3051eed12c90a05b35f52154660ba3b8"} Feb 02 21:33:18 crc kubenswrapper[4789]: I0202 21:33:18.925012 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-mwlcp" podStartSLOduration=2.038215071 podStartE2EDuration="6.924990373s" podCreationTimestamp="2026-02-02 21:33:12 +0000 UTC" firstStartedPulling="2026-02-02 21:33:13.095391899 +0000 UTC m=+813.390416928" lastFinishedPulling="2026-02-02 21:33:17.982167211 +0000 UTC m=+818.277192230" observedRunningTime="2026-02-02 21:33:18.919958442 +0000 UTC m=+819.214983481" watchObservedRunningTime="2026-02-02 21:33:18.924990373 +0000 UTC m=+819.220015412" Feb 02 21:33:22 crc kubenswrapper[4789]: I0202 21:33:22.734465 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-56sxm" Feb 02 21:33:22 crc kubenswrapper[4789]: I0202 21:33:22.841921 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
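
The machine-config-daemon liveness failure above is a plain HTTP GET probe: the kubelet fetches http://127.0.0.1:8798/health, and any dial error, here "connection refused" while the daemon restarts, counts as a failed attempt; only repeated failures (failureThreshold) trigger a restart. A minimal equivalent check, with the one-second timeout assumed for illustration (only the URL comes from the log):

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // probe performs one HTTP liveness check: success is any answer below 400
    // within the timeout; a dial error such as "connection refused" is failure.
    func probe(url string, timeout time.Duration) error {
    	client := &http.Client{Timeout: timeout}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err // e.g. dial tcp 127.0.0.1:8798: connect: connection refused
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode >= 400 {
    		return fmt.Errorf("unhealthy status %d", resp.StatusCode)
    	}
    	return nil
    }

    func main() {
    	fmt.Println(probe("http://127.0.0.1:8798/health", time.Second))
    }
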
Feb 02 21:33:22 crc kubenswrapper[4789]: I0202 21:33:22.842003 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 21:33:23 crc kubenswrapper[4789]: I0202 21:33:23.002506 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-8c49b7cb8-vkdlg" Feb 02 21:33:23 crc kubenswrapper[4789]: I0202 21:33:23.003004 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-8c49b7cb8-vkdlg" Feb 02 21:33:23 crc kubenswrapper[4789]: I0202 21:33:23.010779 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-8c49b7cb8-vkdlg" Feb 02 21:33:23 crc kubenswrapper[4789]: I0202 21:33:23.960326 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-8c49b7cb8-vkdlg" Feb 02 21:33:24 crc kubenswrapper[4789]: I0202 21:33:24.069417 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-x568j"] Feb 02 21:33:33 crc kubenswrapper[4789]: I0202 21:33:33.292907 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-4t8ms" Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.131702 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-x568j" podUID="a7cb5e21-a1f6-4e35-bf0d-e709b16d6994" containerName="console" containerID="cri-o://10f1ae5569859cdc38f9f0c0e54b8d505db332ebcf22f76d45adda1a3b1a00a7" gracePeriod=15 Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.307623 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdhf2"] Feb 02 21:33:49 crc kubenswrapper[4789]: E0202 21:33:49.307850 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb2feeb5-33bb-403a-8e99-a4c544c69c0c" containerName="extract-utilities" Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.307866 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb2feeb5-33bb-403a-8e99-a4c544c69c0c" containerName="extract-utilities" Feb 02 21:33:49 crc kubenswrapper[4789]: E0202 21:33:49.307889 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb2feeb5-33bb-403a-8e99-a4c544c69c0c" containerName="extract-content" Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.307898 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb2feeb5-33bb-403a-8e99-a4c544c69c0c" containerName="extract-content" Feb 02 21:33:49 crc kubenswrapper[4789]: E0202 21:33:49.307909 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb2feeb5-33bb-403a-8e99-a4c544c69c0c" containerName="registry-server" Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.307917 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb2feeb5-33bb-403a-8e99-a4c544c69c0c" containerName="registry-server" Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.308057 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb2feeb5-33bb-403a-8e99-a4c544c69c0c" containerName="registry-server" Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.309138 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdhf2" Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.313895 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.323649 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdhf2"] Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.354330 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dmh8\" (UniqueName: \"kubernetes.io/projected/925ff462-a22d-4c9d-8ed4-70c08866fe64-kube-api-access-8dmh8\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdhf2\" (UID: \"925ff462-a22d-4c9d-8ed4-70c08866fe64\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdhf2" Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.354410 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/925ff462-a22d-4c9d-8ed4-70c08866fe64-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdhf2\" (UID: \"925ff462-a22d-4c9d-8ed4-70c08866fe64\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdhf2" Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.354448 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/925ff462-a22d-4c9d-8ed4-70c08866fe64-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdhf2\" (UID: \"925ff462-a22d-4c9d-8ed4-70c08866fe64\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdhf2" Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.456395 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/925ff462-a22d-4c9d-8ed4-70c08866fe64-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdhf2\" (UID: \"925ff462-a22d-4c9d-8ed4-70c08866fe64\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdhf2" Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.456453 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/925ff462-a22d-4c9d-8ed4-70c08866fe64-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdhf2\" (UID: \"925ff462-a22d-4c9d-8ed4-70c08866fe64\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdhf2" Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.456626 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dmh8\" (UniqueName: \"kubernetes.io/projected/925ff462-a22d-4c9d-8ed4-70c08866fe64-kube-api-access-8dmh8\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdhf2\" (UID: \"925ff462-a22d-4c9d-8ed4-70c08866fe64\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdhf2" Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.457698 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/925ff462-a22d-4c9d-8ed4-70c08866fe64-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdhf2\" (UID: \"925ff462-a22d-4c9d-8ed4-70c08866fe64\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdhf2" Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.457858 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/925ff462-a22d-4c9d-8ed4-70c08866fe64-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdhf2\" (UID: \"925ff462-a22d-4c9d-8ed4-70c08866fe64\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdhf2" Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.478391 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dmh8\" (UniqueName: \"kubernetes.io/projected/925ff462-a22d-4c9d-8ed4-70c08866fe64-kube-api-access-8dmh8\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdhf2\" (UID: \"925ff462-a22d-4c9d-8ed4-70c08866fe64\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdhf2" Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.576905 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-x568j_a7cb5e21-a1f6-4e35-bf0d-e709b16d6994/console/0.log" Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.577039 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-x568j" Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.629902 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdhf2" Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.658988 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-console-serving-cert\") pod \"a7cb5e21-a1f6-4e35-bf0d-e709b16d6994\" (UID: \"a7cb5e21-a1f6-4e35-bf0d-e709b16d6994\") " Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.659091 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvwqc\" (UniqueName: \"kubernetes.io/projected/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-kube-api-access-zvwqc\") pod \"a7cb5e21-a1f6-4e35-bf0d-e709b16d6994\" (UID: \"a7cb5e21-a1f6-4e35-bf0d-e709b16d6994\") " Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.659150 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-console-config\") pod \"a7cb5e21-a1f6-4e35-bf0d-e709b16d6994\" (UID: \"a7cb5e21-a1f6-4e35-bf0d-e709b16d6994\") " Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.659233 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-console-oauth-config\") pod \"a7cb5e21-a1f6-4e35-bf0d-e709b16d6994\" (UID: \"a7cb5e21-a1f6-4e35-bf0d-e709b16d6994\") " Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.659315 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-trusted-ca-bundle\") pod \"a7cb5e21-a1f6-4e35-bf0d-e709b16d6994\" (UID: \"a7cb5e21-a1f6-4e35-bf0d-e709b16d6994\") " Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.659359 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-oauth-serving-cert\") pod \"a7cb5e21-a1f6-4e35-bf0d-e709b16d6994\" (UID: \"a7cb5e21-a1f6-4e35-bf0d-e709b16d6994\") " Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.659391 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-service-ca\") pod \"a7cb5e21-a1f6-4e35-bf0d-e709b16d6994\" (UID: \"a7cb5e21-a1f6-4e35-bf0d-e709b16d6994\") " Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.660219 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-service-ca" (OuterVolumeSpecName: "service-ca") pod "a7cb5e21-a1f6-4e35-bf0d-e709b16d6994" (UID: "a7cb5e21-a1f6-4e35-bf0d-e709b16d6994"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.660657 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-console-config" (OuterVolumeSpecName: "console-config") pod "a7cb5e21-a1f6-4e35-bf0d-e709b16d6994" (UID: "a7cb5e21-a1f6-4e35-bf0d-e709b16d6994"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.661149 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "a7cb5e21-a1f6-4e35-bf0d-e709b16d6994" (UID: "a7cb5e21-a1f6-4e35-bf0d-e709b16d6994"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.661331 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "a7cb5e21-a1f6-4e35-bf0d-e709b16d6994" (UID: "a7cb5e21-a1f6-4e35-bf0d-e709b16d6994"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.664795 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "a7cb5e21-a1f6-4e35-bf0d-e709b16d6994" (UID: "a7cb5e21-a1f6-4e35-bf0d-e709b16d6994"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.665887 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-kube-api-access-zvwqc" (OuterVolumeSpecName: "kube-api-access-zvwqc") pod "a7cb5e21-a1f6-4e35-bf0d-e709b16d6994" (UID: "a7cb5e21-a1f6-4e35-bf0d-e709b16d6994"). InnerVolumeSpecName "kube-api-access-zvwqc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.667139 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "a7cb5e21-a1f6-4e35-bf0d-e709b16d6994" (UID: "a7cb5e21-a1f6-4e35-bf0d-e709b16d6994"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.760202 4789 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-console-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.760230 4789 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.760242 4789 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.760253 4789 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.760264 4789 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.760275 4789 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.760287 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvwqc\" (UniqueName: \"kubernetes.io/projected/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994-kube-api-access-zvwqc\") on node \"crc\" DevicePath \"\"" Feb 02 21:33:49 crc kubenswrapper[4789]: I0202 21:33:49.870175 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdhf2"] Feb 02 21:33:50 crc kubenswrapper[4789]: I0202 21:33:50.162032 4789 generic.go:334] "Generic (PLEG): container finished" podID="925ff462-a22d-4c9d-8ed4-70c08866fe64" containerID="edd9ffae9135bf320b4d9c0a09dcfb61216acd772abbbb914c1553a1159efe9f" exitCode=0 Feb 02 21:33:50 crc kubenswrapper[4789]: I0202 21:33:50.162139 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdhf2" event={"ID":"925ff462-a22d-4c9d-8ed4-70c08866fe64","Type":"ContainerDied","Data":"edd9ffae9135bf320b4d9c0a09dcfb61216acd772abbbb914c1553a1159efe9f"} Feb 02 21:33:50 crc kubenswrapper[4789]: I0202 21:33:50.162550 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdhf2" 
event={"ID":"925ff462-a22d-4c9d-8ed4-70c08866fe64","Type":"ContainerStarted","Data":"ce24df8d587f01efaf8dacda0b8aba2394407b34d2f9d420c3bea116211858a5"} Feb 02 21:33:50 crc kubenswrapper[4789]: I0202 21:33:50.166766 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-x568j_a7cb5e21-a1f6-4e35-bf0d-e709b16d6994/console/0.log" Feb 02 21:33:50 crc kubenswrapper[4789]: I0202 21:33:50.166828 4789 generic.go:334] "Generic (PLEG): container finished" podID="a7cb5e21-a1f6-4e35-bf0d-e709b16d6994" containerID="10f1ae5569859cdc38f9f0c0e54b8d505db332ebcf22f76d45adda1a3b1a00a7" exitCode=2 Feb 02 21:33:50 crc kubenswrapper[4789]: I0202 21:33:50.166866 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x568j" event={"ID":"a7cb5e21-a1f6-4e35-bf0d-e709b16d6994","Type":"ContainerDied","Data":"10f1ae5569859cdc38f9f0c0e54b8d505db332ebcf22f76d45adda1a3b1a00a7"} Feb 02 21:33:50 crc kubenswrapper[4789]: I0202 21:33:50.166907 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x568j" event={"ID":"a7cb5e21-a1f6-4e35-bf0d-e709b16d6994","Type":"ContainerDied","Data":"e0caa4a10dc5d015815b09ecc6643d246e908bacd0be7e52ef25691854bb187e"} Feb 02 21:33:50 crc kubenswrapper[4789]: I0202 21:33:50.166927 4789 scope.go:117] "RemoveContainer" containerID="10f1ae5569859cdc38f9f0c0e54b8d505db332ebcf22f76d45adda1a3b1a00a7" Feb 02 21:33:50 crc kubenswrapper[4789]: I0202 21:33:50.167291 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-x568j" Feb 02 21:33:50 crc kubenswrapper[4789]: I0202 21:33:50.212225 4789 scope.go:117] "RemoveContainer" containerID="10f1ae5569859cdc38f9f0c0e54b8d505db332ebcf22f76d45adda1a3b1a00a7" Feb 02 21:33:50 crc kubenswrapper[4789]: E0202 21:33:50.212649 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10f1ae5569859cdc38f9f0c0e54b8d505db332ebcf22f76d45adda1a3b1a00a7\": container with ID starting with 10f1ae5569859cdc38f9f0c0e54b8d505db332ebcf22f76d45adda1a3b1a00a7 not found: ID does not exist" containerID="10f1ae5569859cdc38f9f0c0e54b8d505db332ebcf22f76d45adda1a3b1a00a7" Feb 02 21:33:50 crc kubenswrapper[4789]: I0202 21:33:50.212681 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10f1ae5569859cdc38f9f0c0e54b8d505db332ebcf22f76d45adda1a3b1a00a7"} err="failed to get container status \"10f1ae5569859cdc38f9f0c0e54b8d505db332ebcf22f76d45adda1a3b1a00a7\": rpc error: code = NotFound desc = could not find container \"10f1ae5569859cdc38f9f0c0e54b8d505db332ebcf22f76d45adda1a3b1a00a7\": container with ID starting with 10f1ae5569859cdc38f9f0c0e54b8d505db332ebcf22f76d45adda1a3b1a00a7 not found: ID does not exist" Feb 02 21:33:50 crc kubenswrapper[4789]: I0202 21:33:50.223636 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-x568j"] Feb 02 21:33:50 crc kubenswrapper[4789]: I0202 21:33:50.232344 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-x568j"] Feb 02 21:33:50 crc kubenswrapper[4789]: I0202 21:33:50.439043 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7cb5e21-a1f6-4e35-bf0d-e709b16d6994" path="/var/lib/kubelet/pods/a7cb5e21-a1f6-4e35-bf0d-e709b16d6994/volumes" Feb 02 21:33:52 crc kubenswrapper[4789]: I0202 21:33:52.188038 4789 generic.go:334] "Generic (PLEG): 
container finished" podID="925ff462-a22d-4c9d-8ed4-70c08866fe64" containerID="121007d8d8a2edcce71d9c76cc483016e2dba31c2533864379f324b22defe90b" exitCode=0 Feb 02 21:33:52 crc kubenswrapper[4789]: I0202 21:33:52.188199 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdhf2" event={"ID":"925ff462-a22d-4c9d-8ed4-70c08866fe64","Type":"ContainerDied","Data":"121007d8d8a2edcce71d9c76cc483016e2dba31c2533864379f324b22defe90b"} Feb 02 21:33:52 crc kubenswrapper[4789]: I0202 21:33:52.842394 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 21:33:52 crc kubenswrapper[4789]: I0202 21:33:52.842804 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 21:33:52 crc kubenswrapper[4789]: I0202 21:33:52.842871 4789 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" Feb 02 21:33:52 crc kubenswrapper[4789]: I0202 21:33:52.843727 4789 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1ec54d6d2f9dd12ba4581ba8d6bcba6253f115c225597c28969e0527a84fb4af"} pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 21:33:52 crc kubenswrapper[4789]: I0202 21:33:52.843846 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" containerID="cri-o://1ec54d6d2f9dd12ba4581ba8d6bcba6253f115c225597c28969e0527a84fb4af" gracePeriod=600 Feb 02 21:33:53 crc kubenswrapper[4789]: I0202 21:33:53.197734 4789 generic.go:334] "Generic (PLEG): container finished" podID="925ff462-a22d-4c9d-8ed4-70c08866fe64" containerID="4e2cf654ed1b07a704e5aea2b73e4fd81036c4f285d1b9084ea1241f50273925" exitCode=0 Feb 02 21:33:53 crc kubenswrapper[4789]: I0202 21:33:53.197831 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdhf2" event={"ID":"925ff462-a22d-4c9d-8ed4-70c08866fe64","Type":"ContainerDied","Data":"4e2cf654ed1b07a704e5aea2b73e4fd81036c4f285d1b9084ea1241f50273925"} Feb 02 21:33:53 crc kubenswrapper[4789]: I0202 21:33:53.201859 4789 generic.go:334] "Generic (PLEG): container finished" podID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerID="1ec54d6d2f9dd12ba4581ba8d6bcba6253f115c225597c28969e0527a84fb4af" exitCode=0 Feb 02 21:33:53 crc kubenswrapper[4789]: I0202 21:33:53.201895 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" event={"ID":"bdf018b4-1451-4d37-be6e-05802b67c73e","Type":"ContainerDied","Data":"1ec54d6d2f9dd12ba4581ba8d6bcba6253f115c225597c28969e0527a84fb4af"} Feb 02 21:33:53 crc kubenswrapper[4789]: I0202 
21:33:53.202092 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" event={"ID":"bdf018b4-1451-4d37-be6e-05802b67c73e","Type":"ContainerStarted","Data":"731cbec71f64a4bdb77752b4fd336ae74457ae3978707682a716375d9f8b1609"} Feb 02 21:33:53 crc kubenswrapper[4789]: I0202 21:33:53.202178 4789 scope.go:117] "RemoveContainer" containerID="56513053a4eff1a4ef3d67ad266c32d8d1fc9194a1d2f87f6627abedca5761d0" Feb 02 21:33:54 crc kubenswrapper[4789]: I0202 21:33:54.476955 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdhf2" Feb 02 21:33:54 crc kubenswrapper[4789]: I0202 21:33:54.641343 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dmh8\" (UniqueName: \"kubernetes.io/projected/925ff462-a22d-4c9d-8ed4-70c08866fe64-kube-api-access-8dmh8\") pod \"925ff462-a22d-4c9d-8ed4-70c08866fe64\" (UID: \"925ff462-a22d-4c9d-8ed4-70c08866fe64\") " Feb 02 21:33:54 crc kubenswrapper[4789]: I0202 21:33:54.641839 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/925ff462-a22d-4c9d-8ed4-70c08866fe64-bundle\") pod \"925ff462-a22d-4c9d-8ed4-70c08866fe64\" (UID: \"925ff462-a22d-4c9d-8ed4-70c08866fe64\") " Feb 02 21:33:54 crc kubenswrapper[4789]: I0202 21:33:54.641963 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/925ff462-a22d-4c9d-8ed4-70c08866fe64-util\") pod \"925ff462-a22d-4c9d-8ed4-70c08866fe64\" (UID: \"925ff462-a22d-4c9d-8ed4-70c08866fe64\") " Feb 02 21:33:54 crc kubenswrapper[4789]: I0202 21:33:54.643266 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/925ff462-a22d-4c9d-8ed4-70c08866fe64-bundle" (OuterVolumeSpecName: "bundle") pod "925ff462-a22d-4c9d-8ed4-70c08866fe64" (UID: "925ff462-a22d-4c9d-8ed4-70c08866fe64"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:33:54 crc kubenswrapper[4789]: I0202 21:33:54.646598 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925ff462-a22d-4c9d-8ed4-70c08866fe64-kube-api-access-8dmh8" (OuterVolumeSpecName: "kube-api-access-8dmh8") pod "925ff462-a22d-4c9d-8ed4-70c08866fe64" (UID: "925ff462-a22d-4c9d-8ed4-70c08866fe64"). InnerVolumeSpecName "kube-api-access-8dmh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:33:54 crc kubenswrapper[4789]: I0202 21:33:54.662060 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/925ff462-a22d-4c9d-8ed4-70c08866fe64-util" (OuterVolumeSpecName: "util") pod "925ff462-a22d-4c9d-8ed4-70c08866fe64" (UID: "925ff462-a22d-4c9d-8ed4-70c08866fe64"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:33:54 crc kubenswrapper[4789]: I0202 21:33:54.742978 4789 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/925ff462-a22d-4c9d-8ed4-70c08866fe64-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:33:54 crc kubenswrapper[4789]: I0202 21:33:54.743020 4789 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/925ff462-a22d-4c9d-8ed4-70c08866fe64-util\") on node \"crc\" DevicePath \"\"" Feb 02 21:33:54 crc kubenswrapper[4789]: I0202 21:33:54.743032 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dmh8\" (UniqueName: \"kubernetes.io/projected/925ff462-a22d-4c9d-8ed4-70c08866fe64-kube-api-access-8dmh8\") on node \"crc\" DevicePath \"\"" Feb 02 21:33:55 crc kubenswrapper[4789]: I0202 21:33:55.224091 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdhf2" event={"ID":"925ff462-a22d-4c9d-8ed4-70c08866fe64","Type":"ContainerDied","Data":"ce24df8d587f01efaf8dacda0b8aba2394407b34d2f9d420c3bea116211858a5"} Feb 02 21:33:55 crc kubenswrapper[4789]: I0202 21:33:55.224150 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce24df8d587f01efaf8dacda0b8aba2394407b34d2f9d420c3bea116211858a5" Feb 02 21:33:55 crc kubenswrapper[4789]: I0202 21:33:55.224195 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdhf2" Feb 02 21:34:00 crc kubenswrapper[4789]: I0202 21:34:00.847094 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wgs5x"] Feb 02 21:34:00 crc kubenswrapper[4789]: E0202 21:34:00.847691 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7cb5e21-a1f6-4e35-bf0d-e709b16d6994" containerName="console" Feb 02 21:34:00 crc kubenswrapper[4789]: I0202 21:34:00.847701 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7cb5e21-a1f6-4e35-bf0d-e709b16d6994" containerName="console" Feb 02 21:34:00 crc kubenswrapper[4789]: E0202 21:34:00.847712 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="925ff462-a22d-4c9d-8ed4-70c08866fe64" containerName="pull" Feb 02 21:34:00 crc kubenswrapper[4789]: I0202 21:34:00.847719 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="925ff462-a22d-4c9d-8ed4-70c08866fe64" containerName="pull" Feb 02 21:34:00 crc kubenswrapper[4789]: E0202 21:34:00.847733 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="925ff462-a22d-4c9d-8ed4-70c08866fe64" containerName="util" Feb 02 21:34:00 crc kubenswrapper[4789]: I0202 21:34:00.847739 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="925ff462-a22d-4c9d-8ed4-70c08866fe64" containerName="util" Feb 02 21:34:00 crc kubenswrapper[4789]: E0202 21:34:00.847754 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="925ff462-a22d-4c9d-8ed4-70c08866fe64" containerName="extract" Feb 02 21:34:00 crc kubenswrapper[4789]: I0202 21:34:00.847759 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="925ff462-a22d-4c9d-8ed4-70c08866fe64" containerName="extract" Feb 02 21:34:00 crc kubenswrapper[4789]: I0202 21:34:00.847844 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="925ff462-a22d-4c9d-8ed4-70c08866fe64" containerName="extract" Feb 02 21:34:00 crc 
kubenswrapper[4789]: I0202 21:34:00.847859 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7cb5e21-a1f6-4e35-bf0d-e709b16d6994" containerName="console" Feb 02 21:34:00 crc kubenswrapper[4789]: I0202 21:34:00.848514 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wgs5x" Feb 02 21:34:00 crc kubenswrapper[4789]: I0202 21:34:00.874662 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wgs5x"] Feb 02 21:34:00 crc kubenswrapper[4789]: I0202 21:34:00.924507 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0141f80c-f4b5-4aaa-b809-b09f73bd46a7-catalog-content\") pod \"redhat-marketplace-wgs5x\" (UID: \"0141f80c-f4b5-4aaa-b809-b09f73bd46a7\") " pod="openshift-marketplace/redhat-marketplace-wgs5x" Feb 02 21:34:00 crc kubenswrapper[4789]: I0202 21:34:00.924654 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0141f80c-f4b5-4aaa-b809-b09f73bd46a7-utilities\") pod \"redhat-marketplace-wgs5x\" (UID: \"0141f80c-f4b5-4aaa-b809-b09f73bd46a7\") " pod="openshift-marketplace/redhat-marketplace-wgs5x" Feb 02 21:34:00 crc kubenswrapper[4789]: I0202 21:34:00.924814 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxf4m\" (UniqueName: \"kubernetes.io/projected/0141f80c-f4b5-4aaa-b809-b09f73bd46a7-kube-api-access-zxf4m\") pod \"redhat-marketplace-wgs5x\" (UID: \"0141f80c-f4b5-4aaa-b809-b09f73bd46a7\") " pod="openshift-marketplace/redhat-marketplace-wgs5x" Feb 02 21:34:01 crc kubenswrapper[4789]: I0202 21:34:01.026264 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0141f80c-f4b5-4aaa-b809-b09f73bd46a7-catalog-content\") pod \"redhat-marketplace-wgs5x\" (UID: \"0141f80c-f4b5-4aaa-b809-b09f73bd46a7\") " pod="openshift-marketplace/redhat-marketplace-wgs5x" Feb 02 21:34:01 crc kubenswrapper[4789]: I0202 21:34:01.026345 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0141f80c-f4b5-4aaa-b809-b09f73bd46a7-utilities\") pod \"redhat-marketplace-wgs5x\" (UID: \"0141f80c-f4b5-4aaa-b809-b09f73bd46a7\") " pod="openshift-marketplace/redhat-marketplace-wgs5x" Feb 02 21:34:01 crc kubenswrapper[4789]: I0202 21:34:01.026443 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxf4m\" (UniqueName: \"kubernetes.io/projected/0141f80c-f4b5-4aaa-b809-b09f73bd46a7-kube-api-access-zxf4m\") pod \"redhat-marketplace-wgs5x\" (UID: \"0141f80c-f4b5-4aaa-b809-b09f73bd46a7\") " pod="openshift-marketplace/redhat-marketplace-wgs5x" Feb 02 21:34:01 crc kubenswrapper[4789]: I0202 21:34:01.026909 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0141f80c-f4b5-4aaa-b809-b09f73bd46a7-utilities\") pod \"redhat-marketplace-wgs5x\" (UID: \"0141f80c-f4b5-4aaa-b809-b09f73bd46a7\") " pod="openshift-marketplace/redhat-marketplace-wgs5x" Feb 02 21:34:01 crc kubenswrapper[4789]: I0202 21:34:01.027084 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0141f80c-f4b5-4aaa-b809-b09f73bd46a7-catalog-content\") pod \"redhat-marketplace-wgs5x\" (UID: \"0141f80c-f4b5-4aaa-b809-b09f73bd46a7\") " pod="openshift-marketplace/redhat-marketplace-wgs5x" Feb 02 21:34:01 crc kubenswrapper[4789]: I0202 21:34:01.047246 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxf4m\" (UniqueName: \"kubernetes.io/projected/0141f80c-f4b5-4aaa-b809-b09f73bd46a7-kube-api-access-zxf4m\") pod \"redhat-marketplace-wgs5x\" (UID: \"0141f80c-f4b5-4aaa-b809-b09f73bd46a7\") " pod="openshift-marketplace/redhat-marketplace-wgs5x" Feb 02 21:34:01 crc kubenswrapper[4789]: I0202 21:34:01.168207 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wgs5x" Feb 02 21:34:01 crc kubenswrapper[4789]: I0202 21:34:01.396749 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wgs5x"] Feb 02 21:34:02 crc kubenswrapper[4789]: I0202 21:34:02.293685 4789 generic.go:334] "Generic (PLEG): container finished" podID="0141f80c-f4b5-4aaa-b809-b09f73bd46a7" containerID="7f8dff4e80554df2382474022df02424fb70308cc899fda32a169b3132240f11" exitCode=0 Feb 02 21:34:02 crc kubenswrapper[4789]: I0202 21:34:02.293782 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wgs5x" event={"ID":"0141f80c-f4b5-4aaa-b809-b09f73bd46a7","Type":"ContainerDied","Data":"7f8dff4e80554df2382474022df02424fb70308cc899fda32a169b3132240f11"} Feb 02 21:34:02 crc kubenswrapper[4789]: I0202 21:34:02.293989 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wgs5x" event={"ID":"0141f80c-f4b5-4aaa-b809-b09f73bd46a7","Type":"ContainerStarted","Data":"e260be359b0fb2acb995d573df632e5b6da964b95146e847024dd7cde03b1837"} Feb 02 21:34:03 crc kubenswrapper[4789]: I0202 21:34:03.299898 4789 generic.go:334] "Generic (PLEG): container finished" podID="0141f80c-f4b5-4aaa-b809-b09f73bd46a7" containerID="c2c4920a7560998308149db2662754d6e0b92dc9b08404601f0cfb0e5cb65fbc" exitCode=0 Feb 02 21:34:03 crc kubenswrapper[4789]: I0202 21:34:03.299995 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wgs5x" event={"ID":"0141f80c-f4b5-4aaa-b809-b09f73bd46a7","Type":"ContainerDied","Data":"c2c4920a7560998308149db2662754d6e0b92dc9b08404601f0cfb0e5cb65fbc"} Feb 02 21:34:04 crc kubenswrapper[4789]: I0202 21:34:04.308949 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wgs5x" event={"ID":"0141f80c-f4b5-4aaa-b809-b09f73bd46a7","Type":"ContainerStarted","Data":"311c787fd19872d79beb3047aa542ea0642d6d4809502e504d1b3f1875228e0a"} Feb 02 21:34:04 crc kubenswrapper[4789]: I0202 21:34:04.333144 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-65474b67b5-ccmdm"] Feb 02 21:34:04 crc kubenswrapper[4789]: I0202 21:34:04.334028 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-65474b67b5-ccmdm" Feb 02 21:34:04 crc kubenswrapper[4789]: I0202 21:34:04.335917 4789 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-s2qzj" Feb 02 21:34:04 crc kubenswrapper[4789]: I0202 21:34:04.336116 4789 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 02 21:34:04 crc kubenswrapper[4789]: I0202 21:34:04.337454 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 02 21:34:04 crc kubenswrapper[4789]: I0202 21:34:04.337464 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 02 21:34:04 crc kubenswrapper[4789]: I0202 21:34:04.337757 4789 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 02 21:34:04 crc kubenswrapper[4789]: I0202 21:34:04.352212 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wgs5x" podStartSLOduration=2.960817486 podStartE2EDuration="4.352195036s" podCreationTimestamp="2026-02-02 21:34:00 +0000 UTC" firstStartedPulling="2026-02-02 21:34:02.295798377 +0000 UTC m=+862.590823396" lastFinishedPulling="2026-02-02 21:34:03.687175887 +0000 UTC m=+863.982200946" observedRunningTime="2026-02-02 21:34:04.33323826 +0000 UTC m=+864.628263299" watchObservedRunningTime="2026-02-02 21:34:04.352195036 +0000 UTC m=+864.647220055" Feb 02 21:34:04 crc kubenswrapper[4789]: I0202 21:34:04.355769 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-65474b67b5-ccmdm"] Feb 02 21:34:04 crc kubenswrapper[4789]: I0202 21:34:04.465075 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a6474578-8598-4e67-b846-2a7bd085dd88-webhook-cert\") pod \"metallb-operator-controller-manager-65474b67b5-ccmdm\" (UID: \"a6474578-8598-4e67-b846-2a7bd085dd88\") " pod="metallb-system/metallb-operator-controller-manager-65474b67b5-ccmdm" Feb 02 21:34:04 crc kubenswrapper[4789]: I0202 21:34:04.465181 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlz8b\" (UniqueName: \"kubernetes.io/projected/a6474578-8598-4e67-b846-2a7bd085dd88-kube-api-access-mlz8b\") pod \"metallb-operator-controller-manager-65474b67b5-ccmdm\" (UID: \"a6474578-8598-4e67-b846-2a7bd085dd88\") " pod="metallb-system/metallb-operator-controller-manager-65474b67b5-ccmdm" Feb 02 21:34:04 crc kubenswrapper[4789]: I0202 21:34:04.465213 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a6474578-8598-4e67-b846-2a7bd085dd88-apiservice-cert\") pod \"metallb-operator-controller-manager-65474b67b5-ccmdm\" (UID: \"a6474578-8598-4e67-b846-2a7bd085dd88\") " pod="metallb-system/metallb-operator-controller-manager-65474b67b5-ccmdm" Feb 02 21:34:04 crc kubenswrapper[4789]: I0202 21:34:04.566361 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a6474578-8598-4e67-b846-2a7bd085dd88-apiservice-cert\") pod \"metallb-operator-controller-manager-65474b67b5-ccmdm\" (UID: 
\"a6474578-8598-4e67-b846-2a7bd085dd88\") " pod="metallb-system/metallb-operator-controller-manager-65474b67b5-ccmdm" Feb 02 21:34:04 crc kubenswrapper[4789]: I0202 21:34:04.566466 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a6474578-8598-4e67-b846-2a7bd085dd88-webhook-cert\") pod \"metallb-operator-controller-manager-65474b67b5-ccmdm\" (UID: \"a6474578-8598-4e67-b846-2a7bd085dd88\") " pod="metallb-system/metallb-operator-controller-manager-65474b67b5-ccmdm" Feb 02 21:34:04 crc kubenswrapper[4789]: I0202 21:34:04.567387 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlz8b\" (UniqueName: \"kubernetes.io/projected/a6474578-8598-4e67-b846-2a7bd085dd88-kube-api-access-mlz8b\") pod \"metallb-operator-controller-manager-65474b67b5-ccmdm\" (UID: \"a6474578-8598-4e67-b846-2a7bd085dd88\") " pod="metallb-system/metallb-operator-controller-manager-65474b67b5-ccmdm" Feb 02 21:34:04 crc kubenswrapper[4789]: I0202 21:34:04.572266 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a6474578-8598-4e67-b846-2a7bd085dd88-apiservice-cert\") pod \"metallb-operator-controller-manager-65474b67b5-ccmdm\" (UID: \"a6474578-8598-4e67-b846-2a7bd085dd88\") " pod="metallb-system/metallb-operator-controller-manager-65474b67b5-ccmdm" Feb 02 21:34:04 crc kubenswrapper[4789]: I0202 21:34:04.572279 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a6474578-8598-4e67-b846-2a7bd085dd88-webhook-cert\") pod \"metallb-operator-controller-manager-65474b67b5-ccmdm\" (UID: \"a6474578-8598-4e67-b846-2a7bd085dd88\") " pod="metallb-system/metallb-operator-controller-manager-65474b67b5-ccmdm" Feb 02 21:34:04 crc kubenswrapper[4789]: I0202 21:34:04.586226 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlz8b\" (UniqueName: \"kubernetes.io/projected/a6474578-8598-4e67-b846-2a7bd085dd88-kube-api-access-mlz8b\") pod \"metallb-operator-controller-manager-65474b67b5-ccmdm\" (UID: \"a6474578-8598-4e67-b846-2a7bd085dd88\") " pod="metallb-system/metallb-operator-controller-manager-65474b67b5-ccmdm" Feb 02 21:34:04 crc kubenswrapper[4789]: I0202 21:34:04.647637 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-65474b67b5-ccmdm" Feb 02 21:34:04 crc kubenswrapper[4789]: I0202 21:34:04.663522 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5888cd945b-dznx4"] Feb 02 21:34:04 crc kubenswrapper[4789]: I0202 21:34:04.664208 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5888cd945b-dznx4" Feb 02 21:34:04 crc kubenswrapper[4789]: I0202 21:34:04.667265 4789 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 02 21:34:04 crc kubenswrapper[4789]: I0202 21:34:04.667365 4789 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 02 21:34:04 crc kubenswrapper[4789]: I0202 21:34:04.668092 4789 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-vf6vb" Feb 02 21:34:04 crc kubenswrapper[4789]: I0202 21:34:04.683013 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5888cd945b-dznx4"] Feb 02 21:34:04 crc kubenswrapper[4789]: I0202 21:34:04.769292 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ffea3a90-35b0-4962-b652-8fafa44aa5a9-webhook-cert\") pod \"metallb-operator-webhook-server-5888cd945b-dznx4\" (UID: \"ffea3a90-35b0-4962-b652-8fafa44aa5a9\") " pod="metallb-system/metallb-operator-webhook-server-5888cd945b-dznx4" Feb 02 21:34:04 crc kubenswrapper[4789]: I0202 21:34:04.769369 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdrh9\" (UniqueName: \"kubernetes.io/projected/ffea3a90-35b0-4962-b652-8fafa44aa5a9-kube-api-access-pdrh9\") pod \"metallb-operator-webhook-server-5888cd945b-dznx4\" (UID: \"ffea3a90-35b0-4962-b652-8fafa44aa5a9\") " pod="metallb-system/metallb-operator-webhook-server-5888cd945b-dznx4" Feb 02 21:34:04 crc kubenswrapper[4789]: I0202 21:34:04.769415 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ffea3a90-35b0-4962-b652-8fafa44aa5a9-apiservice-cert\") pod \"metallb-operator-webhook-server-5888cd945b-dznx4\" (UID: \"ffea3a90-35b0-4962-b652-8fafa44aa5a9\") " pod="metallb-system/metallb-operator-webhook-server-5888cd945b-dznx4" Feb 02 21:34:04 crc kubenswrapper[4789]: I0202 21:34:04.870428 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ffea3a90-35b0-4962-b652-8fafa44aa5a9-webhook-cert\") pod \"metallb-operator-webhook-server-5888cd945b-dznx4\" (UID: \"ffea3a90-35b0-4962-b652-8fafa44aa5a9\") " pod="metallb-system/metallb-operator-webhook-server-5888cd945b-dznx4" Feb 02 21:34:04 crc kubenswrapper[4789]: I0202 21:34:04.870852 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdrh9\" (UniqueName: \"kubernetes.io/projected/ffea3a90-35b0-4962-b652-8fafa44aa5a9-kube-api-access-pdrh9\") pod \"metallb-operator-webhook-server-5888cd945b-dznx4\" (UID: \"ffea3a90-35b0-4962-b652-8fafa44aa5a9\") " pod="metallb-system/metallb-operator-webhook-server-5888cd945b-dznx4" Feb 02 21:34:04 crc kubenswrapper[4789]: I0202 21:34:04.870913 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ffea3a90-35b0-4962-b652-8fafa44aa5a9-apiservice-cert\") pod \"metallb-operator-webhook-server-5888cd945b-dznx4\" (UID: \"ffea3a90-35b0-4962-b652-8fafa44aa5a9\") " pod="metallb-system/metallb-operator-webhook-server-5888cd945b-dznx4" Feb 02 21:34:04 crc kubenswrapper[4789]: I0202 
21:34:04.876065 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ffea3a90-35b0-4962-b652-8fafa44aa5a9-apiservice-cert\") pod \"metallb-operator-webhook-server-5888cd945b-dznx4\" (UID: \"ffea3a90-35b0-4962-b652-8fafa44aa5a9\") " pod="metallb-system/metallb-operator-webhook-server-5888cd945b-dznx4" Feb 02 21:34:04 crc kubenswrapper[4789]: I0202 21:34:04.881647 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-65474b67b5-ccmdm"] Feb 02 21:34:04 crc kubenswrapper[4789]: I0202 21:34:04.898813 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ffea3a90-35b0-4962-b652-8fafa44aa5a9-webhook-cert\") pod \"metallb-operator-webhook-server-5888cd945b-dznx4\" (UID: \"ffea3a90-35b0-4962-b652-8fafa44aa5a9\") " pod="metallb-system/metallb-operator-webhook-server-5888cd945b-dznx4" Feb 02 21:34:04 crc kubenswrapper[4789]: I0202 21:34:04.906070 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdrh9\" (UniqueName: \"kubernetes.io/projected/ffea3a90-35b0-4962-b652-8fafa44aa5a9-kube-api-access-pdrh9\") pod \"metallb-operator-webhook-server-5888cd945b-dznx4\" (UID: \"ffea3a90-35b0-4962-b652-8fafa44aa5a9\") " pod="metallb-system/metallb-operator-webhook-server-5888cd945b-dznx4" Feb 02 21:34:05 crc kubenswrapper[4789]: I0202 21:34:05.022074 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5888cd945b-dznx4" Feb 02 21:34:05 crc kubenswrapper[4789]: I0202 21:34:05.221413 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5888cd945b-dznx4"] Feb 02 21:34:05 crc kubenswrapper[4789]: W0202 21:34:05.229114 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffea3a90_35b0_4962_b652_8fafa44aa5a9.slice/crio-d74ca5b56e121dbf3f36697c7acc4c9b0d2c1db4f86d2dca9f50e70a865b7b9e WatchSource:0}: Error finding container d74ca5b56e121dbf3f36697c7acc4c9b0d2c1db4f86d2dca9f50e70a865b7b9e: Status 404 returned error can't find the container with id d74ca5b56e121dbf3f36697c7acc4c9b0d2c1db4f86d2dca9f50e70a865b7b9e Feb 02 21:34:05 crc kubenswrapper[4789]: I0202 21:34:05.317751 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-65474b67b5-ccmdm" event={"ID":"a6474578-8598-4e67-b846-2a7bd085dd88","Type":"ContainerStarted","Data":"3c20417899f023dbbf9ffea55b9e132994d4f6b2fee3aa332658509843432b58"} Feb 02 21:34:05 crc kubenswrapper[4789]: I0202 21:34:05.325147 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5888cd945b-dznx4" event={"ID":"ffea3a90-35b0-4962-b652-8fafa44aa5a9","Type":"ContainerStarted","Data":"d74ca5b56e121dbf3f36697c7acc4c9b0d2c1db4f86d2dca9f50e70a865b7b9e"} Feb 02 21:34:08 crc kubenswrapper[4789]: I0202 21:34:08.340975 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-65474b67b5-ccmdm" event={"ID":"a6474578-8598-4e67-b846-2a7bd085dd88","Type":"ContainerStarted","Data":"ae8ab53314a0955df00dda144af390f9b76ac6d9f2e3967b7c7e582303402f47"} Feb 02 21:34:08 crc kubenswrapper[4789]: I0202 21:34:08.342312 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-65474b67b5-ccmdm" Feb 02 21:34:08 crc kubenswrapper[4789]: I0202 21:34:08.367209 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-65474b67b5-ccmdm" podStartSLOduration=1.712135486 podStartE2EDuration="4.36719186s" podCreationTimestamp="2026-02-02 21:34:04 +0000 UTC" firstStartedPulling="2026-02-02 21:34:04.907603268 +0000 UTC m=+865.202628277" lastFinishedPulling="2026-02-02 21:34:07.562659572 +0000 UTC m=+867.857684651" observedRunningTime="2026-02-02 21:34:08.362194279 +0000 UTC m=+868.657219298" watchObservedRunningTime="2026-02-02 21:34:08.36719186 +0000 UTC m=+868.662216879" Feb 02 21:34:10 crc kubenswrapper[4789]: I0202 21:34:10.354428 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5888cd945b-dznx4" event={"ID":"ffea3a90-35b0-4962-b652-8fafa44aa5a9","Type":"ContainerStarted","Data":"fcc29a1de9a42ee1f7e23908869df604a232d8650603d6efe9bba7873f7df66e"} Feb 02 21:34:10 crc kubenswrapper[4789]: I0202 21:34:10.380988 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5888cd945b-dznx4" podStartSLOduration=2.364221241 podStartE2EDuration="6.380966825s" podCreationTimestamp="2026-02-02 21:34:04 +0000 UTC" firstStartedPulling="2026-02-02 21:34:05.231911226 +0000 UTC m=+865.526936245" lastFinishedPulling="2026-02-02 21:34:09.24865681 +0000 UTC m=+869.543681829" observedRunningTime="2026-02-02 21:34:10.375400267 +0000 UTC m=+870.670425286" watchObservedRunningTime="2026-02-02 21:34:10.380966825 +0000 UTC m=+870.675991854" Feb 02 21:34:11 crc kubenswrapper[4789]: I0202 21:34:11.169412 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wgs5x" Feb 02 21:34:11 crc kubenswrapper[4789]: I0202 21:34:11.169753 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wgs5x" Feb 02 21:34:11 crc kubenswrapper[4789]: I0202 21:34:11.239189 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wgs5x" Feb 02 21:34:11 crc kubenswrapper[4789]: I0202 21:34:11.359689 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5888cd945b-dznx4" Feb 02 21:34:11 crc kubenswrapper[4789]: I0202 21:34:11.396200 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wgs5x" Feb 02 21:34:13 crc kubenswrapper[4789]: I0202 21:34:13.632833 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wgs5x"] Feb 02 21:34:13 crc kubenswrapper[4789]: I0202 21:34:13.633375 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wgs5x" podUID="0141f80c-f4b5-4aaa-b809-b09f73bd46a7" containerName="registry-server" containerID="cri-o://311c787fd19872d79beb3047aa542ea0642d6d4809502e504d1b3f1875228e0a" gracePeriod=2 Feb 02 21:34:14 crc kubenswrapper[4789]: I0202 21:34:14.081664 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wgs5x" Feb 02 21:34:14 crc kubenswrapper[4789]: I0202 21:34:14.224502 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxf4m\" (UniqueName: \"kubernetes.io/projected/0141f80c-f4b5-4aaa-b809-b09f73bd46a7-kube-api-access-zxf4m\") pod \"0141f80c-f4b5-4aaa-b809-b09f73bd46a7\" (UID: \"0141f80c-f4b5-4aaa-b809-b09f73bd46a7\") " Feb 02 21:34:14 crc kubenswrapper[4789]: I0202 21:34:14.224711 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0141f80c-f4b5-4aaa-b809-b09f73bd46a7-utilities\") pod \"0141f80c-f4b5-4aaa-b809-b09f73bd46a7\" (UID: \"0141f80c-f4b5-4aaa-b809-b09f73bd46a7\") " Feb 02 21:34:14 crc kubenswrapper[4789]: I0202 21:34:14.224885 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0141f80c-f4b5-4aaa-b809-b09f73bd46a7-catalog-content\") pod \"0141f80c-f4b5-4aaa-b809-b09f73bd46a7\" (UID: \"0141f80c-f4b5-4aaa-b809-b09f73bd46a7\") " Feb 02 21:34:14 crc kubenswrapper[4789]: I0202 21:34:14.225458 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0141f80c-f4b5-4aaa-b809-b09f73bd46a7-utilities" (OuterVolumeSpecName: "utilities") pod "0141f80c-f4b5-4aaa-b809-b09f73bd46a7" (UID: "0141f80c-f4b5-4aaa-b809-b09f73bd46a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:34:14 crc kubenswrapper[4789]: I0202 21:34:14.230119 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0141f80c-f4b5-4aaa-b809-b09f73bd46a7-kube-api-access-zxf4m" (OuterVolumeSpecName: "kube-api-access-zxf4m") pod "0141f80c-f4b5-4aaa-b809-b09f73bd46a7" (UID: "0141f80c-f4b5-4aaa-b809-b09f73bd46a7"). InnerVolumeSpecName "kube-api-access-zxf4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:34:14 crc kubenswrapper[4789]: I0202 21:34:14.245258 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0141f80c-f4b5-4aaa-b809-b09f73bd46a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0141f80c-f4b5-4aaa-b809-b09f73bd46a7" (UID: "0141f80c-f4b5-4aaa-b809-b09f73bd46a7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:34:14 crc kubenswrapper[4789]: I0202 21:34:14.326425 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxf4m\" (UniqueName: \"kubernetes.io/projected/0141f80c-f4b5-4aaa-b809-b09f73bd46a7-kube-api-access-zxf4m\") on node \"crc\" DevicePath \"\"" Feb 02 21:34:14 crc kubenswrapper[4789]: I0202 21:34:14.326468 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0141f80c-f4b5-4aaa-b809-b09f73bd46a7-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 21:34:14 crc kubenswrapper[4789]: I0202 21:34:14.326482 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0141f80c-f4b5-4aaa-b809-b09f73bd46a7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 21:34:14 crc kubenswrapper[4789]: I0202 21:34:14.380436 4789 generic.go:334] "Generic (PLEG): container finished" podID="0141f80c-f4b5-4aaa-b809-b09f73bd46a7" containerID="311c787fd19872d79beb3047aa542ea0642d6d4809502e504d1b3f1875228e0a" exitCode=0 Feb 02 21:34:14 crc kubenswrapper[4789]: I0202 21:34:14.380500 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wgs5x" Feb 02 21:34:14 crc kubenswrapper[4789]: I0202 21:34:14.380502 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wgs5x" event={"ID":"0141f80c-f4b5-4aaa-b809-b09f73bd46a7","Type":"ContainerDied","Data":"311c787fd19872d79beb3047aa542ea0642d6d4809502e504d1b3f1875228e0a"} Feb 02 21:34:14 crc kubenswrapper[4789]: I0202 21:34:14.380652 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wgs5x" event={"ID":"0141f80c-f4b5-4aaa-b809-b09f73bd46a7","Type":"ContainerDied","Data":"e260be359b0fb2acb995d573df632e5b6da964b95146e847024dd7cde03b1837"} Feb 02 21:34:14 crc kubenswrapper[4789]: I0202 21:34:14.380698 4789 scope.go:117] "RemoveContainer" containerID="311c787fd19872d79beb3047aa542ea0642d6d4809502e504d1b3f1875228e0a" Feb 02 21:34:14 crc kubenswrapper[4789]: I0202 21:34:14.403222 4789 scope.go:117] "RemoveContainer" containerID="c2c4920a7560998308149db2662754d6e0b92dc9b08404601f0cfb0e5cb65fbc" Feb 02 21:34:14 crc kubenswrapper[4789]: I0202 21:34:14.418345 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wgs5x"] Feb 02 21:34:14 crc kubenswrapper[4789]: I0202 21:34:14.427880 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wgs5x"] Feb 02 21:34:14 crc kubenswrapper[4789]: I0202 21:34:14.430662 4789 scope.go:117] "RemoveContainer" containerID="7f8dff4e80554df2382474022df02424fb70308cc899fda32a169b3132240f11" Feb 02 21:34:14 crc kubenswrapper[4789]: I0202 21:34:14.444377 4789 scope.go:117] "RemoveContainer" containerID="311c787fd19872d79beb3047aa542ea0642d6d4809502e504d1b3f1875228e0a" Feb 02 21:34:14 crc kubenswrapper[4789]: E0202 21:34:14.444829 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"311c787fd19872d79beb3047aa542ea0642d6d4809502e504d1b3f1875228e0a\": container with ID starting with 311c787fd19872d79beb3047aa542ea0642d6d4809502e504d1b3f1875228e0a not found: ID does not exist" containerID="311c787fd19872d79beb3047aa542ea0642d6d4809502e504d1b3f1875228e0a" Feb 02 21:34:14 crc kubenswrapper[4789]: I0202 21:34:14.444883 4789 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"311c787fd19872d79beb3047aa542ea0642d6d4809502e504d1b3f1875228e0a"} err="failed to get container status \"311c787fd19872d79beb3047aa542ea0642d6d4809502e504d1b3f1875228e0a\": rpc error: code = NotFound desc = could not find container \"311c787fd19872d79beb3047aa542ea0642d6d4809502e504d1b3f1875228e0a\": container with ID starting with 311c787fd19872d79beb3047aa542ea0642d6d4809502e504d1b3f1875228e0a not found: ID does not exist" Feb 02 21:34:14 crc kubenswrapper[4789]: I0202 21:34:14.444919 4789 scope.go:117] "RemoveContainer" containerID="c2c4920a7560998308149db2662754d6e0b92dc9b08404601f0cfb0e5cb65fbc" Feb 02 21:34:14 crc kubenswrapper[4789]: E0202 21:34:14.445642 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2c4920a7560998308149db2662754d6e0b92dc9b08404601f0cfb0e5cb65fbc\": container with ID starting with c2c4920a7560998308149db2662754d6e0b92dc9b08404601f0cfb0e5cb65fbc not found: ID does not exist" containerID="c2c4920a7560998308149db2662754d6e0b92dc9b08404601f0cfb0e5cb65fbc" Feb 02 21:34:14 crc kubenswrapper[4789]: I0202 21:34:14.445689 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2c4920a7560998308149db2662754d6e0b92dc9b08404601f0cfb0e5cb65fbc"} err="failed to get container status \"c2c4920a7560998308149db2662754d6e0b92dc9b08404601f0cfb0e5cb65fbc\": rpc error: code = NotFound desc = could not find container \"c2c4920a7560998308149db2662754d6e0b92dc9b08404601f0cfb0e5cb65fbc\": container with ID starting with c2c4920a7560998308149db2662754d6e0b92dc9b08404601f0cfb0e5cb65fbc not found: ID does not exist" Feb 02 21:34:14 crc kubenswrapper[4789]: I0202 21:34:14.445735 4789 scope.go:117] "RemoveContainer" containerID="7f8dff4e80554df2382474022df02424fb70308cc899fda32a169b3132240f11" Feb 02 21:34:14 crc kubenswrapper[4789]: E0202 21:34:14.446163 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f8dff4e80554df2382474022df02424fb70308cc899fda32a169b3132240f11\": container with ID starting with 7f8dff4e80554df2382474022df02424fb70308cc899fda32a169b3132240f11 not found: ID does not exist" containerID="7f8dff4e80554df2382474022df02424fb70308cc899fda32a169b3132240f11" Feb 02 21:34:14 crc kubenswrapper[4789]: I0202 21:34:14.446211 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f8dff4e80554df2382474022df02424fb70308cc899fda32a169b3132240f11"} err="failed to get container status \"7f8dff4e80554df2382474022df02424fb70308cc899fda32a169b3132240f11\": rpc error: code = NotFound desc = could not find container \"7f8dff4e80554df2382474022df02424fb70308cc899fda32a169b3132240f11\": container with ID starting with 7f8dff4e80554df2382474022df02424fb70308cc899fda32a169b3132240f11 not found: ID does not exist" Feb 02 21:34:16 crc kubenswrapper[4789]: I0202 21:34:16.431174 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0141f80c-f4b5-4aaa-b809-b09f73bd46a7" path="/var/lib/kubelet/pods/0141f80c-f4b5-4aaa-b809-b09f73bd46a7/volumes" Feb 02 21:34:20 crc kubenswrapper[4789]: I0202 21:34:20.041918 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l67b8"] Feb 02 21:34:20 crc kubenswrapper[4789]: E0202 21:34:20.042509 4789 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0141f80c-f4b5-4aaa-b809-b09f73bd46a7" containerName="extract-utilities" Feb 02 21:34:20 crc kubenswrapper[4789]: I0202 21:34:20.042526 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="0141f80c-f4b5-4aaa-b809-b09f73bd46a7" containerName="extract-utilities" Feb 02 21:34:20 crc kubenswrapper[4789]: E0202 21:34:20.042550 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0141f80c-f4b5-4aaa-b809-b09f73bd46a7" containerName="extract-content" Feb 02 21:34:20 crc kubenswrapper[4789]: I0202 21:34:20.042559 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="0141f80c-f4b5-4aaa-b809-b09f73bd46a7" containerName="extract-content" Feb 02 21:34:20 crc kubenswrapper[4789]: E0202 21:34:20.042570 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0141f80c-f4b5-4aaa-b809-b09f73bd46a7" containerName="registry-server" Feb 02 21:34:20 crc kubenswrapper[4789]: I0202 21:34:20.042601 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="0141f80c-f4b5-4aaa-b809-b09f73bd46a7" containerName="registry-server" Feb 02 21:34:20 crc kubenswrapper[4789]: I0202 21:34:20.042715 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="0141f80c-f4b5-4aaa-b809-b09f73bd46a7" containerName="registry-server" Feb 02 21:34:20 crc kubenswrapper[4789]: I0202 21:34:20.043616 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l67b8" Feb 02 21:34:20 crc kubenswrapper[4789]: I0202 21:34:20.063554 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l67b8"] Feb 02 21:34:20 crc kubenswrapper[4789]: I0202 21:34:20.210408 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd1eed87-e9b2-4048-ba82-09d5e3efb6e3-utilities\") pod \"community-operators-l67b8\" (UID: \"bd1eed87-e9b2-4048-ba82-09d5e3efb6e3\") " pod="openshift-marketplace/community-operators-l67b8" Feb 02 21:34:20 crc kubenswrapper[4789]: I0202 21:34:20.210469 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlgcf\" (UniqueName: \"kubernetes.io/projected/bd1eed87-e9b2-4048-ba82-09d5e3efb6e3-kube-api-access-dlgcf\") pod \"community-operators-l67b8\" (UID: \"bd1eed87-e9b2-4048-ba82-09d5e3efb6e3\") " pod="openshift-marketplace/community-operators-l67b8" Feb 02 21:34:20 crc kubenswrapper[4789]: I0202 21:34:20.210524 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd1eed87-e9b2-4048-ba82-09d5e3efb6e3-catalog-content\") pod \"community-operators-l67b8\" (UID: \"bd1eed87-e9b2-4048-ba82-09d5e3efb6e3\") " pod="openshift-marketplace/community-operators-l67b8" Feb 02 21:34:20 crc kubenswrapper[4789]: I0202 21:34:20.311692 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlgcf\" (UniqueName: \"kubernetes.io/projected/bd1eed87-e9b2-4048-ba82-09d5e3efb6e3-kube-api-access-dlgcf\") pod \"community-operators-l67b8\" (UID: \"bd1eed87-e9b2-4048-ba82-09d5e3efb6e3\") " pod="openshift-marketplace/community-operators-l67b8" Feb 02 21:34:20 crc kubenswrapper[4789]: I0202 21:34:20.311828 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd1eed87-e9b2-4048-ba82-09d5e3efb6e3-catalog-content\") 
pod \"community-operators-l67b8\" (UID: \"bd1eed87-e9b2-4048-ba82-09d5e3efb6e3\") " pod="openshift-marketplace/community-operators-l67b8" Feb 02 21:34:20 crc kubenswrapper[4789]: I0202 21:34:20.311901 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd1eed87-e9b2-4048-ba82-09d5e3efb6e3-utilities\") pod \"community-operators-l67b8\" (UID: \"bd1eed87-e9b2-4048-ba82-09d5e3efb6e3\") " pod="openshift-marketplace/community-operators-l67b8" Feb 02 21:34:20 crc kubenswrapper[4789]: I0202 21:34:20.312647 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd1eed87-e9b2-4048-ba82-09d5e3efb6e3-utilities\") pod \"community-operators-l67b8\" (UID: \"bd1eed87-e9b2-4048-ba82-09d5e3efb6e3\") " pod="openshift-marketplace/community-operators-l67b8" Feb 02 21:34:20 crc kubenswrapper[4789]: I0202 21:34:20.312800 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd1eed87-e9b2-4048-ba82-09d5e3efb6e3-catalog-content\") pod \"community-operators-l67b8\" (UID: \"bd1eed87-e9b2-4048-ba82-09d5e3efb6e3\") " pod="openshift-marketplace/community-operators-l67b8" Feb 02 21:34:20 crc kubenswrapper[4789]: I0202 21:34:20.342864 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlgcf\" (UniqueName: \"kubernetes.io/projected/bd1eed87-e9b2-4048-ba82-09d5e3efb6e3-kube-api-access-dlgcf\") pod \"community-operators-l67b8\" (UID: \"bd1eed87-e9b2-4048-ba82-09d5e3efb6e3\") " pod="openshift-marketplace/community-operators-l67b8" Feb 02 21:34:20 crc kubenswrapper[4789]: I0202 21:34:20.362161 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l67b8" Feb 02 21:34:20 crc kubenswrapper[4789]: I0202 21:34:20.826298 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l67b8"] Feb 02 21:34:20 crc kubenswrapper[4789]: W0202 21:34:20.833988 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd1eed87_e9b2_4048_ba82_09d5e3efb6e3.slice/crio-de546c4b29546e0a1891b91e1df491f657b0179960c58cbb8e17526fed55f1bc WatchSource:0}: Error finding container de546c4b29546e0a1891b91e1df491f657b0179960c58cbb8e17526fed55f1bc: Status 404 returned error can't find the container with id de546c4b29546e0a1891b91e1df491f657b0179960c58cbb8e17526fed55f1bc Feb 02 21:34:21 crc kubenswrapper[4789]: I0202 21:34:21.426259 4789 generic.go:334] "Generic (PLEG): container finished" podID="bd1eed87-e9b2-4048-ba82-09d5e3efb6e3" containerID="f3aa783ca4987f3c51ada50eb5a2f89de2c162173add88806b0ae1a6c300e482" exitCode=0 Feb 02 21:34:21 crc kubenswrapper[4789]: I0202 21:34:21.426321 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l67b8" event={"ID":"bd1eed87-e9b2-4048-ba82-09d5e3efb6e3","Type":"ContainerDied","Data":"f3aa783ca4987f3c51ada50eb5a2f89de2c162173add88806b0ae1a6c300e482"} Feb 02 21:34:21 crc kubenswrapper[4789]: I0202 21:34:21.426378 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l67b8" event={"ID":"bd1eed87-e9b2-4048-ba82-09d5e3efb6e3","Type":"ContainerStarted","Data":"de546c4b29546e0a1891b91e1df491f657b0179960c58cbb8e17526fed55f1bc"} Feb 02 21:34:22 crc kubenswrapper[4789]: I0202 21:34:22.440235 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l67b8" event={"ID":"bd1eed87-e9b2-4048-ba82-09d5e3efb6e3","Type":"ContainerStarted","Data":"91744b2ba9d6d00c43d886ada7c797a95f49233443dc722adc750f5f32e79813"} Feb 02 21:34:22 crc kubenswrapper[4789]: E0202 21:34:22.484758 4789 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd1eed87_e9b2_4048_ba82_09d5e3efb6e3.slice/crio-91744b2ba9d6d00c43d886ada7c797a95f49233443dc722adc750f5f32e79813.scope\": RecentStats: unable to find data in memory cache]" Feb 02 21:34:23 crc kubenswrapper[4789]: I0202 21:34:23.451040 4789 generic.go:334] "Generic (PLEG): container finished" podID="bd1eed87-e9b2-4048-ba82-09d5e3efb6e3" containerID="91744b2ba9d6d00c43d886ada7c797a95f49233443dc722adc750f5f32e79813" exitCode=0 Feb 02 21:34:23 crc kubenswrapper[4789]: I0202 21:34:23.451114 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l67b8" event={"ID":"bd1eed87-e9b2-4048-ba82-09d5e3efb6e3","Type":"ContainerDied","Data":"91744b2ba9d6d00c43d886ada7c797a95f49233443dc722adc750f5f32e79813"} Feb 02 21:34:24 crc kubenswrapper[4789]: I0202 21:34:24.459906 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l67b8" event={"ID":"bd1eed87-e9b2-4048-ba82-09d5e3efb6e3","Type":"ContainerStarted","Data":"74eceb87562d5969a1c073580b2906b18894ea4cf841059047c433fe8138debc"} Feb 02 21:34:25 crc kubenswrapper[4789]: I0202 21:34:25.029413 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5888cd945b-dznx4" Feb 02 21:34:25 
Feb 02 21:34:30 crc kubenswrapper[4789]: I0202 21:34:30.363126 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-l67b8"
Feb 02 21:34:30 crc kubenswrapper[4789]: I0202 21:34:30.366512 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l67b8"
Feb 02 21:34:30 crc kubenswrapper[4789]: I0202 21:34:30.432774 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l67b8"
Feb 02 21:34:30 crc kubenswrapper[4789]: I0202 21:34:30.564711 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l67b8"
Feb 02 21:34:30 crc kubenswrapper[4789]: I0202 21:34:30.669755 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l67b8"]
Feb 02 21:34:32 crc kubenswrapper[4789]: I0202 21:34:32.513837 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-l67b8" podUID="bd1eed87-e9b2-4048-ba82-09d5e3efb6e3" containerName="registry-server" containerID="cri-o://74eceb87562d5969a1c073580b2906b18894ea4cf841059047c433fe8138debc" gracePeriod=2
Feb 02 21:34:32 crc kubenswrapper[4789]: I0202 21:34:32.922431 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l67b8"
Feb 02 21:34:33 crc kubenswrapper[4789]: I0202 21:34:33.087092 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlgcf\" (UniqueName: \"kubernetes.io/projected/bd1eed87-e9b2-4048-ba82-09d5e3efb6e3-kube-api-access-dlgcf\") pod \"bd1eed87-e9b2-4048-ba82-09d5e3efb6e3\" (UID: \"bd1eed87-e9b2-4048-ba82-09d5e3efb6e3\") "
Feb 02 21:34:33 crc kubenswrapper[4789]: I0202 21:34:33.087900 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd1eed87-e9b2-4048-ba82-09d5e3efb6e3-utilities\") pod \"bd1eed87-e9b2-4048-ba82-09d5e3efb6e3\" (UID: \"bd1eed87-e9b2-4048-ba82-09d5e3efb6e3\") "
Feb 02 21:34:33 crc kubenswrapper[4789]: I0202 21:34:33.087964 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd1eed87-e9b2-4048-ba82-09d5e3efb6e3-catalog-content\") pod \"bd1eed87-e9b2-4048-ba82-09d5e3efb6e3\" (UID: \"bd1eed87-e9b2-4048-ba82-09d5e3efb6e3\") "
Feb 02 21:34:33 crc kubenswrapper[4789]: I0202 21:34:33.089128 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd1eed87-e9b2-4048-ba82-09d5e3efb6e3-utilities" (OuterVolumeSpecName: "utilities") pod "bd1eed87-e9b2-4048-ba82-09d5e3efb6e3" (UID: "bd1eed87-e9b2-4048-ba82-09d5e3efb6e3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
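The "Killing container with a grace period" entry above (gracePeriod=2) means the runtime delivers SIGTERM, waits up to two seconds for the container to exit, then force-kills it. A rough process-level sketch of that escalation; this is illustrative only, since the real path is the CRI StopContainer RPC to CRI-O rather than os.Process:

package main

import (
	"os"
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace sends SIGTERM, waits up to grace for a clean exit,
// then escalates to SIGKILL, mirroring what gracePeriod=2 means in
// the log entry above.
func stopWithGrace(p *os.Process, grace time.Duration) error {
	if err := p.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() {
		_, err := p.Wait()
		done <- err
	}()
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(grace):
		return p.Kill() // grace period elapsed: SIGKILL
	}
}

func main() {
	cmd := exec.Command("sleep", "30")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	_ = stopWithGrace(cmd.Process, 2*time.Second)
}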
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:34:33 crc kubenswrapper[4789]: I0202 21:34:33.092843 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd1eed87-e9b2-4048-ba82-09d5e3efb6e3-kube-api-access-dlgcf" (OuterVolumeSpecName: "kube-api-access-dlgcf") pod "bd1eed87-e9b2-4048-ba82-09d5e3efb6e3" (UID: "bd1eed87-e9b2-4048-ba82-09d5e3efb6e3"). InnerVolumeSpecName "kube-api-access-dlgcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:34:33 crc kubenswrapper[4789]: I0202 21:34:33.189660 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd1eed87-e9b2-4048-ba82-09d5e3efb6e3-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 21:34:33 crc kubenswrapper[4789]: I0202 21:34:33.189712 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlgcf\" (UniqueName: \"kubernetes.io/projected/bd1eed87-e9b2-4048-ba82-09d5e3efb6e3-kube-api-access-dlgcf\") on node \"crc\" DevicePath \"\"" Feb 02 21:34:33 crc kubenswrapper[4789]: I0202 21:34:33.529398 4789 generic.go:334] "Generic (PLEG): container finished" podID="bd1eed87-e9b2-4048-ba82-09d5e3efb6e3" containerID="74eceb87562d5969a1c073580b2906b18894ea4cf841059047c433fe8138debc" exitCode=0 Feb 02 21:34:33 crc kubenswrapper[4789]: I0202 21:34:33.529456 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l67b8" event={"ID":"bd1eed87-e9b2-4048-ba82-09d5e3efb6e3","Type":"ContainerDied","Data":"74eceb87562d5969a1c073580b2906b18894ea4cf841059047c433fe8138debc"} Feb 02 21:34:33 crc kubenswrapper[4789]: I0202 21:34:33.529490 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l67b8" event={"ID":"bd1eed87-e9b2-4048-ba82-09d5e3efb6e3","Type":"ContainerDied","Data":"de546c4b29546e0a1891b91e1df491f657b0179960c58cbb8e17526fed55f1bc"} Feb 02 21:34:33 crc kubenswrapper[4789]: I0202 21:34:33.529512 4789 scope.go:117] "RemoveContainer" containerID="74eceb87562d5969a1c073580b2906b18894ea4cf841059047c433fe8138debc" Feb 02 21:34:33 crc kubenswrapper[4789]: I0202 21:34:33.529645 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l67b8" Feb 02 21:34:33 crc kubenswrapper[4789]: I0202 21:34:33.553041 4789 scope.go:117] "RemoveContainer" containerID="91744b2ba9d6d00c43d886ada7c797a95f49233443dc722adc750f5f32e79813" Feb 02 21:34:33 crc kubenswrapper[4789]: I0202 21:34:33.583291 4789 scope.go:117] "RemoveContainer" containerID="f3aa783ca4987f3c51ada50eb5a2f89de2c162173add88806b0ae1a6c300e482" Feb 02 21:34:33 crc kubenswrapper[4789]: I0202 21:34:33.609378 4789 scope.go:117] "RemoveContainer" containerID="74eceb87562d5969a1c073580b2906b18894ea4cf841059047c433fe8138debc" Feb 02 21:34:33 crc kubenswrapper[4789]: E0202 21:34:33.610045 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74eceb87562d5969a1c073580b2906b18894ea4cf841059047c433fe8138debc\": container with ID starting with 74eceb87562d5969a1c073580b2906b18894ea4cf841059047c433fe8138debc not found: ID does not exist" containerID="74eceb87562d5969a1c073580b2906b18894ea4cf841059047c433fe8138debc" Feb 02 21:34:33 crc kubenswrapper[4789]: I0202 21:34:33.610087 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74eceb87562d5969a1c073580b2906b18894ea4cf841059047c433fe8138debc"} err="failed to get container status \"74eceb87562d5969a1c073580b2906b18894ea4cf841059047c433fe8138debc\": rpc error: code = NotFound desc = could not find container \"74eceb87562d5969a1c073580b2906b18894ea4cf841059047c433fe8138debc\": container with ID starting with 74eceb87562d5969a1c073580b2906b18894ea4cf841059047c433fe8138debc not found: ID does not exist" Feb 02 21:34:33 crc kubenswrapper[4789]: I0202 21:34:33.610112 4789 scope.go:117] "RemoveContainer" containerID="91744b2ba9d6d00c43d886ada7c797a95f49233443dc722adc750f5f32e79813" Feb 02 21:34:33 crc kubenswrapper[4789]: E0202 21:34:33.610527 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91744b2ba9d6d00c43d886ada7c797a95f49233443dc722adc750f5f32e79813\": container with ID starting with 91744b2ba9d6d00c43d886ada7c797a95f49233443dc722adc750f5f32e79813 not found: ID does not exist" containerID="91744b2ba9d6d00c43d886ada7c797a95f49233443dc722adc750f5f32e79813" Feb 02 21:34:33 crc kubenswrapper[4789]: I0202 21:34:33.610549 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91744b2ba9d6d00c43d886ada7c797a95f49233443dc722adc750f5f32e79813"} err="failed to get container status \"91744b2ba9d6d00c43d886ada7c797a95f49233443dc722adc750f5f32e79813\": rpc error: code = NotFound desc = could not find container \"91744b2ba9d6d00c43d886ada7c797a95f49233443dc722adc750f5f32e79813\": container with ID starting with 91744b2ba9d6d00c43d886ada7c797a95f49233443dc722adc750f5f32e79813 not found: ID does not exist" Feb 02 21:34:33 crc kubenswrapper[4789]: I0202 21:34:33.610561 4789 scope.go:117] "RemoveContainer" containerID="f3aa783ca4987f3c51ada50eb5a2f89de2c162173add88806b0ae1a6c300e482" Feb 02 21:34:33 crc kubenswrapper[4789]: E0202 21:34:33.611190 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3aa783ca4987f3c51ada50eb5a2f89de2c162173add88806b0ae1a6c300e482\": container with ID starting with f3aa783ca4987f3c51ada50eb5a2f89de2c162173add88806b0ae1a6c300e482 not found: ID does not exist" containerID="f3aa783ca4987f3c51ada50eb5a2f89de2c162173add88806b0ae1a6c300e482" 
Feb 02 21:34:33 crc kubenswrapper[4789]: I0202 21:34:33.611242 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3aa783ca4987f3c51ada50eb5a2f89de2c162173add88806b0ae1a6c300e482"} err="failed to get container status \"f3aa783ca4987f3c51ada50eb5a2f89de2c162173add88806b0ae1a6c300e482\": rpc error: code = NotFound desc = could not find container \"f3aa783ca4987f3c51ada50eb5a2f89de2c162173add88806b0ae1a6c300e482\": container with ID starting with f3aa783ca4987f3c51ada50eb5a2f89de2c162173add88806b0ae1a6c300e482 not found: ID does not exist" Feb 02 21:34:34 crc kubenswrapper[4789]: I0202 21:34:34.028617 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd1eed87-e9b2-4048-ba82-09d5e3efb6e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bd1eed87-e9b2-4048-ba82-09d5e3efb6e3" (UID: "bd1eed87-e9b2-4048-ba82-09d5e3efb6e3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:34:34 crc kubenswrapper[4789]: I0202 21:34:34.103220 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd1eed87-e9b2-4048-ba82-09d5e3efb6e3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 21:34:34 crc kubenswrapper[4789]: I0202 21:34:34.170065 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l67b8"] Feb 02 21:34:34 crc kubenswrapper[4789]: I0202 21:34:34.177125 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l67b8"] Feb 02 21:34:34 crc kubenswrapper[4789]: I0202 21:34:34.427439 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd1eed87-e9b2-4048-ba82-09d5e3efb6e3" path="/var/lib/kubelet/pods/bd1eed87-e9b2-4048-ba82-09d5e3efb6e3/volumes" Feb 02 21:34:37 crc kubenswrapper[4789]: I0202 21:34:37.234687 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9dp6b"] Feb 02 21:34:37 crc kubenswrapper[4789]: E0202 21:34:37.235525 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd1eed87-e9b2-4048-ba82-09d5e3efb6e3" containerName="registry-server" Feb 02 21:34:37 crc kubenswrapper[4789]: I0202 21:34:37.235558 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd1eed87-e9b2-4048-ba82-09d5e3efb6e3" containerName="registry-server" Feb 02 21:34:37 crc kubenswrapper[4789]: E0202 21:34:37.235639 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd1eed87-e9b2-4048-ba82-09d5e3efb6e3" containerName="extract-content" Feb 02 21:34:37 crc kubenswrapper[4789]: I0202 21:34:37.235660 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd1eed87-e9b2-4048-ba82-09d5e3efb6e3" containerName="extract-content" Feb 02 21:34:37 crc kubenswrapper[4789]: E0202 21:34:37.235702 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd1eed87-e9b2-4048-ba82-09d5e3efb6e3" containerName="extract-utilities" Feb 02 21:34:37 crc kubenswrapper[4789]: I0202 21:34:37.235722 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd1eed87-e9b2-4048-ba82-09d5e3efb6e3" containerName="extract-utilities" Feb 02 21:34:37 crc kubenswrapper[4789]: I0202 21:34:37.235961 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd1eed87-e9b2-4048-ba82-09d5e3efb6e3" containerName="registry-server" Feb 02 21:34:37 crc kubenswrapper[4789]: I0202 21:34:37.237940 4789 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9dp6b" Feb 02 21:34:37 crc kubenswrapper[4789]: I0202 21:34:37.253276 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9dp6b"] Feb 02 21:34:37 crc kubenswrapper[4789]: I0202 21:34:37.350327 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xnlg\" (UniqueName: \"kubernetes.io/projected/376081c4-200e-4de3-84d0-01a7ceb32de1-kube-api-access-4xnlg\") pod \"certified-operators-9dp6b\" (UID: \"376081c4-200e-4de3-84d0-01a7ceb32de1\") " pod="openshift-marketplace/certified-operators-9dp6b" Feb 02 21:34:37 crc kubenswrapper[4789]: I0202 21:34:37.350429 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/376081c4-200e-4de3-84d0-01a7ceb32de1-catalog-content\") pod \"certified-operators-9dp6b\" (UID: \"376081c4-200e-4de3-84d0-01a7ceb32de1\") " pod="openshift-marketplace/certified-operators-9dp6b" Feb 02 21:34:37 crc kubenswrapper[4789]: I0202 21:34:37.350466 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/376081c4-200e-4de3-84d0-01a7ceb32de1-utilities\") pod \"certified-operators-9dp6b\" (UID: \"376081c4-200e-4de3-84d0-01a7ceb32de1\") " pod="openshift-marketplace/certified-operators-9dp6b" Feb 02 21:34:37 crc kubenswrapper[4789]: I0202 21:34:37.451640 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/376081c4-200e-4de3-84d0-01a7ceb32de1-catalog-content\") pod \"certified-operators-9dp6b\" (UID: \"376081c4-200e-4de3-84d0-01a7ceb32de1\") " pod="openshift-marketplace/certified-operators-9dp6b" Feb 02 21:34:37 crc kubenswrapper[4789]: I0202 21:34:37.451727 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/376081c4-200e-4de3-84d0-01a7ceb32de1-utilities\") pod \"certified-operators-9dp6b\" (UID: \"376081c4-200e-4de3-84d0-01a7ceb32de1\") " pod="openshift-marketplace/certified-operators-9dp6b" Feb 02 21:34:37 crc kubenswrapper[4789]: I0202 21:34:37.451862 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xnlg\" (UniqueName: \"kubernetes.io/projected/376081c4-200e-4de3-84d0-01a7ceb32de1-kube-api-access-4xnlg\") pod \"certified-operators-9dp6b\" (UID: \"376081c4-200e-4de3-84d0-01a7ceb32de1\") " pod="openshift-marketplace/certified-operators-9dp6b" Feb 02 21:34:37 crc kubenswrapper[4789]: I0202 21:34:37.452256 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/376081c4-200e-4de3-84d0-01a7ceb32de1-catalog-content\") pod \"certified-operators-9dp6b\" (UID: \"376081c4-200e-4de3-84d0-01a7ceb32de1\") " pod="openshift-marketplace/certified-operators-9dp6b" Feb 02 21:34:37 crc kubenswrapper[4789]: I0202 21:34:37.452269 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/376081c4-200e-4de3-84d0-01a7ceb32de1-utilities\") pod \"certified-operators-9dp6b\" (UID: \"376081c4-200e-4de3-84d0-01a7ceb32de1\") " pod="openshift-marketplace/certified-operators-9dp6b" Feb 02 21:34:37 crc kubenswrapper[4789]: I0202 21:34:37.476559 
4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xnlg\" (UniqueName: \"kubernetes.io/projected/376081c4-200e-4de3-84d0-01a7ceb32de1-kube-api-access-4xnlg\") pod \"certified-operators-9dp6b\" (UID: \"376081c4-200e-4de3-84d0-01a7ceb32de1\") " pod="openshift-marketplace/certified-operators-9dp6b" Feb 02 21:34:37 crc kubenswrapper[4789]: I0202 21:34:37.569026 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9dp6b" Feb 02 21:34:37 crc kubenswrapper[4789]: I0202 21:34:37.814644 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9dp6b"] Feb 02 21:34:38 crc kubenswrapper[4789]: I0202 21:34:38.562948 4789 generic.go:334] "Generic (PLEG): container finished" podID="376081c4-200e-4de3-84d0-01a7ceb32de1" containerID="e44b51d04cfbcff7a81d38628e39b5e75dcdbfe0824b0f0079b030d368f2c04d" exitCode=0 Feb 02 21:34:38 crc kubenswrapper[4789]: I0202 21:34:38.563009 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dp6b" event={"ID":"376081c4-200e-4de3-84d0-01a7ceb32de1","Type":"ContainerDied","Data":"e44b51d04cfbcff7a81d38628e39b5e75dcdbfe0824b0f0079b030d368f2c04d"} Feb 02 21:34:38 crc kubenswrapper[4789]: I0202 21:34:38.563049 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dp6b" event={"ID":"376081c4-200e-4de3-84d0-01a7ceb32de1","Type":"ContainerStarted","Data":"a06556fe17505be20d50b2cd3b5f7182882b7ae36d3eb3599614f72ebc8d310f"} Feb 02 21:34:40 crc kubenswrapper[4789]: I0202 21:34:40.577301 4789 generic.go:334] "Generic (PLEG): container finished" podID="376081c4-200e-4de3-84d0-01a7ceb32de1" containerID="53992be25412e42ed35606497cacf034c5200969cf447f8eb070c4f1d5d6de5f" exitCode=0 Feb 02 21:34:40 crc kubenswrapper[4789]: I0202 21:34:40.577396 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dp6b" event={"ID":"376081c4-200e-4de3-84d0-01a7ceb32de1","Type":"ContainerDied","Data":"53992be25412e42ed35606497cacf034c5200969cf447f8eb070c4f1d5d6de5f"} Feb 02 21:34:41 crc kubenswrapper[4789]: I0202 21:34:41.588724 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dp6b" event={"ID":"376081c4-200e-4de3-84d0-01a7ceb32de1","Type":"ContainerStarted","Data":"18a0e321e45c6870c9ab593e0a1f6a2b12b6680c5dd48b65224d589ff4b1e6a0"} Feb 02 21:34:41 crc kubenswrapper[4789]: I0202 21:34:41.619220 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9dp6b" podStartSLOduration=2.193270753 podStartE2EDuration="4.619196125s" podCreationTimestamp="2026-02-02 21:34:37 +0000 UTC" firstStartedPulling="2026-02-02 21:34:38.566335819 +0000 UTC m=+898.861360848" lastFinishedPulling="2026-02-02 21:34:40.992261191 +0000 UTC m=+901.287286220" observedRunningTime="2026-02-02 21:34:41.611480557 +0000 UTC m=+901.906505606" watchObservedRunningTime="2026-02-02 21:34:41.619196125 +0000 UTC m=+901.914221174" Feb 02 21:34:44 crc kubenswrapper[4789]: I0202 21:34:44.652490 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-65474b67b5-ccmdm" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.509609 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-4nlg4"] Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 
21:34:45.512987 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-4nlg4" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.515227 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-kffd2"] Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.515918 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kffd2" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.517071 4789 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-n5n85" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.517326 4789 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.517550 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.517718 4789 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.534527 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-kffd2"] Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.597997 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-6fxbj"] Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.599008 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-6fxbj" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.601392 4789 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-xfbsg" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.601411 4789 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.601804 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.601936 4789 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.616648 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-hnv82"] Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.617421 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-hnv82" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.623472 4789 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.624749 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/93640dbd-7a68-4fe1-8feb-8fa519bfc5d0-reloader\") pod \"frr-k8s-4nlg4\" (UID: \"93640dbd-7a68-4fe1-8feb-8fa519bfc5d0\") " pod="metallb-system/frr-k8s-4nlg4" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.624793 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z62ml\" (UniqueName: \"kubernetes.io/projected/93640dbd-7a68-4fe1-8feb-8fa519bfc5d0-kube-api-access-z62ml\") pod \"frr-k8s-4nlg4\" (UID: \"93640dbd-7a68-4fe1-8feb-8fa519bfc5d0\") " pod="metallb-system/frr-k8s-4nlg4" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.624815 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/93640dbd-7a68-4fe1-8feb-8fa519bfc5d0-frr-sockets\") pod \"frr-k8s-4nlg4\" (UID: \"93640dbd-7a68-4fe1-8feb-8fa519bfc5d0\") " pod="metallb-system/frr-k8s-4nlg4" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.624841 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/93640dbd-7a68-4fe1-8feb-8fa519bfc5d0-metrics-certs\") pod \"frr-k8s-4nlg4\" (UID: \"93640dbd-7a68-4fe1-8feb-8fa519bfc5d0\") " pod="metallb-system/frr-k8s-4nlg4" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.624875 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9brjp\" (UniqueName: \"kubernetes.io/projected/f8e3f662-8c3b-4324-af03-4e6135a4bbb3-kube-api-access-9brjp\") pod \"frr-k8s-webhook-server-7df86c4f6c-kffd2\" (UID: \"f8e3f662-8c3b-4324-af03-4e6135a4bbb3\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kffd2" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.624892 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f8e3f662-8c3b-4324-af03-4e6135a4bbb3-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-kffd2\" (UID: \"f8e3f662-8c3b-4324-af03-4e6135a4bbb3\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kffd2" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.624955 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/93640dbd-7a68-4fe1-8feb-8fa519bfc5d0-frr-conf\") pod \"frr-k8s-4nlg4\" (UID: \"93640dbd-7a68-4fe1-8feb-8fa519bfc5d0\") " pod="metallb-system/frr-k8s-4nlg4" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.624998 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/93640dbd-7a68-4fe1-8feb-8fa519bfc5d0-frr-startup\") pod \"frr-k8s-4nlg4\" (UID: \"93640dbd-7a68-4fe1-8feb-8fa519bfc5d0\") " pod="metallb-system/frr-k8s-4nlg4" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.625028 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" 
(UniqueName: \"kubernetes.io/empty-dir/93640dbd-7a68-4fe1-8feb-8fa519bfc5d0-metrics\") pod \"frr-k8s-4nlg4\" (UID: \"93640dbd-7a68-4fe1-8feb-8fa519bfc5d0\") " pod="metallb-system/frr-k8s-4nlg4" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.632103 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-hnv82"] Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.726243 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk8sq\" (UniqueName: \"kubernetes.io/projected/6a3c98cd-b4e3-4f60-a4f8-5068fc45c634-kube-api-access-gk8sq\") pod \"controller-6968d8fdc4-hnv82\" (UID: \"6a3c98cd-b4e3-4f60-a4f8-5068fc45c634\") " pod="metallb-system/controller-6968d8fdc4-hnv82" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.726317 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9brjp\" (UniqueName: \"kubernetes.io/projected/f8e3f662-8c3b-4324-af03-4e6135a4bbb3-kube-api-access-9brjp\") pod \"frr-k8s-webhook-server-7df86c4f6c-kffd2\" (UID: \"f8e3f662-8c3b-4324-af03-4e6135a4bbb3\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kffd2" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.726682 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f8e3f662-8c3b-4324-af03-4e6135a4bbb3-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-kffd2\" (UID: \"f8e3f662-8c3b-4324-af03-4e6135a4bbb3\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kffd2" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.726719 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ae39df38-39b3-4d32-a1d7-d521b31b2840-metallb-excludel2\") pod \"speaker-6fxbj\" (UID: \"ae39df38-39b3-4d32-a1d7-d521b31b2840\") " pod="metallb-system/speaker-6fxbj" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.726747 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae39df38-39b3-4d32-a1d7-d521b31b2840-metrics-certs\") pod \"speaker-6fxbj\" (UID: \"ae39df38-39b3-4d32-a1d7-d521b31b2840\") " pod="metallb-system/speaker-6fxbj" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.726776 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/93640dbd-7a68-4fe1-8feb-8fa519bfc5d0-frr-conf\") pod \"frr-k8s-4nlg4\" (UID: \"93640dbd-7a68-4fe1-8feb-8fa519bfc5d0\") " pod="metallb-system/frr-k8s-4nlg4" Feb 02 21:34:45 crc kubenswrapper[4789]: E0202 21:34:45.726801 4789 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.726815 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6a3c98cd-b4e3-4f60-a4f8-5068fc45c634-cert\") pod \"controller-6968d8fdc4-hnv82\" (UID: \"6a3c98cd-b4e3-4f60-a4f8-5068fc45c634\") " pod="metallb-system/controller-6968d8fdc4-hnv82" Feb 02 21:34:45 crc kubenswrapper[4789]: E0202 21:34:45.726876 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8e3f662-8c3b-4324-af03-4e6135a4bbb3-cert podName:f8e3f662-8c3b-4324-af03-4e6135a4bbb3 
nodeName:}" failed. No retries permitted until 2026-02-02 21:34:46.226859087 +0000 UTC m=+906.521884106 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f8e3f662-8c3b-4324-af03-4e6135a4bbb3-cert") pod "frr-k8s-webhook-server-7df86c4f6c-kffd2" (UID: "f8e3f662-8c3b-4324-af03-4e6135a4bbb3") : secret "frr-k8s-webhook-server-cert" not found Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.726970 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ae39df38-39b3-4d32-a1d7-d521b31b2840-memberlist\") pod \"speaker-6fxbj\" (UID: \"ae39df38-39b3-4d32-a1d7-d521b31b2840\") " pod="metallb-system/speaker-6fxbj" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.727031 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/93640dbd-7a68-4fe1-8feb-8fa519bfc5d0-frr-startup\") pod \"frr-k8s-4nlg4\" (UID: \"93640dbd-7a68-4fe1-8feb-8fa519bfc5d0\") " pod="metallb-system/frr-k8s-4nlg4" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.727062 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jrvb\" (UniqueName: \"kubernetes.io/projected/ae39df38-39b3-4d32-a1d7-d521b31b2840-kube-api-access-5jrvb\") pod \"speaker-6fxbj\" (UID: \"ae39df38-39b3-4d32-a1d7-d521b31b2840\") " pod="metallb-system/speaker-6fxbj" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.727116 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/93640dbd-7a68-4fe1-8feb-8fa519bfc5d0-metrics\") pod \"frr-k8s-4nlg4\" (UID: \"93640dbd-7a68-4fe1-8feb-8fa519bfc5d0\") " pod="metallb-system/frr-k8s-4nlg4" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.727143 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/93640dbd-7a68-4fe1-8feb-8fa519bfc5d0-reloader\") pod \"frr-k8s-4nlg4\" (UID: \"93640dbd-7a68-4fe1-8feb-8fa519bfc5d0\") " pod="metallb-system/frr-k8s-4nlg4" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.727168 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a3c98cd-b4e3-4f60-a4f8-5068fc45c634-metrics-certs\") pod \"controller-6968d8fdc4-hnv82\" (UID: \"6a3c98cd-b4e3-4f60-a4f8-5068fc45c634\") " pod="metallb-system/controller-6968d8fdc4-hnv82" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.727184 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/93640dbd-7a68-4fe1-8feb-8fa519bfc5d0-frr-conf\") pod \"frr-k8s-4nlg4\" (UID: \"93640dbd-7a68-4fe1-8feb-8fa519bfc5d0\") " pod="metallb-system/frr-k8s-4nlg4" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.727268 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z62ml\" (UniqueName: \"kubernetes.io/projected/93640dbd-7a68-4fe1-8feb-8fa519bfc5d0-kube-api-access-z62ml\") pod \"frr-k8s-4nlg4\" (UID: \"93640dbd-7a68-4fe1-8feb-8fa519bfc5d0\") " pod="metallb-system/frr-k8s-4nlg4" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.727302 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: 
\"kubernetes.io/empty-dir/93640dbd-7a68-4fe1-8feb-8fa519bfc5d0-frr-sockets\") pod \"frr-k8s-4nlg4\" (UID: \"93640dbd-7a68-4fe1-8feb-8fa519bfc5d0\") " pod="metallb-system/frr-k8s-4nlg4" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.727336 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/93640dbd-7a68-4fe1-8feb-8fa519bfc5d0-metrics-certs\") pod \"frr-k8s-4nlg4\" (UID: \"93640dbd-7a68-4fe1-8feb-8fa519bfc5d0\") " pod="metallb-system/frr-k8s-4nlg4" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.727444 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/93640dbd-7a68-4fe1-8feb-8fa519bfc5d0-metrics\") pod \"frr-k8s-4nlg4\" (UID: \"93640dbd-7a68-4fe1-8feb-8fa519bfc5d0\") " pod="metallb-system/frr-k8s-4nlg4" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.727658 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/93640dbd-7a68-4fe1-8feb-8fa519bfc5d0-reloader\") pod \"frr-k8s-4nlg4\" (UID: \"93640dbd-7a68-4fe1-8feb-8fa519bfc5d0\") " pod="metallb-system/frr-k8s-4nlg4" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.727770 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/93640dbd-7a68-4fe1-8feb-8fa519bfc5d0-frr-sockets\") pod \"frr-k8s-4nlg4\" (UID: \"93640dbd-7a68-4fe1-8feb-8fa519bfc5d0\") " pod="metallb-system/frr-k8s-4nlg4" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.729059 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/93640dbd-7a68-4fe1-8feb-8fa519bfc5d0-frr-startup\") pod \"frr-k8s-4nlg4\" (UID: \"93640dbd-7a68-4fe1-8feb-8fa519bfc5d0\") " pod="metallb-system/frr-k8s-4nlg4" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.733045 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/93640dbd-7a68-4fe1-8feb-8fa519bfc5d0-metrics-certs\") pod \"frr-k8s-4nlg4\" (UID: \"93640dbd-7a68-4fe1-8feb-8fa519bfc5d0\") " pod="metallb-system/frr-k8s-4nlg4" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.749267 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z62ml\" (UniqueName: \"kubernetes.io/projected/93640dbd-7a68-4fe1-8feb-8fa519bfc5d0-kube-api-access-z62ml\") pod \"frr-k8s-4nlg4\" (UID: \"93640dbd-7a68-4fe1-8feb-8fa519bfc5d0\") " pod="metallb-system/frr-k8s-4nlg4" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.754695 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9brjp\" (UniqueName: \"kubernetes.io/projected/f8e3f662-8c3b-4324-af03-4e6135a4bbb3-kube-api-access-9brjp\") pod \"frr-k8s-webhook-server-7df86c4f6c-kffd2\" (UID: \"f8e3f662-8c3b-4324-af03-4e6135a4bbb3\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kffd2" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.828669 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a3c98cd-b4e3-4f60-a4f8-5068fc45c634-metrics-certs\") pod \"controller-6968d8fdc4-hnv82\" (UID: \"6a3c98cd-b4e3-4f60-a4f8-5068fc45c634\") " pod="metallb-system/controller-6968d8fdc4-hnv82" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.828746 4789 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gk8sq\" (UniqueName: \"kubernetes.io/projected/6a3c98cd-b4e3-4f60-a4f8-5068fc45c634-kube-api-access-gk8sq\") pod \"controller-6968d8fdc4-hnv82\" (UID: \"6a3c98cd-b4e3-4f60-a4f8-5068fc45c634\") " pod="metallb-system/controller-6968d8fdc4-hnv82" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.828788 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ae39df38-39b3-4d32-a1d7-d521b31b2840-metallb-excludel2\") pod \"speaker-6fxbj\" (UID: \"ae39df38-39b3-4d32-a1d7-d521b31b2840\") " pod="metallb-system/speaker-6fxbj" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.828804 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae39df38-39b3-4d32-a1d7-d521b31b2840-metrics-certs\") pod \"speaker-6fxbj\" (UID: \"ae39df38-39b3-4d32-a1d7-d521b31b2840\") " pod="metallb-system/speaker-6fxbj" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.828831 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6a3c98cd-b4e3-4f60-a4f8-5068fc45c634-cert\") pod \"controller-6968d8fdc4-hnv82\" (UID: \"6a3c98cd-b4e3-4f60-a4f8-5068fc45c634\") " pod="metallb-system/controller-6968d8fdc4-hnv82" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.828850 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ae39df38-39b3-4d32-a1d7-d521b31b2840-memberlist\") pod \"speaker-6fxbj\" (UID: \"ae39df38-39b3-4d32-a1d7-d521b31b2840\") " pod="metallb-system/speaker-6fxbj" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.828866 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jrvb\" (UniqueName: \"kubernetes.io/projected/ae39df38-39b3-4d32-a1d7-d521b31b2840-kube-api-access-5jrvb\") pod \"speaker-6fxbj\" (UID: \"ae39df38-39b3-4d32-a1d7-d521b31b2840\") " pod="metallb-system/speaker-6fxbj" Feb 02 21:34:45 crc kubenswrapper[4789]: E0202 21:34:45.829456 4789 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 02 21:34:45 crc kubenswrapper[4789]: E0202 21:34:45.829543 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae39df38-39b3-4d32-a1d7-d521b31b2840-memberlist podName:ae39df38-39b3-4d32-a1d7-d521b31b2840 nodeName:}" failed. No retries permitted until 2026-02-02 21:34:46.329514305 +0000 UTC m=+906.624539344 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/ae39df38-39b3-4d32-a1d7-d521b31b2840-memberlist") pod "speaker-6fxbj" (UID: "ae39df38-39b3-4d32-a1d7-d521b31b2840") : secret "metallb-memberlist" not found Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.829851 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ae39df38-39b3-4d32-a1d7-d521b31b2840-metallb-excludel2\") pod \"speaker-6fxbj\" (UID: \"ae39df38-39b3-4d32-a1d7-d521b31b2840\") " pod="metallb-system/speaker-6fxbj" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.832235 4789 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.833040 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a3c98cd-b4e3-4f60-a4f8-5068fc45c634-metrics-certs\") pod \"controller-6968d8fdc4-hnv82\" (UID: \"6a3c98cd-b4e3-4f60-a4f8-5068fc45c634\") " pod="metallb-system/controller-6968d8fdc4-hnv82" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.833820 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae39df38-39b3-4d32-a1d7-d521b31b2840-metrics-certs\") pod \"speaker-6fxbj\" (UID: \"ae39df38-39b3-4d32-a1d7-d521b31b2840\") " pod="metallb-system/speaker-6fxbj" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.840212 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-4nlg4" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.842760 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6a3c98cd-b4e3-4f60-a4f8-5068fc45c634-cert\") pod \"controller-6968d8fdc4-hnv82\" (UID: \"6a3c98cd-b4e3-4f60-a4f8-5068fc45c634\") " pod="metallb-system/controller-6968d8fdc4-hnv82" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.847211 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk8sq\" (UniqueName: \"kubernetes.io/projected/6a3c98cd-b4e3-4f60-a4f8-5068fc45c634-kube-api-access-gk8sq\") pod \"controller-6968d8fdc4-hnv82\" (UID: \"6a3c98cd-b4e3-4f60-a4f8-5068fc45c634\") " pod="metallb-system/controller-6968d8fdc4-hnv82" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.851695 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jrvb\" (UniqueName: \"kubernetes.io/projected/ae39df38-39b3-4d32-a1d7-d521b31b2840-kube-api-access-5jrvb\") pod \"speaker-6fxbj\" (UID: \"ae39df38-39b3-4d32-a1d7-d521b31b2840\") " pod="metallb-system/speaker-6fxbj" Feb 02 21:34:45 crc kubenswrapper[4789]: I0202 21:34:45.933105 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-hnv82" Feb 02 21:34:46 crc kubenswrapper[4789]: I0202 21:34:46.174839 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-hnv82"] Feb 02 21:34:46 crc kubenswrapper[4789]: I0202 21:34:46.234957 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f8e3f662-8c3b-4324-af03-4e6135a4bbb3-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-kffd2\" (UID: \"f8e3f662-8c3b-4324-af03-4e6135a4bbb3\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kffd2" Feb 02 21:34:46 crc kubenswrapper[4789]: I0202 21:34:46.243496 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f8e3f662-8c3b-4324-af03-4e6135a4bbb3-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-kffd2\" (UID: \"f8e3f662-8c3b-4324-af03-4e6135a4bbb3\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kffd2" Feb 02 21:34:46 crc kubenswrapper[4789]: I0202 21:34:46.336547 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ae39df38-39b3-4d32-a1d7-d521b31b2840-memberlist\") pod \"speaker-6fxbj\" (UID: \"ae39df38-39b3-4d32-a1d7-d521b31b2840\") " pod="metallb-system/speaker-6fxbj" Feb 02 21:34:46 crc kubenswrapper[4789]: E0202 21:34:46.336745 4789 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 02 21:34:46 crc kubenswrapper[4789]: E0202 21:34:46.336817 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae39df38-39b3-4d32-a1d7-d521b31b2840-memberlist podName:ae39df38-39b3-4d32-a1d7-d521b31b2840 nodeName:}" failed. No retries permitted until 2026-02-02 21:34:47.33679398 +0000 UTC m=+907.631819039 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/ae39df38-39b3-4d32-a1d7-d521b31b2840-memberlist") pod "speaker-6fxbj" (UID: "ae39df38-39b3-4d32-a1d7-d521b31b2840") : secret "metallb-memberlist" not found Feb 02 21:34:46 crc kubenswrapper[4789]: I0202 21:34:46.450164 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kffd2" Feb 02 21:34:46 crc kubenswrapper[4789]: I0202 21:34:46.619918 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4nlg4" event={"ID":"93640dbd-7a68-4fe1-8feb-8fa519bfc5d0","Type":"ContainerStarted","Data":"a52467c4a59d619ada7569411a9309921c3bc09b2152be7ae1ac5f399bdfe37e"} Feb 02 21:34:46 crc kubenswrapper[4789]: I0202 21:34:46.621482 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-hnv82" event={"ID":"6a3c98cd-b4e3-4f60-a4f8-5068fc45c634","Type":"ContainerStarted","Data":"c091d8900959bda800b3d7d2183907a100949976abf7ffaf5ada5e01dbcb2b7b"} Feb 02 21:34:46 crc kubenswrapper[4789]: I0202 21:34:46.621517 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-hnv82" event={"ID":"6a3c98cd-b4e3-4f60-a4f8-5068fc45c634","Type":"ContainerStarted","Data":"6318b8d70386e4ce9cd955cf578d7e1499a14607a9bc511f18e6545977100a14"} Feb 02 21:34:46 crc kubenswrapper[4789]: I0202 21:34:46.621527 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-hnv82" event={"ID":"6a3c98cd-b4e3-4f60-a4f8-5068fc45c634","Type":"ContainerStarted","Data":"59e340e9122196eed1d6b44ac22c26a7e2bbc4ac60c669c41165a583475c4a7c"} Feb 02 21:34:46 crc kubenswrapper[4789]: I0202 21:34:46.622323 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-hnv82" Feb 02 21:34:46 crc kubenswrapper[4789]: I0202 21:34:46.649804 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-hnv82" podStartSLOduration=1.6497896079999999 podStartE2EDuration="1.649789608s" podCreationTimestamp="2026-02-02 21:34:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:34:46.646940698 +0000 UTC m=+906.941965717" watchObservedRunningTime="2026-02-02 21:34:46.649789608 +0000 UTC m=+906.944814627" Feb 02 21:34:46 crc kubenswrapper[4789]: I0202 21:34:46.901989 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-kffd2"] Feb 02 21:34:46 crc kubenswrapper[4789]: W0202 21:34:46.910330 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8e3f662_8c3b_4324_af03_4e6135a4bbb3.slice/crio-097328136daf9a72af804127eaf842863623b35406bfe24e13f22e04e4999bd2 WatchSource:0}: Error finding container 097328136daf9a72af804127eaf842863623b35406bfe24e13f22e04e4999bd2: Status 404 returned error can't find the container with id 097328136daf9a72af804127eaf842863623b35406bfe24e13f22e04e4999bd2 Feb 02 21:34:47 crc kubenswrapper[4789]: I0202 21:34:47.373186 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ae39df38-39b3-4d32-a1d7-d521b31b2840-memberlist\") pod \"speaker-6fxbj\" (UID: \"ae39df38-39b3-4d32-a1d7-d521b31b2840\") " pod="metallb-system/speaker-6fxbj" Feb 02 21:34:47 crc kubenswrapper[4789]: I0202 21:34:47.396199 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ae39df38-39b3-4d32-a1d7-d521b31b2840-memberlist\") pod \"speaker-6fxbj\" (UID: \"ae39df38-39b3-4d32-a1d7-d521b31b2840\") " pod="metallb-system/speaker-6fxbj" Feb 02 21:34:47 crc 
Feb 02 21:34:47 crc kubenswrapper[4789]: W0202 21:34:47.445759 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae39df38_39b3_4d32_a1d7_d521b31b2840.slice/crio-b0bffe6450ed23216d3780218882b9eecd9e7f787eb84c3bc73db49fab0d28e8 WatchSource:0}: Error finding container b0bffe6450ed23216d3780218882b9eecd9e7f787eb84c3bc73db49fab0d28e8: Status 404 returned error can't find the container with id b0bffe6450ed23216d3780218882b9eecd9e7f787eb84c3bc73db49fab0d28e8
Feb 02 21:34:47 crc kubenswrapper[4789]: I0202 21:34:47.570077 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9dp6b"
Feb 02 21:34:47 crc kubenswrapper[4789]: I0202 21:34:47.570142 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9dp6b"
Feb 02 21:34:47 crc kubenswrapper[4789]: I0202 21:34:47.633707 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9dp6b"
Feb 02 21:34:47 crc kubenswrapper[4789]: I0202 21:34:47.635004 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6fxbj" event={"ID":"ae39df38-39b3-4d32-a1d7-d521b31b2840","Type":"ContainerStarted","Data":"b0bffe6450ed23216d3780218882b9eecd9e7f787eb84c3bc73db49fab0d28e8"}
Feb 02 21:34:47 crc kubenswrapper[4789]: I0202 21:34:47.637339 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kffd2" event={"ID":"f8e3f662-8c3b-4324-af03-4e6135a4bbb3","Type":"ContainerStarted","Data":"097328136daf9a72af804127eaf842863623b35406bfe24e13f22e04e4999bd2"}
Feb 02 21:34:47 crc kubenswrapper[4789]: I0202 21:34:47.693602 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9dp6b"
Feb 02 21:34:47 crc kubenswrapper[4789]: I0202 21:34:47.866709 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9dp6b"]
Feb 02 21:34:48 crc kubenswrapper[4789]: I0202 21:34:48.647434 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6fxbj" event={"ID":"ae39df38-39b3-4d32-a1d7-d521b31b2840","Type":"ContainerStarted","Data":"04f8731579cc403ca0ed30f384bb5e758f0b7e70277213965ae957e534e8b613"}
Feb 02 21:34:48 crc kubenswrapper[4789]: I0202 21:34:48.648435 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6fxbj" event={"ID":"ae39df38-39b3-4d32-a1d7-d521b31b2840","Type":"ContainerStarted","Data":"2955c3c86a29ff912761d0f0cc41453671808b0aee27220ccbe836ae89a1c772"}
Feb 02 21:34:48 crc kubenswrapper[4789]: I0202 21:34:48.648900 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-6fxbj"
Feb 02 21:34:48 crc kubenswrapper[4789]: I0202 21:34:48.668739 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-6fxbj" podStartSLOduration=3.668720478 podStartE2EDuration="3.668720478s" podCreationTimestamp="2026-02-02 21:34:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:34:48.665736294 +0000 UTC m=+908.960761313" watchObservedRunningTime="2026-02-02 21:34:48.668720478 +0000 UTC m=+908.963745497"
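For certified-operators-9dp6b the probe entries above run in the usual order: the startup probe reports unhealthy, then started, and only then does the readiness probe flip from "" to ready, because readiness checks are held back until the startup probe passes. A sketch of a probe pair that would produce this sequence, using the core/v1 types; the exec command, period, and thresholds are illustrative, not read from this cluster:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// A startup probe gates the readiness probe: until it succeeds,
// readiness is not run at all (logged above as status=""). The
// values below are illustrative.
var grpcHealth = corev1.ProbeHandler{
	Exec: &corev1.ExecAction{Command: []string{"grpc_health_probe", "-addr=:50051"}},
}

var startupProbe = &corev1.Probe{
	ProbeHandler:     grpcHealth,
	PeriodSeconds:    5,  // re-check every 5s while starting
	FailureThreshold: 12, // allow up to ~60s of startup
}

var readinessProbe = &corev1.Probe{
	ProbeHandler:  grpcHealth,
	PeriodSeconds: 10,
}

func main() {
	fmt.Println(startupProbe.PeriodSeconds, readinessProbe.PeriodSeconds)
}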
21:34:48.668720478 +0000 UTC m=+908.963745497" Feb 02 21:34:49 crc kubenswrapper[4789]: I0202 21:34:49.651492 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9dp6b" podUID="376081c4-200e-4de3-84d0-01a7ceb32de1" containerName="registry-server" containerID="cri-o://18a0e321e45c6870c9ab593e0a1f6a2b12b6680c5dd48b65224d589ff4b1e6a0" gracePeriod=2 Feb 02 21:34:50 crc kubenswrapper[4789]: I0202 21:34:50.042899 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9dp6b" Feb 02 21:34:50 crc kubenswrapper[4789]: I0202 21:34:50.212776 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xnlg\" (UniqueName: \"kubernetes.io/projected/376081c4-200e-4de3-84d0-01a7ceb32de1-kube-api-access-4xnlg\") pod \"376081c4-200e-4de3-84d0-01a7ceb32de1\" (UID: \"376081c4-200e-4de3-84d0-01a7ceb32de1\") " Feb 02 21:34:50 crc kubenswrapper[4789]: I0202 21:34:50.212827 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/376081c4-200e-4de3-84d0-01a7ceb32de1-catalog-content\") pod \"376081c4-200e-4de3-84d0-01a7ceb32de1\" (UID: \"376081c4-200e-4de3-84d0-01a7ceb32de1\") " Feb 02 21:34:50 crc kubenswrapper[4789]: I0202 21:34:50.212946 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/376081c4-200e-4de3-84d0-01a7ceb32de1-utilities\") pod \"376081c4-200e-4de3-84d0-01a7ceb32de1\" (UID: \"376081c4-200e-4de3-84d0-01a7ceb32de1\") " Feb 02 21:34:50 crc kubenswrapper[4789]: I0202 21:34:50.213911 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/376081c4-200e-4de3-84d0-01a7ceb32de1-utilities" (OuterVolumeSpecName: "utilities") pod "376081c4-200e-4de3-84d0-01a7ceb32de1" (UID: "376081c4-200e-4de3-84d0-01a7ceb32de1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:34:50 crc kubenswrapper[4789]: I0202 21:34:50.218148 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/376081c4-200e-4de3-84d0-01a7ceb32de1-kube-api-access-4xnlg" (OuterVolumeSpecName: "kube-api-access-4xnlg") pod "376081c4-200e-4de3-84d0-01a7ceb32de1" (UID: "376081c4-200e-4de3-84d0-01a7ceb32de1"). InnerVolumeSpecName "kube-api-access-4xnlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:34:50 crc kubenswrapper[4789]: I0202 21:34:50.268463 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/376081c4-200e-4de3-84d0-01a7ceb32de1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "376081c4-200e-4de3-84d0-01a7ceb32de1" (UID: "376081c4-200e-4de3-84d0-01a7ceb32de1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:34:50 crc kubenswrapper[4789]: I0202 21:34:50.314231 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/376081c4-200e-4de3-84d0-01a7ceb32de1-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 21:34:50 crc kubenswrapper[4789]: I0202 21:34:50.314259 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xnlg\" (UniqueName: \"kubernetes.io/projected/376081c4-200e-4de3-84d0-01a7ceb32de1-kube-api-access-4xnlg\") on node \"crc\" DevicePath \"\"" Feb 02 21:34:50 crc kubenswrapper[4789]: I0202 21:34:50.314270 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/376081c4-200e-4de3-84d0-01a7ceb32de1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 21:34:50 crc kubenswrapper[4789]: I0202 21:34:50.669698 4789 generic.go:334] "Generic (PLEG): container finished" podID="376081c4-200e-4de3-84d0-01a7ceb32de1" containerID="18a0e321e45c6870c9ab593e0a1f6a2b12b6680c5dd48b65224d589ff4b1e6a0" exitCode=0 Feb 02 21:34:50 crc kubenswrapper[4789]: I0202 21:34:50.669751 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dp6b" event={"ID":"376081c4-200e-4de3-84d0-01a7ceb32de1","Type":"ContainerDied","Data":"18a0e321e45c6870c9ab593e0a1f6a2b12b6680c5dd48b65224d589ff4b1e6a0"} Feb 02 21:34:50 crc kubenswrapper[4789]: I0202 21:34:50.669811 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dp6b" event={"ID":"376081c4-200e-4de3-84d0-01a7ceb32de1","Type":"ContainerDied","Data":"a06556fe17505be20d50b2cd3b5f7182882b7ae36d3eb3599614f72ebc8d310f"} Feb 02 21:34:50 crc kubenswrapper[4789]: I0202 21:34:50.669831 4789 scope.go:117] "RemoveContainer" containerID="18a0e321e45c6870c9ab593e0a1f6a2b12b6680c5dd48b65224d589ff4b1e6a0" Feb 02 21:34:50 crc kubenswrapper[4789]: I0202 21:34:50.669833 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9dp6b" Feb 02 21:34:50 crc kubenswrapper[4789]: I0202 21:34:50.692615 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9dp6b"] Feb 02 21:34:50 crc kubenswrapper[4789]: I0202 21:34:50.692677 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9dp6b"] Feb 02 21:34:52 crc kubenswrapper[4789]: I0202 21:34:52.436095 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="376081c4-200e-4de3-84d0-01a7ceb32de1" path="/var/lib/kubelet/pods/376081c4-200e-4de3-84d0-01a7ceb32de1/volumes" Feb 02 21:34:52 crc kubenswrapper[4789]: I0202 21:34:52.671281 4789 scope.go:117] "RemoveContainer" containerID="53992be25412e42ed35606497cacf034c5200969cf447f8eb070c4f1d5d6de5f" Feb 02 21:34:52 crc kubenswrapper[4789]: I0202 21:34:52.741501 4789 scope.go:117] "RemoveContainer" containerID="e44b51d04cfbcff7a81d38628e39b5e75dcdbfe0824b0f0079b030d368f2c04d" Feb 02 21:34:52 crc kubenswrapper[4789]: I0202 21:34:52.773548 4789 scope.go:117] "RemoveContainer" containerID="18a0e321e45c6870c9ab593e0a1f6a2b12b6680c5dd48b65224d589ff4b1e6a0" Feb 02 21:34:52 crc kubenswrapper[4789]: E0202 21:34:52.773990 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18a0e321e45c6870c9ab593e0a1f6a2b12b6680c5dd48b65224d589ff4b1e6a0\": container with ID starting with 18a0e321e45c6870c9ab593e0a1f6a2b12b6680c5dd48b65224d589ff4b1e6a0 not found: ID does not exist" containerID="18a0e321e45c6870c9ab593e0a1f6a2b12b6680c5dd48b65224d589ff4b1e6a0" Feb 02 21:34:52 crc kubenswrapper[4789]: I0202 21:34:52.774036 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18a0e321e45c6870c9ab593e0a1f6a2b12b6680c5dd48b65224d589ff4b1e6a0"} err="failed to get container status \"18a0e321e45c6870c9ab593e0a1f6a2b12b6680c5dd48b65224d589ff4b1e6a0\": rpc error: code = NotFound desc = could not find container \"18a0e321e45c6870c9ab593e0a1f6a2b12b6680c5dd48b65224d589ff4b1e6a0\": container with ID starting with 18a0e321e45c6870c9ab593e0a1f6a2b12b6680c5dd48b65224d589ff4b1e6a0 not found: ID does not exist" Feb 02 21:34:52 crc kubenswrapper[4789]: I0202 21:34:52.774061 4789 scope.go:117] "RemoveContainer" containerID="53992be25412e42ed35606497cacf034c5200969cf447f8eb070c4f1d5d6de5f" Feb 02 21:34:52 crc kubenswrapper[4789]: E0202 21:34:52.774424 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53992be25412e42ed35606497cacf034c5200969cf447f8eb070c4f1d5d6de5f\": container with ID starting with 53992be25412e42ed35606497cacf034c5200969cf447f8eb070c4f1d5d6de5f not found: ID does not exist" containerID="53992be25412e42ed35606497cacf034c5200969cf447f8eb070c4f1d5d6de5f" Feb 02 21:34:52 crc kubenswrapper[4789]: I0202 21:34:52.774456 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53992be25412e42ed35606497cacf034c5200969cf447f8eb070c4f1d5d6de5f"} err="failed to get container status \"53992be25412e42ed35606497cacf034c5200969cf447f8eb070c4f1d5d6de5f\": rpc error: code = NotFound desc = could not find container \"53992be25412e42ed35606497cacf034c5200969cf447f8eb070c4f1d5d6de5f\": container with ID starting with 53992be25412e42ed35606497cacf034c5200969cf447f8eb070c4f1d5d6de5f not found: ID does not exist" Feb 02 21:34:52 crc kubenswrapper[4789]: I0202 
21:34:52.774476 4789 scope.go:117] "RemoveContainer" containerID="e44b51d04cfbcff7a81d38628e39b5e75dcdbfe0824b0f0079b030d368f2c04d" Feb 02 21:34:52 crc kubenswrapper[4789]: E0202 21:34:52.775450 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e44b51d04cfbcff7a81d38628e39b5e75dcdbfe0824b0f0079b030d368f2c04d\": container with ID starting with e44b51d04cfbcff7a81d38628e39b5e75dcdbfe0824b0f0079b030d368f2c04d not found: ID does not exist" containerID="e44b51d04cfbcff7a81d38628e39b5e75dcdbfe0824b0f0079b030d368f2c04d" Feb 02 21:34:52 crc kubenswrapper[4789]: I0202 21:34:52.775478 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e44b51d04cfbcff7a81d38628e39b5e75dcdbfe0824b0f0079b030d368f2c04d"} err="failed to get container status \"e44b51d04cfbcff7a81d38628e39b5e75dcdbfe0824b0f0079b030d368f2c04d\": rpc error: code = NotFound desc = could not find container \"e44b51d04cfbcff7a81d38628e39b5e75dcdbfe0824b0f0079b030d368f2c04d\": container with ID starting with e44b51d04cfbcff7a81d38628e39b5e75dcdbfe0824b0f0079b030d368f2c04d not found: ID does not exist" Feb 02 21:34:53 crc kubenswrapper[4789]: I0202 21:34:53.696987 4789 generic.go:334] "Generic (PLEG): container finished" podID="93640dbd-7a68-4fe1-8feb-8fa519bfc5d0" containerID="3323f76178bcf215957bfa32ad7c42af798233121a4a7a6cea100e9e7dc07fff" exitCode=0 Feb 02 21:34:53 crc kubenswrapper[4789]: I0202 21:34:53.697070 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4nlg4" event={"ID":"93640dbd-7a68-4fe1-8feb-8fa519bfc5d0","Type":"ContainerDied","Data":"3323f76178bcf215957bfa32ad7c42af798233121a4a7a6cea100e9e7dc07fff"} Feb 02 21:34:53 crc kubenswrapper[4789]: I0202 21:34:53.700721 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kffd2" event={"ID":"f8e3f662-8c3b-4324-af03-4e6135a4bbb3","Type":"ContainerStarted","Data":"07b93d376c790f85ca1ac926ce06ab569ce0603ff7790bd63ae34316f5a747d5"} Feb 02 21:34:53 crc kubenswrapper[4789]: I0202 21:34:53.700934 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kffd2" Feb 02 21:34:53 crc kubenswrapper[4789]: I0202 21:34:53.763330 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kffd2" podStartSLOduration=2.8754323729999998 podStartE2EDuration="8.763309531s" podCreationTimestamp="2026-02-02 21:34:45 +0000 UTC" firstStartedPulling="2026-02-02 21:34:46.912426105 +0000 UTC m=+907.207451134" lastFinishedPulling="2026-02-02 21:34:52.800303273 +0000 UTC m=+913.095328292" observedRunningTime="2026-02-02 21:34:53.761150679 +0000 UTC m=+914.056175708" watchObservedRunningTime="2026-02-02 21:34:53.763309531 +0000 UTC m=+914.058334560" Feb 02 21:34:54 crc kubenswrapper[4789]: I0202 21:34:54.709918 4789 generic.go:334] "Generic (PLEG): container finished" podID="93640dbd-7a68-4fe1-8feb-8fa519bfc5d0" containerID="0ddf7d4b66554897af1d44425023453c086b6aae88af644426c178d71bee2a95" exitCode=0 Feb 02 21:34:54 crc kubenswrapper[4789]: I0202 21:34:54.709988 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4nlg4" event={"ID":"93640dbd-7a68-4fe1-8feb-8fa519bfc5d0","Type":"ContainerDied","Data":"0ddf7d4b66554897af1d44425023453c086b6aae88af644426c178d71bee2a95"} Feb 02 21:34:55 crc kubenswrapper[4789]: I0202 21:34:55.721390 4789 
generic.go:334] "Generic (PLEG): container finished" podID="93640dbd-7a68-4fe1-8feb-8fa519bfc5d0" containerID="9e64deaf85bccdfc8ca331f809cb99628f66b6c88a560d0408969522a517ab94" exitCode=0 Feb 02 21:34:55 crc kubenswrapper[4789]: I0202 21:34:55.721473 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4nlg4" event={"ID":"93640dbd-7a68-4fe1-8feb-8fa519bfc5d0","Type":"ContainerDied","Data":"9e64deaf85bccdfc8ca331f809cb99628f66b6c88a560d0408969522a517ab94"} Feb 02 21:34:56 crc kubenswrapper[4789]: I0202 21:34:56.735571 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4nlg4" event={"ID":"93640dbd-7a68-4fe1-8feb-8fa519bfc5d0","Type":"ContainerStarted","Data":"f738137629b99644a6876ef854625f370a9ea8317e9539fa9ac76cba2378ff2d"} Feb 02 21:34:56 crc kubenswrapper[4789]: I0202 21:34:56.735932 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4nlg4" event={"ID":"93640dbd-7a68-4fe1-8feb-8fa519bfc5d0","Type":"ContainerStarted","Data":"fe2e5e9d9f253fecfe3e6e51a1e066fc06febe983591c6b400bd91e8ea71d9df"} Feb 02 21:34:56 crc kubenswrapper[4789]: I0202 21:34:56.735943 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4nlg4" event={"ID":"93640dbd-7a68-4fe1-8feb-8fa519bfc5d0","Type":"ContainerStarted","Data":"9e7eed15d11654b0011f7c5ff695ea3646e3e38edbf295d3424e36675aef3958"} Feb 02 21:34:56 crc kubenswrapper[4789]: I0202 21:34:56.735951 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4nlg4" event={"ID":"93640dbd-7a68-4fe1-8feb-8fa519bfc5d0","Type":"ContainerStarted","Data":"a41a08766c5ed3d2588a8b2d10dca8cb7afa5b667c49393265da023037a43fc4"} Feb 02 21:34:57 crc kubenswrapper[4789]: I0202 21:34:57.417742 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-6fxbj" Feb 02 21:34:57 crc kubenswrapper[4789]: I0202 21:34:57.751557 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4nlg4" event={"ID":"93640dbd-7a68-4fe1-8feb-8fa519bfc5d0","Type":"ContainerStarted","Data":"d452a82314e71f55d7c46edfab6bbafd011310d0a0016f63b6b74b4c3c60fc6a"} Feb 02 21:34:57 crc kubenswrapper[4789]: I0202 21:34:57.751997 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4nlg4" event={"ID":"93640dbd-7a68-4fe1-8feb-8fa519bfc5d0","Type":"ContainerStarted","Data":"91e5f92badecd05b44f1c167c62cdd81d4bdf138b8351fff865fe21f1edcc67c"} Feb 02 21:34:57 crc kubenswrapper[4789]: I0202 21:34:57.752038 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-4nlg4" Feb 02 21:34:57 crc kubenswrapper[4789]: I0202 21:34:57.784211 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-4nlg4" podStartSLOduration=5.977138132 podStartE2EDuration="12.784188352s" podCreationTimestamp="2026-02-02 21:34:45 +0000 UTC" firstStartedPulling="2026-02-02 21:34:45.967717348 +0000 UTC m=+906.262742367" lastFinishedPulling="2026-02-02 21:34:52.774767578 +0000 UTC m=+913.069792587" observedRunningTime="2026-02-02 21:34:57.784004277 +0000 UTC m=+918.079029316" watchObservedRunningTime="2026-02-02 21:34:57.784188352 +0000 UTC m=+918.079213391" Feb 02 21:34:58 crc kubenswrapper[4789]: I0202 21:34:58.991659 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zfwg5"] Feb 02 21:34:58 crc kubenswrapper[4789]: E0202 21:34:58.991889 4789 
Feb 02 21:34:58 crc kubenswrapper[4789]: I0202 21:34:58.991901 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="376081c4-200e-4de3-84d0-01a7ceb32de1" containerName="extract-content"
Feb 02 21:34:58 crc kubenswrapper[4789]: E0202 21:34:58.991918 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="376081c4-200e-4de3-84d0-01a7ceb32de1" containerName="extract-utilities"
Feb 02 21:34:58 crc kubenswrapper[4789]: I0202 21:34:58.991924 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="376081c4-200e-4de3-84d0-01a7ceb32de1" containerName="extract-utilities"
Feb 02 21:34:58 crc kubenswrapper[4789]: E0202 21:34:58.991936 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="376081c4-200e-4de3-84d0-01a7ceb32de1" containerName="registry-server"
Feb 02 21:34:58 crc kubenswrapper[4789]: I0202 21:34:58.991942 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="376081c4-200e-4de3-84d0-01a7ceb32de1" containerName="registry-server"
Feb 02 21:34:58 crc kubenswrapper[4789]: I0202 21:34:58.992041 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="376081c4-200e-4de3-84d0-01a7ceb32de1" containerName="registry-server"
Feb 02 21:34:58 crc kubenswrapper[4789]: I0202 21:34:58.992744 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zfwg5"
Feb 02 21:34:58 crc kubenswrapper[4789]: I0202 21:34:58.997253 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 02 21:34:59 crc kubenswrapper[4789]: I0202 21:34:59.002453 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zfwg5"]
Feb 02 21:34:59 crc kubenswrapper[4789]: I0202 21:34:59.134945 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq2hm\" (UniqueName: \"kubernetes.io/projected/82257589-0f42-4d43-8843-41285225ccf0-kube-api-access-qq2hm\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zfwg5\" (UID: \"82257589-0f42-4d43-8843-41285225ccf0\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zfwg5"
Feb 02 21:34:59 crc kubenswrapper[4789]: I0202 21:34:59.135032 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/82257589-0f42-4d43-8843-41285225ccf0-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zfwg5\" (UID: \"82257589-0f42-4d43-8843-41285225ccf0\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zfwg5"
Feb 02 21:34:59 crc kubenswrapper[4789]: I0202 21:34:59.135213 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/82257589-0f42-4d43-8843-41285225ccf0-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zfwg5\" (UID: \"82257589-0f42-4d43-8843-41285225ccf0\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zfwg5"
Feb 02 21:34:59 crc kubenswrapper[4789]: I0202 21:34:59.236900 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq2hm\" (UniqueName: \"kubernetes.io/projected/82257589-0f42-4d43-8843-41285225ccf0-kube-api-access-qq2hm\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zfwg5\" (UID: \"82257589-0f42-4d43-8843-41285225ccf0\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zfwg5"
Feb 02 21:34:59 crc kubenswrapper[4789]: I0202 21:34:59.236980 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/82257589-0f42-4d43-8843-41285225ccf0-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zfwg5\" (UID: \"82257589-0f42-4d43-8843-41285225ccf0\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zfwg5"
Feb 02 21:34:59 crc kubenswrapper[4789]: I0202 21:34:59.237043 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/82257589-0f42-4d43-8843-41285225ccf0-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zfwg5\" (UID: \"82257589-0f42-4d43-8843-41285225ccf0\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zfwg5"
Feb 02 21:34:59 crc kubenswrapper[4789]: I0202 21:34:59.237538 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/82257589-0f42-4d43-8843-41285225ccf0-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zfwg5\" (UID: \"82257589-0f42-4d43-8843-41285225ccf0\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zfwg5"
Feb 02 21:34:59 crc kubenswrapper[4789]: I0202 21:34:59.237749 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/82257589-0f42-4d43-8843-41285225ccf0-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zfwg5\" (UID: \"82257589-0f42-4d43-8843-41285225ccf0\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zfwg5"
Feb 02 21:34:59 crc kubenswrapper[4789]: I0202 21:34:59.257339 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq2hm\" (UniqueName: \"kubernetes.io/projected/82257589-0f42-4d43-8843-41285225ccf0-kube-api-access-qq2hm\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zfwg5\" (UID: \"82257589-0f42-4d43-8843-41285225ccf0\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zfwg5"
Feb 02 21:34:59 crc kubenswrapper[4789]: I0202 21:34:59.309130 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zfwg5"
Feb 02 21:34:59 crc kubenswrapper[4789]: I0202 21:34:59.587905 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zfwg5"]
Feb 02 21:34:59 crc kubenswrapper[4789]: W0202 21:34:59.593945 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82257589_0f42_4d43_8843_41285225ccf0.slice/crio-54a91a39f47811ec903d176fd457b99b4bc5365cd58080a9915156b4d4cc6bec WatchSource:0}: Error finding container 54a91a39f47811ec903d176fd457b99b4bc5365cd58080a9915156b4d4cc6bec: Status 404 returned error can't find the container with id 54a91a39f47811ec903d176fd457b99b4bc5365cd58080a9915156b4d4cc6bec
Feb 02 21:34:59 crc kubenswrapper[4789]: I0202 21:34:59.767491 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zfwg5" event={"ID":"82257589-0f42-4d43-8843-41285225ccf0","Type":"ContainerStarted","Data":"54a91a39f47811ec903d176fd457b99b4bc5365cd58080a9915156b4d4cc6bec"}
Feb 02 21:35:00 crc kubenswrapper[4789]: I0202 21:35:00.777101 4789 generic.go:334] "Generic (PLEG): container finished" podID="82257589-0f42-4d43-8843-41285225ccf0" containerID="4d1b2b2d6345731ce9e5b92ea025d9e6b6520367ce60b70b38dbd5f6f32867d7" exitCode=0
Feb 02 21:35:00 crc kubenswrapper[4789]: I0202 21:35:00.777147 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zfwg5" event={"ID":"82257589-0f42-4d43-8843-41285225ccf0","Type":"ContainerDied","Data":"4d1b2b2d6345731ce9e5b92ea025d9e6b6520367ce60b70b38dbd5f6f32867d7"}
Feb 02 21:35:00 crc kubenswrapper[4789]: I0202 21:35:00.841898 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-4nlg4"
Feb 02 21:35:00 crc kubenswrapper[4789]: I0202 21:35:00.923392 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-4nlg4"
Feb 02 21:35:04 crc kubenswrapper[4789]: I0202 21:35:04.804345 4789 generic.go:334] "Generic (PLEG): container finished" podID="82257589-0f42-4d43-8843-41285225ccf0" containerID="c0132214e186d2be2396301edbeb5f47daa94be9ce4e466240d594f5957c2f7e" exitCode=0
Feb 02 21:35:04 crc kubenswrapper[4789]: I0202 21:35:04.804390 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zfwg5" event={"ID":"82257589-0f42-4d43-8843-41285225ccf0","Type":"ContainerDied","Data":"c0132214e186d2be2396301edbeb5f47daa94be9ce4e466240d594f5957c2f7e"}
Feb 02 21:35:05 crc kubenswrapper[4789]: I0202 21:35:05.815332 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zfwg5" event={"ID":"82257589-0f42-4d43-8843-41285225ccf0","Type":"ContainerStarted","Data":"c45696c9c3aca776f417a4981cfc90eb6f9a37e4377a3b471df964ac16eaa3ee"}
Feb 02 21:35:05 crc kubenswrapper[4789]: I0202 21:35:05.838480 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zfwg5" podStartSLOduration=4.356326285 podStartE2EDuration="7.838462478s" podCreationTimestamp="2026-02-02 21:34:58 +0000 UTC" firstStartedPulling="2026-02-02 21:35:00.779105319 +0000 UTC m=+921.074130338" lastFinishedPulling="2026-02-02 21:35:04.261241502 +0000 UTC m=+924.556266531" observedRunningTime="2026-02-02 21:35:05.834783504 +0000 UTC m=+926.129808523" watchObservedRunningTime="2026-02-02 21:35:05.838462478 +0000 UTC m=+926.133487497"
Feb 02 21:35:05 crc kubenswrapper[4789]: I0202 21:35:05.845065 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-4nlg4"
Feb 02 21:35:05 crc kubenswrapper[4789]: I0202 21:35:05.937415 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-hnv82"
Feb 02 21:35:06 crc kubenswrapper[4789]: I0202 21:35:06.457410 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kffd2"
Feb 02 21:35:06 crc kubenswrapper[4789]: I0202 21:35:06.822828 4789 generic.go:334] "Generic (PLEG): container finished" podID="82257589-0f42-4d43-8843-41285225ccf0" containerID="c45696c9c3aca776f417a4981cfc90eb6f9a37e4377a3b471df964ac16eaa3ee" exitCode=0
Feb 02 21:35:06 crc kubenswrapper[4789]: I0202 21:35:06.822917 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zfwg5" event={"ID":"82257589-0f42-4d43-8843-41285225ccf0","Type":"ContainerDied","Data":"c45696c9c3aca776f417a4981cfc90eb6f9a37e4377a3b471df964ac16eaa3ee"}
Feb 02 21:35:08 crc kubenswrapper[4789]: I0202 21:35:08.204052 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zfwg5"
Feb 02 21:35:08 crc kubenswrapper[4789]: I0202 21:35:08.368160 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/82257589-0f42-4d43-8843-41285225ccf0-util\") pod \"82257589-0f42-4d43-8843-41285225ccf0\" (UID: \"82257589-0f42-4d43-8843-41285225ccf0\") "
Feb 02 21:35:08 crc kubenswrapper[4789]: I0202 21:35:08.368223 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qq2hm\" (UniqueName: \"kubernetes.io/projected/82257589-0f42-4d43-8843-41285225ccf0-kube-api-access-qq2hm\") pod \"82257589-0f42-4d43-8843-41285225ccf0\" (UID: \"82257589-0f42-4d43-8843-41285225ccf0\") "
Feb 02 21:35:08 crc kubenswrapper[4789]: I0202 21:35:08.368257 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/82257589-0f42-4d43-8843-41285225ccf0-bundle\") pod \"82257589-0f42-4d43-8843-41285225ccf0\" (UID: \"82257589-0f42-4d43-8843-41285225ccf0\") "
Feb 02 21:35:08 crc kubenswrapper[4789]: I0202 21:35:08.369427 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82257589-0f42-4d43-8843-41285225ccf0-bundle" (OuterVolumeSpecName: "bundle") pod "82257589-0f42-4d43-8843-41285225ccf0" (UID: "82257589-0f42-4d43-8843-41285225ccf0"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 21:35:08 crc kubenswrapper[4789]: I0202 21:35:08.375568 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82257589-0f42-4d43-8843-41285225ccf0-kube-api-access-qq2hm" (OuterVolumeSpecName: "kube-api-access-qq2hm") pod "82257589-0f42-4d43-8843-41285225ccf0" (UID: "82257589-0f42-4d43-8843-41285225ccf0"). InnerVolumeSpecName "kube-api-access-qq2hm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 21:35:08 crc kubenswrapper[4789]: I0202 21:35:08.385511 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82257589-0f42-4d43-8843-41285225ccf0-util" (OuterVolumeSpecName: "util") pod "82257589-0f42-4d43-8843-41285225ccf0" (UID: "82257589-0f42-4d43-8843-41285225ccf0"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 21:35:08 crc kubenswrapper[4789]: I0202 21:35:08.470032 4789 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/82257589-0f42-4d43-8843-41285225ccf0-util\") on node \"crc\" DevicePath \"\""
Feb 02 21:35:08 crc kubenswrapper[4789]: I0202 21:35:08.470068 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qq2hm\" (UniqueName: \"kubernetes.io/projected/82257589-0f42-4d43-8843-41285225ccf0-kube-api-access-qq2hm\") on node \"crc\" DevicePath \"\""
Feb 02 21:35:08 crc kubenswrapper[4789]: I0202 21:35:08.470082 4789 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/82257589-0f42-4d43-8843-41285225ccf0-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 21:35:08 crc kubenswrapper[4789]: I0202 21:35:08.841332 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zfwg5" event={"ID":"82257589-0f42-4d43-8843-41285225ccf0","Type":"ContainerDied","Data":"54a91a39f47811ec903d176fd457b99b4bc5365cd58080a9915156b4d4cc6bec"}
Feb 02 21:35:08 crc kubenswrapper[4789]: I0202 21:35:08.841406 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54a91a39f47811ec903d176fd457b99b4bc5365cd58080a9915156b4d4cc6bec"
Feb 02 21:35:08 crc kubenswrapper[4789]: I0202 21:35:08.841512 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zfwg5"
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zfwg5" Feb 02 21:35:12 crc kubenswrapper[4789]: I0202 21:35:12.546727 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9cxr9"] Feb 02 21:35:12 crc kubenswrapper[4789]: E0202 21:35:12.547195 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82257589-0f42-4d43-8843-41285225ccf0" containerName="util" Feb 02 21:35:12 crc kubenswrapper[4789]: I0202 21:35:12.547208 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="82257589-0f42-4d43-8843-41285225ccf0" containerName="util" Feb 02 21:35:12 crc kubenswrapper[4789]: E0202 21:35:12.547223 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82257589-0f42-4d43-8843-41285225ccf0" containerName="extract" Feb 02 21:35:12 crc kubenswrapper[4789]: I0202 21:35:12.547231 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="82257589-0f42-4d43-8843-41285225ccf0" containerName="extract" Feb 02 21:35:12 crc kubenswrapper[4789]: E0202 21:35:12.547255 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82257589-0f42-4d43-8843-41285225ccf0" containerName="pull" Feb 02 21:35:12 crc kubenswrapper[4789]: I0202 21:35:12.547261 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="82257589-0f42-4d43-8843-41285225ccf0" containerName="pull" Feb 02 21:35:12 crc kubenswrapper[4789]: I0202 21:35:12.547373 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="82257589-0f42-4d43-8843-41285225ccf0" containerName="extract" Feb 02 21:35:12 crc kubenswrapper[4789]: I0202 21:35:12.547879 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9cxr9" Feb 02 21:35:12 crc kubenswrapper[4789]: I0202 21:35:12.549854 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Feb 02 21:35:12 crc kubenswrapper[4789]: I0202 21:35:12.550437 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Feb 02 21:35:12 crc kubenswrapper[4789]: I0202 21:35:12.551277 4789 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-tvvgx" Feb 02 21:35:12 crc kubenswrapper[4789]: I0202 21:35:12.563077 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9cxr9"] Feb 02 21:35:12 crc kubenswrapper[4789]: I0202 21:35:12.655468 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/120e0c7a-1eae-491c-aca2-bf094af6c306-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-9cxr9\" (UID: \"120e0c7a-1eae-491c-aca2-bf094af6c306\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9cxr9" Feb 02 21:35:12 crc kubenswrapper[4789]: I0202 21:35:12.655845 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwbxk\" (UniqueName: \"kubernetes.io/projected/120e0c7a-1eae-491c-aca2-bf094af6c306-kube-api-access-xwbxk\") pod \"cert-manager-operator-controller-manager-66c8bdd694-9cxr9\" (UID: \"120e0c7a-1eae-491c-aca2-bf094af6c306\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9cxr9" Feb 02 21:35:12 crc kubenswrapper[4789]: I0202 21:35:12.757343 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwbxk\" (UniqueName: \"kubernetes.io/projected/120e0c7a-1eae-491c-aca2-bf094af6c306-kube-api-access-xwbxk\") pod \"cert-manager-operator-controller-manager-66c8bdd694-9cxr9\" (UID: \"120e0c7a-1eae-491c-aca2-bf094af6c306\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9cxr9" Feb 02 21:35:12 crc kubenswrapper[4789]: I0202 21:35:12.757408 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/120e0c7a-1eae-491c-aca2-bf094af6c306-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-9cxr9\" (UID: \"120e0c7a-1eae-491c-aca2-bf094af6c306\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9cxr9" Feb 02 21:35:12 crc kubenswrapper[4789]: I0202 21:35:12.758004 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/120e0c7a-1eae-491c-aca2-bf094af6c306-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-9cxr9\" (UID: \"120e0c7a-1eae-491c-aca2-bf094af6c306\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9cxr9" Feb 02 21:35:12 crc kubenswrapper[4789]: I0202 21:35:12.779816 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwbxk\" (UniqueName: \"kubernetes.io/projected/120e0c7a-1eae-491c-aca2-bf094af6c306-kube-api-access-xwbxk\") pod \"cert-manager-operator-controller-manager-66c8bdd694-9cxr9\" (UID: \"120e0c7a-1eae-491c-aca2-bf094af6c306\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9cxr9" Feb 02 21:35:12 crc kubenswrapper[4789]: I0202 21:35:12.868714 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9cxr9" Feb 02 21:35:13 crc kubenswrapper[4789]: I0202 21:35:13.312349 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9cxr9"] Feb 02 21:35:13 crc kubenswrapper[4789]: W0202 21:35:13.322768 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod120e0c7a_1eae_491c_aca2_bf094af6c306.slice/crio-24a7514773f2ce15fd8b44a73d4c8c5322189c28ced66afe77fd06b003a6154b WatchSource:0}: Error finding container 24a7514773f2ce15fd8b44a73d4c8c5322189c28ced66afe77fd06b003a6154b: Status 404 returned error can't find the container with id 24a7514773f2ce15fd8b44a73d4c8c5322189c28ced66afe77fd06b003a6154b Feb 02 21:35:13 crc kubenswrapper[4789]: I0202 21:35:13.874189 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9cxr9" event={"ID":"120e0c7a-1eae-491c-aca2-bf094af6c306","Type":"ContainerStarted","Data":"24a7514773f2ce15fd8b44a73d4c8c5322189c28ced66afe77fd06b003a6154b"} Feb 02 21:35:17 crc kubenswrapper[4789]: I0202 21:35:17.906982 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9cxr9" event={"ID":"120e0c7a-1eae-491c-aca2-bf094af6c306","Type":"ContainerStarted","Data":"b1db3efa78c735c2b279ea5944214c6d220f5f480ab0b072594d4cdebdf55ec1"} Feb 02 21:35:17 crc kubenswrapper[4789]: I0202 21:35:17.926134 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9cxr9" podStartSLOduration=2.538684606 podStartE2EDuration="5.926121351s" podCreationTimestamp="2026-02-02 21:35:12 +0000 UTC" firstStartedPulling="2026-02-02 21:35:13.324553682 +0000 UTC m=+933.619578701" lastFinishedPulling="2026-02-02 21:35:16.711990427 +0000 UTC m=+937.007015446" observedRunningTime="2026-02-02 21:35:17.923358083 +0000 UTC m=+938.218383102" watchObservedRunningTime="2026-02-02 21:35:17.926121351 +0000 UTC m=+938.221146370" Feb 02 21:35:22 crc kubenswrapper[4789]: I0202 21:35:22.352788 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-4nrdt"] Feb 02 21:35:22 crc kubenswrapper[4789]: I0202 21:35:22.354019 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-4nrdt" Feb 02 21:35:22 crc kubenswrapper[4789]: I0202 21:35:22.356132 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 02 21:35:22 crc kubenswrapper[4789]: I0202 21:35:22.356283 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 02 21:35:22 crc kubenswrapper[4789]: I0202 21:35:22.356462 4789 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-dx95n" Feb 02 21:35:22 crc kubenswrapper[4789]: I0202 21:35:22.400155 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-4nrdt"] Feb 02 21:35:22 crc kubenswrapper[4789]: I0202 21:35:22.482356 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k29wm\" (UniqueName: \"kubernetes.io/projected/f2f3a656-c8a4-41a3-8791-c021db980c6d-kube-api-access-k29wm\") pod \"cert-manager-cainjector-5545bd876-4nrdt\" (UID: \"f2f3a656-c8a4-41a3-8791-c021db980c6d\") " pod="cert-manager/cert-manager-cainjector-5545bd876-4nrdt" Feb 02 21:35:22 crc kubenswrapper[4789]: I0202 21:35:22.482461 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f2f3a656-c8a4-41a3-8791-c021db980c6d-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-4nrdt\" (UID: \"f2f3a656-c8a4-41a3-8791-c021db980c6d\") " pod="cert-manager/cert-manager-cainjector-5545bd876-4nrdt" Feb 02 21:35:22 crc kubenswrapper[4789]: I0202 21:35:22.583430 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f2f3a656-c8a4-41a3-8791-c021db980c6d-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-4nrdt\" (UID: \"f2f3a656-c8a4-41a3-8791-c021db980c6d\") " pod="cert-manager/cert-manager-cainjector-5545bd876-4nrdt" Feb 02 21:35:22 crc kubenswrapper[4789]: I0202 21:35:22.583543 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k29wm\" (UniqueName: \"kubernetes.io/projected/f2f3a656-c8a4-41a3-8791-c021db980c6d-kube-api-access-k29wm\") pod \"cert-manager-cainjector-5545bd876-4nrdt\" (UID: \"f2f3a656-c8a4-41a3-8791-c021db980c6d\") " pod="cert-manager/cert-manager-cainjector-5545bd876-4nrdt" Feb 02 21:35:22 crc kubenswrapper[4789]: I0202 21:35:22.599831 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f2f3a656-c8a4-41a3-8791-c021db980c6d-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-4nrdt\" (UID: \"f2f3a656-c8a4-41a3-8791-c021db980c6d\") " pod="cert-manager/cert-manager-cainjector-5545bd876-4nrdt" Feb 02 21:35:22 crc kubenswrapper[4789]: I0202 21:35:22.599976 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k29wm\" (UniqueName: \"kubernetes.io/projected/f2f3a656-c8a4-41a3-8791-c021db980c6d-kube-api-access-k29wm\") pod \"cert-manager-cainjector-5545bd876-4nrdt\" (UID: \"f2f3a656-c8a4-41a3-8791-c021db980c6d\") " pod="cert-manager/cert-manager-cainjector-5545bd876-4nrdt" Feb 02 21:35:22 crc kubenswrapper[4789]: I0202 21:35:22.671865 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-4nrdt" Feb 02 21:35:23 crc kubenswrapper[4789]: I0202 21:35:23.109763 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-4nrdt"] Feb 02 21:35:23 crc kubenswrapper[4789]: I0202 21:35:23.522868 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-t4fd8"] Feb 02 21:35:23 crc kubenswrapper[4789]: I0202 21:35:23.524562 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-t4fd8" Feb 02 21:35:23 crc kubenswrapper[4789]: I0202 21:35:23.526629 4789 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-5ktxj" Feb 02 21:35:23 crc kubenswrapper[4789]: I0202 21:35:23.533234 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-t4fd8"] Feb 02 21:35:23 crc kubenswrapper[4789]: I0202 21:35:23.597764 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f3db5270-fd96-47f4-bce0-94ac69a0e9f4-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-t4fd8\" (UID: \"f3db5270-fd96-47f4-bce0-94ac69a0e9f4\") " pod="cert-manager/cert-manager-webhook-6888856db4-t4fd8" Feb 02 21:35:23 crc kubenswrapper[4789]: I0202 21:35:23.597845 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68bwr\" (UniqueName: \"kubernetes.io/projected/f3db5270-fd96-47f4-bce0-94ac69a0e9f4-kube-api-access-68bwr\") pod \"cert-manager-webhook-6888856db4-t4fd8\" (UID: \"f3db5270-fd96-47f4-bce0-94ac69a0e9f4\") " pod="cert-manager/cert-manager-webhook-6888856db4-t4fd8" Feb 02 21:35:23 crc kubenswrapper[4789]: I0202 21:35:23.698892 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f3db5270-fd96-47f4-bce0-94ac69a0e9f4-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-t4fd8\" (UID: \"f3db5270-fd96-47f4-bce0-94ac69a0e9f4\") " pod="cert-manager/cert-manager-webhook-6888856db4-t4fd8" Feb 02 21:35:23 crc kubenswrapper[4789]: I0202 21:35:23.698974 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68bwr\" (UniqueName: \"kubernetes.io/projected/f3db5270-fd96-47f4-bce0-94ac69a0e9f4-kube-api-access-68bwr\") pod \"cert-manager-webhook-6888856db4-t4fd8\" (UID: \"f3db5270-fd96-47f4-bce0-94ac69a0e9f4\") " pod="cert-manager/cert-manager-webhook-6888856db4-t4fd8" Feb 02 21:35:23 crc kubenswrapper[4789]: I0202 21:35:23.720240 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f3db5270-fd96-47f4-bce0-94ac69a0e9f4-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-t4fd8\" (UID: \"f3db5270-fd96-47f4-bce0-94ac69a0e9f4\") " pod="cert-manager/cert-manager-webhook-6888856db4-t4fd8" Feb 02 21:35:23 crc kubenswrapper[4789]: I0202 21:35:23.720741 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68bwr\" (UniqueName: \"kubernetes.io/projected/f3db5270-fd96-47f4-bce0-94ac69a0e9f4-kube-api-access-68bwr\") pod \"cert-manager-webhook-6888856db4-t4fd8\" (UID: \"f3db5270-fd96-47f4-bce0-94ac69a0e9f4\") " pod="cert-manager/cert-manager-webhook-6888856db4-t4fd8" Feb 02 21:35:23 crc 
Feb 02 21:35:23 crc kubenswrapper[4789]: I0202 21:35:23.948742 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-4nrdt" event={"ID":"f2f3a656-c8a4-41a3-8791-c021db980c6d","Type":"ContainerStarted","Data":"a80ecc973896897b3d25863b1c7d2d065dc2e11acd19ece2dcfadb36226e1749"}
Feb 02 21:35:24 crc kubenswrapper[4789]: I0202 21:35:24.301726 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-t4fd8"]
Feb 02 21:35:24 crc kubenswrapper[4789]: W0202 21:35:24.307972 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3db5270_fd96_47f4_bce0_94ac69a0e9f4.slice/crio-d6e1ee14c95f4ba70bc6c9d7ea6710c24b62912f63d9911537d6d20b74d0e4fc WatchSource:0}: Error finding container d6e1ee14c95f4ba70bc6c9d7ea6710c24b62912f63d9911537d6d20b74d0e4fc: Status 404 returned error can't find the container with id d6e1ee14c95f4ba70bc6c9d7ea6710c24b62912f63d9911537d6d20b74d0e4fc
Feb 02 21:35:24 crc kubenswrapper[4789]: I0202 21:35:24.955325 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-t4fd8" event={"ID":"f3db5270-fd96-47f4-bce0-94ac69a0e9f4","Type":"ContainerStarted","Data":"d6e1ee14c95f4ba70bc6c9d7ea6710c24b62912f63d9911537d6d20b74d0e4fc"}
Feb 02 21:35:27 crc kubenswrapper[4789]: I0202 21:35:27.974904 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-t4fd8" event={"ID":"f3db5270-fd96-47f4-bce0-94ac69a0e9f4","Type":"ContainerStarted","Data":"a5c685ab9cb7bd3679c838856688845ba0dcfb8ed36faad5df029c53d54d612a"}
Feb 02 21:35:27 crc kubenswrapper[4789]: I0202 21:35:27.975436 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-t4fd8"
Feb 02 21:35:27 crc kubenswrapper[4789]: I0202 21:35:27.977158 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-4nrdt" event={"ID":"f2f3a656-c8a4-41a3-8791-c021db980c6d","Type":"ContainerStarted","Data":"3ecc750112d0218cbdd9c528aa00824b0e851b5dcec772e357cfae80fbc962b0"}
Feb 02 21:35:27 crc kubenswrapper[4789]: I0202 21:35:27.995830 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-t4fd8" podStartSLOduration=1.5709188809999999 podStartE2EDuration="4.99580833s" podCreationTimestamp="2026-02-02 21:35:23 +0000 UTC" firstStartedPulling="2026-02-02 21:35:24.311975043 +0000 UTC m=+944.607000062" lastFinishedPulling="2026-02-02 21:35:27.736864482 +0000 UTC m=+948.031889511" observedRunningTime="2026-02-02 21:35:27.991528699 +0000 UTC m=+948.286553728" watchObservedRunningTime="2026-02-02 21:35:27.99580833 +0000 UTC m=+948.290833349"
Feb 02 21:35:28 crc kubenswrapper[4789]: I0202 21:35:28.013828 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-4nrdt" podStartSLOduration=1.40332903 podStartE2EDuration="6.013808171s" podCreationTimestamp="2026-02-02 21:35:22 +0000 UTC" firstStartedPulling="2026-02-02 21:35:23.120397391 +0000 UTC m=+943.415422430" lastFinishedPulling="2026-02-02 21:35:27.730876552 +0000 UTC m=+948.025901571" observedRunningTime="2026-02-02 21:35:28.013302376 +0000 UTC m=+948.308327425" watchObservedRunningTime="2026-02-02 21:35:28.013808171 +0000 UTC m=+948.308833190"
Feb 02 21:35:33 crc kubenswrapper[4789]: I0202 21:35:33.857504 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-t4fd8"
Feb 02 21:35:39 crc kubenswrapper[4789]: I0202 21:35:39.309983 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-p5b9q"]
Feb 02 21:35:39 crc kubenswrapper[4789]: I0202 21:35:39.311928 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-p5b9q"
Feb 02 21:35:39 crc kubenswrapper[4789]: I0202 21:35:39.314098 4789 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-72l5c"
Feb 02 21:35:39 crc kubenswrapper[4789]: I0202 21:35:39.329693 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-p5b9q"]
Feb 02 21:35:39 crc kubenswrapper[4789]: I0202 21:35:39.408144 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/25fd2392-6f0c-47da-bf7d-cec1cc21b429-bound-sa-token\") pod \"cert-manager-545d4d4674-p5b9q\" (UID: \"25fd2392-6f0c-47da-bf7d-cec1cc21b429\") " pod="cert-manager/cert-manager-545d4d4674-p5b9q"
Feb 02 21:35:39 crc kubenswrapper[4789]: I0202 21:35:39.408214 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scg2h\" (UniqueName: \"kubernetes.io/projected/25fd2392-6f0c-47da-bf7d-cec1cc21b429-kube-api-access-scg2h\") pod \"cert-manager-545d4d4674-p5b9q\" (UID: \"25fd2392-6f0c-47da-bf7d-cec1cc21b429\") " pod="cert-manager/cert-manager-545d4d4674-p5b9q"
Feb 02 21:35:39 crc kubenswrapper[4789]: I0202 21:35:39.510216 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/25fd2392-6f0c-47da-bf7d-cec1cc21b429-bound-sa-token\") pod \"cert-manager-545d4d4674-p5b9q\" (UID: \"25fd2392-6f0c-47da-bf7d-cec1cc21b429\") " pod="cert-manager/cert-manager-545d4d4674-p5b9q"
Feb 02 21:35:39 crc kubenswrapper[4789]: I0202 21:35:39.510351 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scg2h\" (UniqueName: \"kubernetes.io/projected/25fd2392-6f0c-47da-bf7d-cec1cc21b429-kube-api-access-scg2h\") pod \"cert-manager-545d4d4674-p5b9q\" (UID: \"25fd2392-6f0c-47da-bf7d-cec1cc21b429\") " pod="cert-manager/cert-manager-545d4d4674-p5b9q"
Feb 02 21:35:39 crc kubenswrapper[4789]: I0202 21:35:39.544522 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/25fd2392-6f0c-47da-bf7d-cec1cc21b429-bound-sa-token\") pod \"cert-manager-545d4d4674-p5b9q\" (UID: \"25fd2392-6f0c-47da-bf7d-cec1cc21b429\") " pod="cert-manager/cert-manager-545d4d4674-p5b9q"
Feb 02 21:35:39 crc kubenswrapper[4789]: I0202 21:35:39.545520 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scg2h\" (UniqueName: \"kubernetes.io/projected/25fd2392-6f0c-47da-bf7d-cec1cc21b429-kube-api-access-scg2h\") pod \"cert-manager-545d4d4674-p5b9q\" (UID: \"25fd2392-6f0c-47da-bf7d-cec1cc21b429\") " pod="cert-manager/cert-manager-545d4d4674-p5b9q"
Feb 02 21:35:39 crc kubenswrapper[4789]: I0202 21:35:39.642135 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-p5b9q"
Feb 02 21:35:39 crc kubenswrapper[4789]: I0202 21:35:39.868006 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-p5b9q"]
Feb 02 21:35:40 crc kubenswrapper[4789]: I0202 21:35:40.053080 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-p5b9q" event={"ID":"25fd2392-6f0c-47da-bf7d-cec1cc21b429","Type":"ContainerStarted","Data":"3d7b6f4f65cdc54fc2ceacc2ec4b302f7186caa4dda8f78ae1053097d82d5e13"}
Feb 02 21:35:40 crc kubenswrapper[4789]: I0202 21:35:40.053181 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-p5b9q" event={"ID":"25fd2392-6f0c-47da-bf7d-cec1cc21b429","Type":"ContainerStarted","Data":"cf3bae2a502d3092518b8e64d89fc513056b62bb6d306ff7b9b5d1469a676a37"}
Feb 02 21:35:40 crc kubenswrapper[4789]: I0202 21:35:40.068647 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-p5b9q" podStartSLOduration=1.068628682 podStartE2EDuration="1.068628682s" podCreationTimestamp="2026-02-02 21:35:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:35:40.065267957 +0000 UTC m=+960.360293006" watchObservedRunningTime="2026-02-02 21:35:40.068628682 +0000 UTC m=+960.363653701"
Feb 02 21:35:47 crc kubenswrapper[4789]: I0202 21:35:47.784173 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-dsxzh"]
Feb 02 21:35:47 crc kubenswrapper[4789]: I0202 21:35:47.785915 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-dsxzh"
Feb 02 21:35:47 crc kubenswrapper[4789]: I0202 21:35:47.789973 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-cldpl"
Feb 02 21:35:47 crc kubenswrapper[4789]: I0202 21:35:47.793240 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-dsxzh"]
Feb 02 21:35:47 crc kubenswrapper[4789]: I0202 21:35:47.798051 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Feb 02 21:35:47 crc kubenswrapper[4789]: I0202 21:35:47.799802 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Feb 02 21:35:47 crc kubenswrapper[4789]: I0202 21:35:47.935288 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9fz7\" (UniqueName: \"kubernetes.io/projected/1dd8296b-bcdc-4787-b8b1-9b2cf60b6851-kube-api-access-n9fz7\") pod \"openstack-operator-index-dsxzh\" (UID: \"1dd8296b-bcdc-4787-b8b1-9b2cf60b6851\") " pod="openstack-operators/openstack-operator-index-dsxzh"
Feb 02 21:35:48 crc kubenswrapper[4789]: I0202 21:35:48.036367 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9fz7\" (UniqueName: \"kubernetes.io/projected/1dd8296b-bcdc-4787-b8b1-9b2cf60b6851-kube-api-access-n9fz7\") pod \"openstack-operator-index-dsxzh\" (UID: \"1dd8296b-bcdc-4787-b8b1-9b2cf60b6851\") " pod="openstack-operators/openstack-operator-index-dsxzh"
Feb 02 21:35:48 crc kubenswrapper[4789]: I0202 21:35:48.060939 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9fz7\" (UniqueName: \"kubernetes.io/projected/1dd8296b-bcdc-4787-b8b1-9b2cf60b6851-kube-api-access-n9fz7\") pod \"openstack-operator-index-dsxzh\" (UID: \"1dd8296b-bcdc-4787-b8b1-9b2cf60b6851\") " pod="openstack-operators/openstack-operator-index-dsxzh"
Feb 02 21:35:48 crc kubenswrapper[4789]: I0202 21:35:48.109603 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-dsxzh"
Feb 02 21:35:48 crc kubenswrapper[4789]: I0202 21:35:48.569113 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-dsxzh"]
Feb 02 21:35:49 crc kubenswrapper[4789]: I0202 21:35:49.120881 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-dsxzh" event={"ID":"1dd8296b-bcdc-4787-b8b1-9b2cf60b6851","Type":"ContainerStarted","Data":"e949407ae78d3d776f4cced0d3234a01964cf05196bc00e83a59fbe5b720f2e2"}
Feb 02 21:35:51 crc kubenswrapper[4789]: I0202 21:35:51.163081 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-dsxzh"]
Feb 02 21:35:51 crc kubenswrapper[4789]: I0202 21:35:51.777963 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-xzwx8"]
Feb 02 21:35:51 crc kubenswrapper[4789]: I0202 21:35:51.779652 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-xzwx8"
Feb 02 21:35:51 crc kubenswrapper[4789]: I0202 21:35:51.786982 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xzwx8"]
Feb 02 21:35:51 crc kubenswrapper[4789]: I0202 21:35:51.897130 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmnwj\" (UniqueName: \"kubernetes.io/projected/05148ecc-f381-45e6-af59-a732c6d6e856-kube-api-access-lmnwj\") pod \"openstack-operator-index-xzwx8\" (UID: \"05148ecc-f381-45e6-af59-a732c6d6e856\") " pod="openstack-operators/openstack-operator-index-xzwx8"
Feb 02 21:35:51 crc kubenswrapper[4789]: I0202 21:35:51.999130 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmnwj\" (UniqueName: \"kubernetes.io/projected/05148ecc-f381-45e6-af59-a732c6d6e856-kube-api-access-lmnwj\") pod \"openstack-operator-index-xzwx8\" (UID: \"05148ecc-f381-45e6-af59-a732c6d6e856\") " pod="openstack-operators/openstack-operator-index-xzwx8"
Feb 02 21:35:52 crc kubenswrapper[4789]: I0202 21:35:52.034260 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmnwj\" (UniqueName: \"kubernetes.io/projected/05148ecc-f381-45e6-af59-a732c6d6e856-kube-api-access-lmnwj\") pod \"openstack-operator-index-xzwx8\" (UID: \"05148ecc-f381-45e6-af59-a732c6d6e856\") " pod="openstack-operators/openstack-operator-index-xzwx8"
Feb 02 21:35:52 crc kubenswrapper[4789]: I0202 21:35:52.105567 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-xzwx8"
Feb 02 21:35:52 crc kubenswrapper[4789]: I0202 21:35:52.144155 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-dsxzh" event={"ID":"1dd8296b-bcdc-4787-b8b1-9b2cf60b6851","Type":"ContainerStarted","Data":"d08be45bfcf4c1b52fa07758db51d3cddf54121020a62a40b4bfa2a2a734d16f"}
Feb 02 21:35:52 crc kubenswrapper[4789]: I0202 21:35:52.144322 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-dsxzh" podUID="1dd8296b-bcdc-4787-b8b1-9b2cf60b6851" containerName="registry-server" containerID="cri-o://d08be45bfcf4c1b52fa07758db51d3cddf54121020a62a40b4bfa2a2a734d16f" gracePeriod=2
Feb 02 21:35:52 crc kubenswrapper[4789]: I0202 21:35:52.177888 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-dsxzh" podStartSLOduration=1.838918746 podStartE2EDuration="5.177856986s" podCreationTimestamp="2026-02-02 21:35:47 +0000 UTC" firstStartedPulling="2026-02-02 21:35:48.576075728 +0000 UTC m=+968.871100787" lastFinishedPulling="2026-02-02 21:35:51.915013968 +0000 UTC m=+972.210039027" observedRunningTime="2026-02-02 21:35:52.160965647 +0000 UTC m=+972.455990706" watchObservedRunningTime="2026-02-02 21:35:52.177856986 +0000 UTC m=+972.472882045"
Feb 02 21:35:52 crc kubenswrapper[4789]: I0202 21:35:52.499722 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-dsxzh"
Feb 02 21:35:52 crc kubenswrapper[4789]: I0202 21:35:52.583767 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xzwx8"]
Feb 02 21:35:52 crc kubenswrapper[4789]: W0202 21:35:52.587163 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05148ecc_f381_45e6_af59_a732c6d6e856.slice/crio-a4016620dddd0456aa7bed63d95ba054a15343d21ef818e1662d1ea8718b4c8f WatchSource:0}: Error finding container a4016620dddd0456aa7bed63d95ba054a15343d21ef818e1662d1ea8718b4c8f: Status 404 returned error can't find the container with id a4016620dddd0456aa7bed63d95ba054a15343d21ef818e1662d1ea8718b4c8f
Feb 02 21:35:52 crc kubenswrapper[4789]: I0202 21:35:52.609392 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9fz7\" (UniqueName: \"kubernetes.io/projected/1dd8296b-bcdc-4787-b8b1-9b2cf60b6851-kube-api-access-n9fz7\") pod \"1dd8296b-bcdc-4787-b8b1-9b2cf60b6851\" (UID: \"1dd8296b-bcdc-4787-b8b1-9b2cf60b6851\") "
Feb 02 21:35:52 crc kubenswrapper[4789]: I0202 21:35:52.618036 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dd8296b-bcdc-4787-b8b1-9b2cf60b6851-kube-api-access-n9fz7" (OuterVolumeSpecName: "kube-api-access-n9fz7") pod "1dd8296b-bcdc-4787-b8b1-9b2cf60b6851" (UID: "1dd8296b-bcdc-4787-b8b1-9b2cf60b6851"). InnerVolumeSpecName "kube-api-access-n9fz7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 21:35:52 crc kubenswrapper[4789]: I0202 21:35:52.710666 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9fz7\" (UniqueName: \"kubernetes.io/projected/1dd8296b-bcdc-4787-b8b1-9b2cf60b6851-kube-api-access-n9fz7\") on node \"crc\" DevicePath \"\""
Feb 02 21:35:53 crc kubenswrapper[4789]: I0202 21:35:53.153753 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xzwx8" event={"ID":"05148ecc-f381-45e6-af59-a732c6d6e856","Type":"ContainerStarted","Data":"75d6d431c8fa44e1bb2a5ff1174a01e3d14e8b8cb5bce60dd4abb5bcafee8a1d"}
Feb 02 21:35:53 crc kubenswrapper[4789]: I0202 21:35:53.153829 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xzwx8" event={"ID":"05148ecc-f381-45e6-af59-a732c6d6e856","Type":"ContainerStarted","Data":"a4016620dddd0456aa7bed63d95ba054a15343d21ef818e1662d1ea8718b4c8f"}
Feb 02 21:35:53 crc kubenswrapper[4789]: I0202 21:35:53.155366 4789 generic.go:334] "Generic (PLEG): container finished" podID="1dd8296b-bcdc-4787-b8b1-9b2cf60b6851" containerID="d08be45bfcf4c1b52fa07758db51d3cddf54121020a62a40b4bfa2a2a734d16f" exitCode=0
Feb 02 21:35:53 crc kubenswrapper[4789]: I0202 21:35:53.155439 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-dsxzh" event={"ID":"1dd8296b-bcdc-4787-b8b1-9b2cf60b6851","Type":"ContainerDied","Data":"d08be45bfcf4c1b52fa07758db51d3cddf54121020a62a40b4bfa2a2a734d16f"}
Feb 02 21:35:53 crc kubenswrapper[4789]: I0202 21:35:53.155492 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-dsxzh"
Feb 02 21:35:53 crc kubenswrapper[4789]: I0202 21:35:53.155526 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-dsxzh" event={"ID":"1dd8296b-bcdc-4787-b8b1-9b2cf60b6851","Type":"ContainerDied","Data":"e949407ae78d3d776f4cced0d3234a01964cf05196bc00e83a59fbe5b720f2e2"}
Feb 02 21:35:53 crc kubenswrapper[4789]: I0202 21:35:53.155600 4789 scope.go:117] "RemoveContainer" containerID="d08be45bfcf4c1b52fa07758db51d3cddf54121020a62a40b4bfa2a2a734d16f"
Feb 02 21:35:53 crc kubenswrapper[4789]: I0202 21:35:53.179146 4789 scope.go:117] "RemoveContainer" containerID="d08be45bfcf4c1b52fa07758db51d3cddf54121020a62a40b4bfa2a2a734d16f"
Feb 02 21:35:53 crc kubenswrapper[4789]: E0202 21:35:53.179977 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d08be45bfcf4c1b52fa07758db51d3cddf54121020a62a40b4bfa2a2a734d16f\": container with ID starting with d08be45bfcf4c1b52fa07758db51d3cddf54121020a62a40b4bfa2a2a734d16f not found: ID does not exist" containerID="d08be45bfcf4c1b52fa07758db51d3cddf54121020a62a40b4bfa2a2a734d16f"
Feb 02 21:35:53 crc kubenswrapper[4789]: I0202 21:35:53.180021 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d08be45bfcf4c1b52fa07758db51d3cddf54121020a62a40b4bfa2a2a734d16f"} err="failed to get container status \"d08be45bfcf4c1b52fa07758db51d3cddf54121020a62a40b4bfa2a2a734d16f\": rpc error: code = NotFound desc = could not find container \"d08be45bfcf4c1b52fa07758db51d3cddf54121020a62a40b4bfa2a2a734d16f\": container with ID starting with d08be45bfcf4c1b52fa07758db51d3cddf54121020a62a40b4bfa2a2a734d16f not found: ID does not exist"
Feb 02 21:35:53 crc kubenswrapper[4789]:
I0202 21:35:53.181068 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-xzwx8" podStartSLOduration=2.09456749 podStartE2EDuration="2.181044834s" podCreationTimestamp="2026-02-02 21:35:51 +0000 UTC" firstStartedPulling="2026-02-02 21:35:52.592012699 +0000 UTC m=+972.887037748" lastFinishedPulling="2026-02-02 21:35:52.678490053 +0000 UTC m=+972.973515092" observedRunningTime="2026-02-02 21:35:53.1745367 +0000 UTC m=+973.469561759" watchObservedRunningTime="2026-02-02 21:35:53.181044834 +0000 UTC m=+973.476069893" Feb 02 21:35:53 crc kubenswrapper[4789]: I0202 21:35:53.214261 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-dsxzh"] Feb 02 21:35:53 crc kubenswrapper[4789]: I0202 21:35:53.221987 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-dsxzh"] Feb 02 21:35:54 crc kubenswrapper[4789]: I0202 21:35:54.432649 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dd8296b-bcdc-4787-b8b1-9b2cf60b6851" path="/var/lib/kubelet/pods/1dd8296b-bcdc-4787-b8b1-9b2cf60b6851/volumes" Feb 02 21:36:02 crc kubenswrapper[4789]: I0202 21:36:02.106774 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-xzwx8" Feb 02 21:36:02 crc kubenswrapper[4789]: I0202 21:36:02.107472 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-xzwx8" Feb 02 21:36:02 crc kubenswrapper[4789]: I0202 21:36:02.154630 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-xzwx8" Feb 02 21:36:02 crc kubenswrapper[4789]: I0202 21:36:02.259396 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-xzwx8" Feb 02 21:36:07 crc kubenswrapper[4789]: I0202 21:36:07.820337 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/6d4b642553982cbb8e7fd91a34c219bfd2554bbfec4ca764bf9a8af8f3phzkw"] Feb 02 21:36:07 crc kubenswrapper[4789]: E0202 21:36:07.821482 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dd8296b-bcdc-4787-b8b1-9b2cf60b6851" containerName="registry-server" Feb 02 21:36:07 crc kubenswrapper[4789]: I0202 21:36:07.821500 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dd8296b-bcdc-4787-b8b1-9b2cf60b6851" containerName="registry-server" Feb 02 21:36:07 crc kubenswrapper[4789]: I0202 21:36:07.821668 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dd8296b-bcdc-4787-b8b1-9b2cf60b6851" containerName="registry-server" Feb 02 21:36:07 crc kubenswrapper[4789]: I0202 21:36:07.822715 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/6d4b642553982cbb8e7fd91a34c219bfd2554bbfec4ca764bf9a8af8f3phzkw" Feb 02 21:36:07 crc kubenswrapper[4789]: I0202 21:36:07.825035 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-k4h66" Feb 02 21:36:07 crc kubenswrapper[4789]: I0202 21:36:07.840189 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/6d4b642553982cbb8e7fd91a34c219bfd2554bbfec4ca764bf9a8af8f3phzkw"] Feb 02 21:36:07 crc kubenswrapper[4789]: I0202 21:36:07.980415 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b517a7e4-68f4-4873-99aa-b62a18bc38b6-util\") pod \"6d4b642553982cbb8e7fd91a34c219bfd2554bbfec4ca764bf9a8af8f3phzkw\" (UID: \"b517a7e4-68f4-4873-99aa-b62a18bc38b6\") " pod="openstack-operators/6d4b642553982cbb8e7fd91a34c219bfd2554bbfec4ca764bf9a8af8f3phzkw" Feb 02 21:36:07 crc kubenswrapper[4789]: I0202 21:36:07.980552 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b517a7e4-68f4-4873-99aa-b62a18bc38b6-bundle\") pod \"6d4b642553982cbb8e7fd91a34c219bfd2554bbfec4ca764bf9a8af8f3phzkw\" (UID: \"b517a7e4-68f4-4873-99aa-b62a18bc38b6\") " pod="openstack-operators/6d4b642553982cbb8e7fd91a34c219bfd2554bbfec4ca764bf9a8af8f3phzkw" Feb 02 21:36:07 crc kubenswrapper[4789]: I0202 21:36:07.980680 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j4gm\" (UniqueName: \"kubernetes.io/projected/b517a7e4-68f4-4873-99aa-b62a18bc38b6-kube-api-access-7j4gm\") pod \"6d4b642553982cbb8e7fd91a34c219bfd2554bbfec4ca764bf9a8af8f3phzkw\" (UID: \"b517a7e4-68f4-4873-99aa-b62a18bc38b6\") " pod="openstack-operators/6d4b642553982cbb8e7fd91a34c219bfd2554bbfec4ca764bf9a8af8f3phzkw" Feb 02 21:36:08 crc kubenswrapper[4789]: I0202 21:36:08.082368 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b517a7e4-68f4-4873-99aa-b62a18bc38b6-bundle\") pod \"6d4b642553982cbb8e7fd91a34c219bfd2554bbfec4ca764bf9a8af8f3phzkw\" (UID: \"b517a7e4-68f4-4873-99aa-b62a18bc38b6\") " pod="openstack-operators/6d4b642553982cbb8e7fd91a34c219bfd2554bbfec4ca764bf9a8af8f3phzkw" Feb 02 21:36:08 crc kubenswrapper[4789]: I0202 21:36:08.082522 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j4gm\" (UniqueName: \"kubernetes.io/projected/b517a7e4-68f4-4873-99aa-b62a18bc38b6-kube-api-access-7j4gm\") pod \"6d4b642553982cbb8e7fd91a34c219bfd2554bbfec4ca764bf9a8af8f3phzkw\" (UID: \"b517a7e4-68f4-4873-99aa-b62a18bc38b6\") " pod="openstack-operators/6d4b642553982cbb8e7fd91a34c219bfd2554bbfec4ca764bf9a8af8f3phzkw" Feb 02 21:36:08 crc kubenswrapper[4789]: I0202 21:36:08.082695 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b517a7e4-68f4-4873-99aa-b62a18bc38b6-util\") pod \"6d4b642553982cbb8e7fd91a34c219bfd2554bbfec4ca764bf9a8af8f3phzkw\" (UID: \"b517a7e4-68f4-4873-99aa-b62a18bc38b6\") " pod="openstack-operators/6d4b642553982cbb8e7fd91a34c219bfd2554bbfec4ca764bf9a8af8f3phzkw" Feb 02 21:36:08 crc kubenswrapper[4789]: I0202 21:36:08.083712 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/b517a7e4-68f4-4873-99aa-b62a18bc38b6-util\") pod \"6d4b642553982cbb8e7fd91a34c219bfd2554bbfec4ca764bf9a8af8f3phzkw\" (UID: \"b517a7e4-68f4-4873-99aa-b62a18bc38b6\") " pod="openstack-operators/6d4b642553982cbb8e7fd91a34c219bfd2554bbfec4ca764bf9a8af8f3phzkw" Feb 02 21:36:08 crc kubenswrapper[4789]: I0202 21:36:08.083771 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b517a7e4-68f4-4873-99aa-b62a18bc38b6-bundle\") pod \"6d4b642553982cbb8e7fd91a34c219bfd2554bbfec4ca764bf9a8af8f3phzkw\" (UID: \"b517a7e4-68f4-4873-99aa-b62a18bc38b6\") " pod="openstack-operators/6d4b642553982cbb8e7fd91a34c219bfd2554bbfec4ca764bf9a8af8f3phzkw" Feb 02 21:36:08 crc kubenswrapper[4789]: I0202 21:36:08.109391 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j4gm\" (UniqueName: \"kubernetes.io/projected/b517a7e4-68f4-4873-99aa-b62a18bc38b6-kube-api-access-7j4gm\") pod \"6d4b642553982cbb8e7fd91a34c219bfd2554bbfec4ca764bf9a8af8f3phzkw\" (UID: \"b517a7e4-68f4-4873-99aa-b62a18bc38b6\") " pod="openstack-operators/6d4b642553982cbb8e7fd91a34c219bfd2554bbfec4ca764bf9a8af8f3phzkw" Feb 02 21:36:08 crc kubenswrapper[4789]: I0202 21:36:08.146017 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/6d4b642553982cbb8e7fd91a34c219bfd2554bbfec4ca764bf9a8af8f3phzkw" Feb 02 21:36:08 crc kubenswrapper[4789]: I0202 21:36:08.408735 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/6d4b642553982cbb8e7fd91a34c219bfd2554bbfec4ca764bf9a8af8f3phzkw"] Feb 02 21:36:09 crc kubenswrapper[4789]: I0202 21:36:09.307236 4789 generic.go:334] "Generic (PLEG): container finished" podID="b517a7e4-68f4-4873-99aa-b62a18bc38b6" containerID="a8752fedcee5690b0c06da7b8281cabc525472e7298384eb733ded6f16c1cc7f" exitCode=0 Feb 02 21:36:09 crc kubenswrapper[4789]: I0202 21:36:09.307321 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6d4b642553982cbb8e7fd91a34c219bfd2554bbfec4ca764bf9a8af8f3phzkw" event={"ID":"b517a7e4-68f4-4873-99aa-b62a18bc38b6","Type":"ContainerDied","Data":"a8752fedcee5690b0c06da7b8281cabc525472e7298384eb733ded6f16c1cc7f"} Feb 02 21:36:09 crc kubenswrapper[4789]: I0202 21:36:09.307404 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6d4b642553982cbb8e7fd91a34c219bfd2554bbfec4ca764bf9a8af8f3phzkw" event={"ID":"b517a7e4-68f4-4873-99aa-b62a18bc38b6","Type":"ContainerStarted","Data":"f1e991dd6f08819320089d275e336857a6db149e5247550609fdc58fe7f2f0d9"} Feb 02 21:36:10 crc kubenswrapper[4789]: I0202 21:36:10.319814 4789 generic.go:334] "Generic (PLEG): container finished" podID="b517a7e4-68f4-4873-99aa-b62a18bc38b6" containerID="b7d0a04c19c0ab19b29fbdae7d8f254f7266a32674351c1682b18857c686749e" exitCode=0 Feb 02 21:36:10 crc kubenswrapper[4789]: I0202 21:36:10.320112 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6d4b642553982cbb8e7fd91a34c219bfd2554bbfec4ca764bf9a8af8f3phzkw" event={"ID":"b517a7e4-68f4-4873-99aa-b62a18bc38b6","Type":"ContainerDied","Data":"b7d0a04c19c0ab19b29fbdae7d8f254f7266a32674351c1682b18857c686749e"} Feb 02 21:36:11 crc kubenswrapper[4789]: I0202 21:36:11.332744 4789 generic.go:334] "Generic (PLEG): container finished" podID="b517a7e4-68f4-4873-99aa-b62a18bc38b6" containerID="bd2da142f02aa4680aef1372d23af8166e1908c4ed987063c82cf4609f670d7c" exitCode=0 Feb 02 21:36:11 crc kubenswrapper[4789]: I0202 21:36:11.332821 4789 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6d4b642553982cbb8e7fd91a34c219bfd2554bbfec4ca764bf9a8af8f3phzkw" event={"ID":"b517a7e4-68f4-4873-99aa-b62a18bc38b6","Type":"ContainerDied","Data":"bd2da142f02aa4680aef1372d23af8166e1908c4ed987063c82cf4609f670d7c"} Feb 02 21:36:12 crc kubenswrapper[4789]: I0202 21:36:12.711117 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/6d4b642553982cbb8e7fd91a34c219bfd2554bbfec4ca764bf9a8af8f3phzkw" Feb 02 21:36:12 crc kubenswrapper[4789]: I0202 21:36:12.851787 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b517a7e4-68f4-4873-99aa-b62a18bc38b6-util\") pod \"b517a7e4-68f4-4873-99aa-b62a18bc38b6\" (UID: \"b517a7e4-68f4-4873-99aa-b62a18bc38b6\") " Feb 02 21:36:12 crc kubenswrapper[4789]: I0202 21:36:12.851873 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b517a7e4-68f4-4873-99aa-b62a18bc38b6-bundle\") pod \"b517a7e4-68f4-4873-99aa-b62a18bc38b6\" (UID: \"b517a7e4-68f4-4873-99aa-b62a18bc38b6\") " Feb 02 21:36:12 crc kubenswrapper[4789]: I0202 21:36:12.851958 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7j4gm\" (UniqueName: \"kubernetes.io/projected/b517a7e4-68f4-4873-99aa-b62a18bc38b6-kube-api-access-7j4gm\") pod \"b517a7e4-68f4-4873-99aa-b62a18bc38b6\" (UID: \"b517a7e4-68f4-4873-99aa-b62a18bc38b6\") " Feb 02 21:36:12 crc kubenswrapper[4789]: I0202 21:36:12.852672 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b517a7e4-68f4-4873-99aa-b62a18bc38b6-bundle" (OuterVolumeSpecName: "bundle") pod "b517a7e4-68f4-4873-99aa-b62a18bc38b6" (UID: "b517a7e4-68f4-4873-99aa-b62a18bc38b6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:36:12 crc kubenswrapper[4789]: I0202 21:36:12.853311 4789 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b517a7e4-68f4-4873-99aa-b62a18bc38b6-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:36:12 crc kubenswrapper[4789]: I0202 21:36:12.860086 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b517a7e4-68f4-4873-99aa-b62a18bc38b6-kube-api-access-7j4gm" (OuterVolumeSpecName: "kube-api-access-7j4gm") pod "b517a7e4-68f4-4873-99aa-b62a18bc38b6" (UID: "b517a7e4-68f4-4873-99aa-b62a18bc38b6"). InnerVolumeSpecName "kube-api-access-7j4gm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:36:12 crc kubenswrapper[4789]: I0202 21:36:12.865303 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b517a7e4-68f4-4873-99aa-b62a18bc38b6-util" (OuterVolumeSpecName: "util") pod "b517a7e4-68f4-4873-99aa-b62a18bc38b6" (UID: "b517a7e4-68f4-4873-99aa-b62a18bc38b6"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:36:12 crc kubenswrapper[4789]: I0202 21:36:12.954764 4789 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b517a7e4-68f4-4873-99aa-b62a18bc38b6-util\") on node \"crc\" DevicePath \"\"" Feb 02 21:36:12 crc kubenswrapper[4789]: I0202 21:36:12.954810 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7j4gm\" (UniqueName: \"kubernetes.io/projected/b517a7e4-68f4-4873-99aa-b62a18bc38b6-kube-api-access-7j4gm\") on node \"crc\" DevicePath \"\"" Feb 02 21:36:13 crc kubenswrapper[4789]: I0202 21:36:13.352463 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6d4b642553982cbb8e7fd91a34c219bfd2554bbfec4ca764bf9a8af8f3phzkw" event={"ID":"b517a7e4-68f4-4873-99aa-b62a18bc38b6","Type":"ContainerDied","Data":"f1e991dd6f08819320089d275e336857a6db149e5247550609fdc58fe7f2f0d9"} Feb 02 21:36:13 crc kubenswrapper[4789]: I0202 21:36:13.352536 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1e991dd6f08819320089d275e336857a6db149e5247550609fdc58fe7f2f0d9" Feb 02 21:36:13 crc kubenswrapper[4789]: I0202 21:36:13.352654 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/6d4b642553982cbb8e7fd91a34c219bfd2554bbfec4ca764bf9a8af8f3phzkw" Feb 02 21:36:22 crc kubenswrapper[4789]: I0202 21:36:22.841782 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 21:36:22 crc kubenswrapper[4789]: I0202 21:36:22.842495 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 21:36:23 crc kubenswrapper[4789]: I0202 21:36:23.166110 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-76c676d8b-bw7bj"] Feb 02 21:36:23 crc kubenswrapper[4789]: E0202 21:36:23.166486 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b517a7e4-68f4-4873-99aa-b62a18bc38b6" containerName="util" Feb 02 21:36:23 crc kubenswrapper[4789]: I0202 21:36:23.166505 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="b517a7e4-68f4-4873-99aa-b62a18bc38b6" containerName="util" Feb 02 21:36:23 crc kubenswrapper[4789]: E0202 21:36:23.166525 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b517a7e4-68f4-4873-99aa-b62a18bc38b6" containerName="extract" Feb 02 21:36:23 crc kubenswrapper[4789]: I0202 21:36:23.166538 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="b517a7e4-68f4-4873-99aa-b62a18bc38b6" containerName="extract" Feb 02 21:36:23 crc kubenswrapper[4789]: E0202 21:36:23.166556 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b517a7e4-68f4-4873-99aa-b62a18bc38b6" containerName="pull" Feb 02 21:36:23 crc kubenswrapper[4789]: I0202 21:36:23.166570 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="b517a7e4-68f4-4873-99aa-b62a18bc38b6" containerName="pull" Feb 02 21:36:23 crc kubenswrapper[4789]: I0202 21:36:23.166794 4789 
memory_manager.go:354] "RemoveStaleState removing state" podUID="b517a7e4-68f4-4873-99aa-b62a18bc38b6" containerName="extract" Feb 02 21:36:23 crc kubenswrapper[4789]: I0202 21:36:23.167389 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-76c676d8b-bw7bj" Feb 02 21:36:23 crc kubenswrapper[4789]: I0202 21:36:23.171476 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-fth69" Feb 02 21:36:23 crc kubenswrapper[4789]: I0202 21:36:23.191523 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-76c676d8b-bw7bj"] Feb 02 21:36:23 crc kubenswrapper[4789]: I0202 21:36:23.302753 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7bdv\" (UniqueName: \"kubernetes.io/projected/fa6fc056-865a-4a4d-8fa6-f7615afd06b1-kube-api-access-z7bdv\") pod \"openstack-operator-controller-init-76c676d8b-bw7bj\" (UID: \"fa6fc056-865a-4a4d-8fa6-f7615afd06b1\") " pod="openstack-operators/openstack-operator-controller-init-76c676d8b-bw7bj" Feb 02 21:36:23 crc kubenswrapper[4789]: I0202 21:36:23.404533 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7bdv\" (UniqueName: \"kubernetes.io/projected/fa6fc056-865a-4a4d-8fa6-f7615afd06b1-kube-api-access-z7bdv\") pod \"openstack-operator-controller-init-76c676d8b-bw7bj\" (UID: \"fa6fc056-865a-4a4d-8fa6-f7615afd06b1\") " pod="openstack-operators/openstack-operator-controller-init-76c676d8b-bw7bj" Feb 02 21:36:23 crc kubenswrapper[4789]: I0202 21:36:23.429545 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7bdv\" (UniqueName: \"kubernetes.io/projected/fa6fc056-865a-4a4d-8fa6-f7615afd06b1-kube-api-access-z7bdv\") pod \"openstack-operator-controller-init-76c676d8b-bw7bj\" (UID: \"fa6fc056-865a-4a4d-8fa6-f7615afd06b1\") " pod="openstack-operators/openstack-operator-controller-init-76c676d8b-bw7bj" Feb 02 21:36:23 crc kubenswrapper[4789]: I0202 21:36:23.490676 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-76c676d8b-bw7bj" Feb 02 21:36:23 crc kubenswrapper[4789]: I0202 21:36:23.918393 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-76c676d8b-bw7bj"] Feb 02 21:36:24 crc kubenswrapper[4789]: I0202 21:36:24.429600 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-76c676d8b-bw7bj" event={"ID":"fa6fc056-865a-4a4d-8fa6-f7615afd06b1","Type":"ContainerStarted","Data":"932116f65dc819cc5fac4c1eec5a3490db072afc72f7bb69d26e764b4931debb"} Feb 02 21:36:28 crc kubenswrapper[4789]: I0202 21:36:28.472817 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-76c676d8b-bw7bj" event={"ID":"fa6fc056-865a-4a4d-8fa6-f7615afd06b1","Type":"ContainerStarted","Data":"03a2b0a84958765ad6ed3a7352f29f923907fcfec050e6c5bfa17cd92975fb34"} Feb 02 21:36:28 crc kubenswrapper[4789]: I0202 21:36:28.473521 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-76c676d8b-bw7bj" Feb 02 21:36:28 crc kubenswrapper[4789]: I0202 21:36:28.529852 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-76c676d8b-bw7bj" podStartSLOduration=1.695714009 podStartE2EDuration="5.529826909s" podCreationTimestamp="2026-02-02 21:36:23 +0000 UTC" firstStartedPulling="2026-02-02 21:36:23.930750531 +0000 UTC m=+1004.225775550" lastFinishedPulling="2026-02-02 21:36:27.764863411 +0000 UTC m=+1008.059888450" observedRunningTime="2026-02-02 21:36:28.526458853 +0000 UTC m=+1008.821483922" watchObservedRunningTime="2026-02-02 21:36:28.529826909 +0000 UTC m=+1008.824851968" Feb 02 21:36:33 crc kubenswrapper[4789]: I0202 21:36:33.494822 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-76c676d8b-bw7bj" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.131379 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-fc589b45f-7l6fz"] Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.132980 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-7l6fz" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.135700 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-9c2bn" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.150407 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-866f9bb544-47rnc"] Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.154030 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-866f9bb544-47rnc" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.162467 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-fc589b45f-7l6fz"] Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.163811 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-c7kjf" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.170035 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-8f4c5cb64-dntn6"] Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.171100 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-dntn6" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.172452 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-866f9bb544-47rnc"] Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.174756 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-sws87" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.193207 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5d77f4dbc9-v5pw2"] Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.194288 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5d77f4dbc9-v5pw2" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.200021 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-tpqmv" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.204823 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-8f4c5cb64-dntn6"] Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.232906 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-65dc6c8d9c-rgs76"] Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.233901 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-rgs76" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.239816 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-79dbh" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.239816 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vc7v\" (UniqueName: \"kubernetes.io/projected/0a1fe831-76ff-4718-9217-34c72110e718-kube-api-access-6vc7v\") pod \"cinder-operator-controller-manager-866f9bb544-47rnc\" (UID: \"0a1fe831-76ff-4718-9217-34c72110e718\") " pod="openstack-operators/cinder-operator-controller-manager-866f9bb544-47rnc" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.239881 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jmgc\" (UniqueName: \"kubernetes.io/projected/56cea7a2-1c74-45e8-ae6e-e5a30b71df3e-kube-api-access-9jmgc\") pod \"barbican-operator-controller-manager-fc589b45f-7l6fz\" (UID: \"56cea7a2-1c74-45e8-ae6e-e5a30b71df3e\") " pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-7l6fz" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.245645 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-pg282"] Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.246390 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-pg282" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.248078 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-2qzs4" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.249665 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5d77f4dbc9-v5pw2"] Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.254017 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-65dc6c8d9c-rgs76"] Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.259281 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-r68x4"] Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.259994 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-r68x4" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.262041 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.262044 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-b9cgc" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.291808 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-r68x4"] Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.319483 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-pg282"] Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.332293 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-87bd9d46f-b8vjj"] Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.333317 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-b8vjj" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.340304 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-bhllf" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.341093 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgjj7\" (UniqueName: \"kubernetes.io/projected/1ac077aa-b2d2-41b7-aa3d-061e3e7b41dc-kube-api-access-pgjj7\") pod \"horizon-operator-controller-manager-5fb775575f-pg282\" (UID: \"1ac077aa-b2d2-41b7-aa3d-061e3e7b41dc\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-pg282" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.341143 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jmgc\" (UniqueName: \"kubernetes.io/projected/56cea7a2-1c74-45e8-ae6e-e5a30b71df3e-kube-api-access-9jmgc\") pod \"barbican-operator-controller-manager-fc589b45f-7l6fz\" (UID: \"56cea7a2-1c74-45e8-ae6e-e5a30b71df3e\") " pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-7l6fz" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.341182 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8l2t\" (UniqueName: \"kubernetes.io/projected/92bfe813-4e92-44da-8e1a-092164813972-kube-api-access-v8l2t\") pod \"glance-operator-controller-manager-5d77f4dbc9-v5pw2\" (UID: \"92bfe813-4e92-44da-8e1a-092164813972\") " pod="openstack-operators/glance-operator-controller-manager-5d77f4dbc9-v5pw2" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.341219 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmmvv\" (UniqueName: \"kubernetes.io/projected/46f273e2-4f9b-4436-815b-72fcfd1f0b96-kube-api-access-kmmvv\") pod \"designate-operator-controller-manager-8f4c5cb64-dntn6\" (UID: \"46f273e2-4f9b-4436-815b-72fcfd1f0b96\") " pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-dntn6" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.341245 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-5czfk\" (UniqueName: \"kubernetes.io/projected/0bdf708d-44ed-44cf-af93-f8a2aec7e9ed-kube-api-access-5czfk\") pod \"heat-operator-controller-manager-65dc6c8d9c-rgs76\" (UID: \"0bdf708d-44ed-44cf-af93-f8a2aec7e9ed\") " pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-rgs76" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.341274 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vc7v\" (UniqueName: \"kubernetes.io/projected/0a1fe831-76ff-4718-9217-34c72110e718-kube-api-access-6vc7v\") pod \"cinder-operator-controller-manager-866f9bb544-47rnc\" (UID: \"0a1fe831-76ff-4718-9217-34c72110e718\") " pod="openstack-operators/cinder-operator-controller-manager-866f9bb544-47rnc" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.341992 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-64469b487f-svkqs"] Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.342680 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-svkqs" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.345713 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-whvjz" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.359990 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vc7v\" (UniqueName: \"kubernetes.io/projected/0a1fe831-76ff-4718-9217-34c72110e718-kube-api-access-6vc7v\") pod \"cinder-operator-controller-manager-866f9bb544-47rnc\" (UID: \"0a1fe831-76ff-4718-9217-34c72110e718\") " pod="openstack-operators/cinder-operator-controller-manager-866f9bb544-47rnc" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.368052 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jmgc\" (UniqueName: \"kubernetes.io/projected/56cea7a2-1c74-45e8-ae6e-e5a30b71df3e-kube-api-access-9jmgc\") pod \"barbican-operator-controller-manager-fc589b45f-7l6fz\" (UID: \"56cea7a2-1c74-45e8-ae6e-e5a30b71df3e\") " pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-7l6fz" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.368187 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7775d87d9d-hzqch"] Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.369207 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7775d87d9d-hzqch" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.371140 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-snxwf" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.399000 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7775d87d9d-hzqch"] Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.415678 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-87bd9d46f-b8vjj"] Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.516614 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c8853e21-c77c-4220-acb8-8e469cbca718-cert\") pod \"infra-operator-controller-manager-79955696d6-r68x4\" (UID: \"c8853e21-c77c-4220-acb8-8e469cbca718\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-r68x4" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.516968 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8l2t\" (UniqueName: \"kubernetes.io/projected/92bfe813-4e92-44da-8e1a-092164813972-kube-api-access-v8l2t\") pod \"glance-operator-controller-manager-5d77f4dbc9-v5pw2\" (UID: \"92bfe813-4e92-44da-8e1a-092164813972\") " pod="openstack-operators/glance-operator-controller-manager-5d77f4dbc9-v5pw2" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.517106 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmmvv\" (UniqueName: \"kubernetes.io/projected/46f273e2-4f9b-4436-815b-72fcfd1f0b96-kube-api-access-kmmvv\") pod \"designate-operator-controller-manager-8f4c5cb64-dntn6\" (UID: \"46f273e2-4f9b-4436-815b-72fcfd1f0b96\") " pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-dntn6" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.517234 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2gjj\" (UniqueName: \"kubernetes.io/projected/c8853e21-c77c-4220-acb8-8e469cbca718-kube-api-access-b2gjj\") pod \"infra-operator-controller-manager-79955696d6-r68x4\" (UID: \"c8853e21-c77c-4220-acb8-8e469cbca718\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-r68x4" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.517353 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5czfk\" (UniqueName: \"kubernetes.io/projected/0bdf708d-44ed-44cf-af93-f8a2aec7e9ed-kube-api-access-5czfk\") pod \"heat-operator-controller-manager-65dc6c8d9c-rgs76\" (UID: \"0bdf708d-44ed-44cf-af93-f8a2aec7e9ed\") " pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-rgs76" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.517504 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8rk6\" (UniqueName: \"kubernetes.io/projected/bd53908a-497a-4e78-aa27-d608e94d1723-kube-api-access-j8rk6\") pod \"keystone-operator-controller-manager-64469b487f-svkqs\" (UID: \"bd53908a-497a-4e78-aa27-d608e94d1723\") " pod="openstack-operators/keystone-operator-controller-manager-64469b487f-svkqs" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.517640 4789 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq9wf\" (UniqueName: \"kubernetes.io/projected/27a24f44-b811-4f28-84b5-88504deaae1c-kube-api-access-wq9wf\") pod \"ironic-operator-controller-manager-87bd9d46f-b8vjj\" (UID: \"27a24f44-b811-4f28-84b5-88504deaae1c\") " pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-b8vjj" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.517751 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgjj7\" (UniqueName: \"kubernetes.io/projected/1ac077aa-b2d2-41b7-aa3d-061e3e7b41dc-kube-api-access-pgjj7\") pod \"horizon-operator-controller-manager-5fb775575f-pg282\" (UID: \"1ac077aa-b2d2-41b7-aa3d-061e3e7b41dc\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-pg282" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.518864 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-7l6fz" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.519760 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-866f9bb544-47rnc" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.525282 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-64469b487f-svkqs"] Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.534626 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-5zgft"] Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.535347 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-5zgft" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.549861 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-576995988b-qcnc8"] Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.550306 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-qsrmt" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.550640 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-576995988b-qcnc8" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.565936 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-lbmjq" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.567355 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8l2t\" (UniqueName: \"kubernetes.io/projected/92bfe813-4e92-44da-8e1a-092164813972-kube-api-access-v8l2t\") pod \"glance-operator-controller-manager-5d77f4dbc9-v5pw2\" (UID: \"92bfe813-4e92-44da-8e1a-092164813972\") " pod="openstack-operators/glance-operator-controller-manager-5d77f4dbc9-v5pw2" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.567750 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-5zgft"] Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.569873 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmmvv\" (UniqueName: \"kubernetes.io/projected/46f273e2-4f9b-4436-815b-72fcfd1f0b96-kube-api-access-kmmvv\") pod \"designate-operator-controller-manager-8f4c5cb64-dntn6\" (UID: \"46f273e2-4f9b-4436-815b-72fcfd1f0b96\") " pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-dntn6" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.595714 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-576995988b-qcnc8"] Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.597944 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgjj7\" (UniqueName: \"kubernetes.io/projected/1ac077aa-b2d2-41b7-aa3d-061e3e7b41dc-kube-api-access-pgjj7\") pod \"horizon-operator-controller-manager-5fb775575f-pg282\" (UID: \"1ac077aa-b2d2-41b7-aa3d-061e3e7b41dc\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-pg282" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.601608 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5czfk\" (UniqueName: \"kubernetes.io/projected/0bdf708d-44ed-44cf-af93-f8a2aec7e9ed-kube-api-access-5czfk\") pod \"heat-operator-controller-manager-65dc6c8d9c-rgs76\" (UID: \"0bdf708d-44ed-44cf-af93-f8a2aec7e9ed\") " pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-rgs76" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.623722 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5644b66645-mpkrm"] Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.624537 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5644b66645-mpkrm" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.626639 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpw8w\" (UniqueName: \"kubernetes.io/projected/e7c2322e-e6a7-4745-ab83-4ba56575e037-kube-api-access-wpw8w\") pod \"manila-operator-controller-manager-7775d87d9d-hzqch\" (UID: \"e7c2322e-e6a7-4745-ab83-4ba56575e037\") " pod="openstack-operators/manila-operator-controller-manager-7775d87d9d-hzqch" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.626677 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8rk6\" (UniqueName: \"kubernetes.io/projected/bd53908a-497a-4e78-aa27-d608e94d1723-kube-api-access-j8rk6\") pod \"keystone-operator-controller-manager-64469b487f-svkqs\" (UID: \"bd53908a-497a-4e78-aa27-d608e94d1723\") " pod="openstack-operators/keystone-operator-controller-manager-64469b487f-svkqs" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.626709 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq9wf\" (UniqueName: \"kubernetes.io/projected/27a24f44-b811-4f28-84b5-88504deaae1c-kube-api-access-wq9wf\") pod \"ironic-operator-controller-manager-87bd9d46f-b8vjj\" (UID: \"27a24f44-b811-4f28-84b5-88504deaae1c\") " pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-b8vjj" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.626792 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c8853e21-c77c-4220-acb8-8e469cbca718-cert\") pod \"infra-operator-controller-manager-79955696d6-r68x4\" (UID: \"c8853e21-c77c-4220-acb8-8e469cbca718\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-r68x4" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.626810 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwk4m\" (UniqueName: \"kubernetes.io/projected/0b11c2a0-977e-46f8-bab7-5d812c8a35f9-kube-api-access-wwk4m\") pod \"mariadb-operator-controller-manager-67bf948998-5zgft\" (UID: \"0b11c2a0-977e-46f8-bab7-5d812c8a35f9\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-5zgft" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.626840 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww5fd\" (UniqueName: \"kubernetes.io/projected/546b235a-9844-4eff-9e93-91fc4c8c1c0c-kube-api-access-ww5fd\") pod \"nova-operator-controller-manager-5644b66645-mpkrm\" (UID: \"546b235a-9844-4eff-9e93-91fc4c8c1c0c\") " pod="openstack-operators/nova-operator-controller-manager-5644b66645-mpkrm" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.626878 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9svg\" (UniqueName: \"kubernetes.io/projected/1fd01978-b3df-4a1c-a650-e6b182389a8d-kube-api-access-h9svg\") pod \"neutron-operator-controller-manager-576995988b-qcnc8\" (UID: \"1fd01978-b3df-4a1c-a650-e6b182389a8d\") " pod="openstack-operators/neutron-operator-controller-manager-576995988b-qcnc8" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.626900 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2gjj\" (UniqueName: 
\"kubernetes.io/projected/c8853e21-c77c-4220-acb8-8e469cbca718-kube-api-access-b2gjj\") pod \"infra-operator-controller-manager-79955696d6-r68x4\" (UID: \"c8853e21-c77c-4220-acb8-8e469cbca718\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-r68x4" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.630696 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5644b66645-mpkrm"] Feb 02 21:36:52 crc kubenswrapper[4789]: E0202 21:36:52.630964 4789 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 21:36:52 crc kubenswrapper[4789]: E0202 21:36:52.631031 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8853e21-c77c-4220-acb8-8e469cbca718-cert podName:c8853e21-c77c-4220-acb8-8e469cbca718 nodeName:}" failed. No retries permitted until 2026-02-02 21:36:53.131010954 +0000 UTC m=+1033.426035973 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c8853e21-c77c-4220-acb8-8e469cbca718-cert") pod "infra-operator-controller-manager-79955696d6-r68x4" (UID: "c8853e21-c77c-4220-acb8-8e469cbca718") : secret "infra-operator-webhook-server-cert" not found Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.633948 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-cbj86" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.645626 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7b89ddb58-gxpg7"] Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.646640 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-gxpg7" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.660571 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-5t5lv" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.662448 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq9wf\" (UniqueName: \"kubernetes.io/projected/27a24f44-b811-4f28-84b5-88504deaae1c-kube-api-access-wq9wf\") pod \"ironic-operator-controller-manager-87bd9d46f-b8vjj\" (UID: \"27a24f44-b811-4f28-84b5-88504deaae1c\") " pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-b8vjj" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.665125 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2gjj\" (UniqueName: \"kubernetes.io/projected/c8853e21-c77c-4220-acb8-8e469cbca718-kube-api-access-b2gjj\") pod \"infra-operator-controller-manager-79955696d6-r68x4\" (UID: \"c8853e21-c77c-4220-acb8-8e469cbca718\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-r68x4" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.670244 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8rk6\" (UniqueName: \"kubernetes.io/projected/bd53908a-497a-4e78-aa27-d608e94d1723-kube-api-access-j8rk6\") pod \"keystone-operator-controller-manager-64469b487f-svkqs\" (UID: \"bd53908a-497a-4e78-aa27-d608e94d1723\") " pod="openstack-operators/keystone-operator-controller-manager-64469b487f-svkqs" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.672477 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7b89ddb58-gxpg7"] Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.716996 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-svkqs" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.727551 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hcc2\" (UniqueName: \"kubernetes.io/projected/31505f8d-d5db-47b0-a3c7-38b45e6a6997-kube-api-access-6hcc2\") pod \"octavia-operator-controller-manager-7b89ddb58-gxpg7\" (UID: \"31505f8d-d5db-47b0-a3c7-38b45e6a6997\") " pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-gxpg7" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.727649 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwk4m\" (UniqueName: \"kubernetes.io/projected/0b11c2a0-977e-46f8-bab7-5d812c8a35f9-kube-api-access-wwk4m\") pod \"mariadb-operator-controller-manager-67bf948998-5zgft\" (UID: \"0b11c2a0-977e-46f8-bab7-5d812c8a35f9\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-5zgft" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.727690 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww5fd\" (UniqueName: \"kubernetes.io/projected/546b235a-9844-4eff-9e93-91fc4c8c1c0c-kube-api-access-ww5fd\") pod \"nova-operator-controller-manager-5644b66645-mpkrm\" (UID: \"546b235a-9844-4eff-9e93-91fc4c8c1c0c\") " pod="openstack-operators/nova-operator-controller-manager-5644b66645-mpkrm" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.727722 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9svg\" (UniqueName: \"kubernetes.io/projected/1fd01978-b3df-4a1c-a650-e6b182389a8d-kube-api-access-h9svg\") pod \"neutron-operator-controller-manager-576995988b-qcnc8\" (UID: \"1fd01978-b3df-4a1c-a650-e6b182389a8d\") " pod="openstack-operators/neutron-operator-controller-manager-576995988b-qcnc8" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.727758 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpw8w\" (UniqueName: \"kubernetes.io/projected/e7c2322e-e6a7-4745-ab83-4ba56575e037-kube-api-access-wpw8w\") pod \"manila-operator-controller-manager-7775d87d9d-hzqch\" (UID: \"e7c2322e-e6a7-4745-ab83-4ba56575e037\") " pod="openstack-operators/manila-operator-controller-manager-7775d87d9d-hzqch" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.742361 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-pqj67"] Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.743482 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-pqj67" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.748009 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-ggf4s" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.751424 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwk4m\" (UniqueName: \"kubernetes.io/projected/0b11c2a0-977e-46f8-bab7-5d812c8a35f9-kube-api-access-wwk4m\") pod \"mariadb-operator-controller-manager-67bf948998-5zgft\" (UID: \"0b11c2a0-977e-46f8-bab7-5d812c8a35f9\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-5zgft" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.763301 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9svg\" (UniqueName: \"kubernetes.io/projected/1fd01978-b3df-4a1c-a650-e6b182389a8d-kube-api-access-h9svg\") pod \"neutron-operator-controller-manager-576995988b-qcnc8\" (UID: \"1fd01978-b3df-4a1c-a650-e6b182389a8d\") " pod="openstack-operators/neutron-operator-controller-manager-576995988b-qcnc8" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.763826 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpw8w\" (UniqueName: \"kubernetes.io/projected/e7c2322e-e6a7-4745-ab83-4ba56575e037-kube-api-access-wpw8w\") pod \"manila-operator-controller-manager-7775d87d9d-hzqch\" (UID: \"e7c2322e-e6a7-4745-ab83-4ba56575e037\") " pod="openstack-operators/manila-operator-controller-manager-7775d87d9d-hzqch" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.768719 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww5fd\" (UniqueName: \"kubernetes.io/projected/546b235a-9844-4eff-9e93-91fc4c8c1c0c-kube-api-access-ww5fd\") pod \"nova-operator-controller-manager-5644b66645-mpkrm\" (UID: \"546b235a-9844-4eff-9e93-91fc4c8c1c0c\") " pod="openstack-operators/nova-operator-controller-manager-5644b66645-mpkrm" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.773976 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dvqp4s"] Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.775330 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dvqp4s" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.778560 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-np7xx" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.778743 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.780649 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-pqj67"] Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.788462 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dvqp4s"] Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.796608 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-dntn6" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.797450 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-xc8xn"] Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.798154 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xc8xn" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.800192 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-9fsvw" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.807546 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-7b89fdf75b-xv5k7"] Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.815387 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-7b89fdf75b-xv5k7" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.816408 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-xc8xn"] Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.819169 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5d77f4dbc9-v5pw2" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.820031 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-v8jsl" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.828829 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9xq7\" (UniqueName: \"kubernetes.io/projected/c43d0dc6-7406-4225-93b3-6779c68940f8-kube-api-access-r9xq7\") pod \"placement-operator-controller-manager-5b964cf4cd-xc8xn\" (UID: \"c43d0dc6-7406-4225-93b3-6779c68940f8\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xc8xn" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.828896 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmx5p\" (UniqueName: \"kubernetes.io/projected/faa2ece3-95a2-43c2-935b-10cc966e7292-kube-api-access-xmx5p\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dvqp4s\" (UID: \"faa2ece3-95a2-43c2-935b-10cc966e7292\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dvqp4s" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.828933 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxrkz\" (UniqueName: \"kubernetes.io/projected/ad6fe491-a355-480a-8c88-ec835b469c44-kube-api-access-cxrkz\") pod \"ovn-operator-controller-manager-788c46999f-pqj67\" (UID: \"ad6fe491-a355-480a-8c88-ec835b469c44\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-pqj67" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.828982 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hcc2\" (UniqueName: \"kubernetes.io/projected/31505f8d-d5db-47b0-a3c7-38b45e6a6997-kube-api-access-6hcc2\") pod \"octavia-operator-controller-manager-7b89ddb58-gxpg7\" (UID: 
\"31505f8d-d5db-47b0-a3c7-38b45e6a6997\") " pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-gxpg7" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.829044 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/faa2ece3-95a2-43c2-935b-10cc966e7292-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dvqp4s\" (UID: \"faa2ece3-95a2-43c2-935b-10cc966e7292\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dvqp4s" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.829073 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xshwf\" (UniqueName: \"kubernetes.io/projected/88179082-d6af-4a6c-a159-262a4928c4c3-kube-api-access-xshwf\") pod \"swift-operator-controller-manager-7b89fdf75b-xv5k7\" (UID: \"88179082-d6af-4a6c-a159-262a4928c4c3\") " pod="openstack-operators/swift-operator-controller-manager-7b89fdf75b-xv5k7" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.841515 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.841594 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.851060 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-rgs76" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.858996 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-7b89fdf75b-xv5k7"] Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.861450 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hcc2\" (UniqueName: \"kubernetes.io/projected/31505f8d-d5db-47b0-a3c7-38b45e6a6997-kube-api-access-6hcc2\") pod \"octavia-operator-controller-manager-7b89ddb58-gxpg7\" (UID: \"31505f8d-d5db-47b0-a3c7-38b45e6a6997\") " pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-gxpg7" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.870362 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-565849b54-th2wj"] Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.871828 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-565849b54-th2wj" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.874844 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-vxgjn" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.876670 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-565849b54-th2wj"] Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.880657 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-pg282" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.899208 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-g6hcr"] Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.899967 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-g6hcr" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.901955 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-kc5nz" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.916308 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-g6hcr"] Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.929814 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmx5p\" (UniqueName: \"kubernetes.io/projected/faa2ece3-95a2-43c2-935b-10cc966e7292-kube-api-access-xmx5p\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dvqp4s\" (UID: \"faa2ece3-95a2-43c2-935b-10cc966e7292\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dvqp4s" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.929877 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxrkz\" (UniqueName: \"kubernetes.io/projected/ad6fe491-a355-480a-8c88-ec835b469c44-kube-api-access-cxrkz\") pod \"ovn-operator-controller-manager-788c46999f-pqj67\" (UID: \"ad6fe491-a355-480a-8c88-ec835b469c44\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-pqj67" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.929920 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk5w9\" (UniqueName: \"kubernetes.io/projected/4ad0215b-c9cf-46bd-aee5-a3099f5fb8e7-kube-api-access-gk5w9\") pod \"telemetry-operator-controller-manager-565849b54-th2wj\" (UID: \"4ad0215b-c9cf-46bd-aee5-a3099f5fb8e7\") " pod="openstack-operators/telemetry-operator-controller-manager-565849b54-th2wj" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.929986 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/faa2ece3-95a2-43c2-935b-10cc966e7292-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dvqp4s\" (UID: \"faa2ece3-95a2-43c2-935b-10cc966e7292\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dvqp4s" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.930021 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xshwf\" (UniqueName: \"kubernetes.io/projected/88179082-d6af-4a6c-a159-262a4928c4c3-kube-api-access-xshwf\") pod \"swift-operator-controller-manager-7b89fdf75b-xv5k7\" (UID: \"88179082-d6af-4a6c-a159-262a4928c4c3\") " pod="openstack-operators/swift-operator-controller-manager-7b89fdf75b-xv5k7" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.930061 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9xq7\" (UniqueName: \"kubernetes.io/projected/c43d0dc6-7406-4225-93b3-6779c68940f8-kube-api-access-r9xq7\") pod \"placement-operator-controller-manager-5b964cf4cd-xc8xn\" (UID: \"c43d0dc6-7406-4225-93b3-6779c68940f8\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xc8xn" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.930089 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlb5z\" (UniqueName: \"kubernetes.io/projected/19963467-1169-4cc6-99f8-efadadfcba2e-kube-api-access-rlb5z\") pod \"test-operator-controller-manager-56f8bfcd9f-g6hcr\" (UID: \"19963467-1169-4cc6-99f8-efadadfcba2e\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-g6hcr" Feb 02 21:36:52 crc kubenswrapper[4789]: E0202 21:36:52.930666 4789 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 21:36:52 crc kubenswrapper[4789]: E0202 21:36:52.930713 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faa2ece3-95a2-43c2-935b-10cc966e7292-cert podName:faa2ece3-95a2-43c2-935b-10cc966e7292 nodeName:}" failed. No retries permitted until 2026-02-02 21:36:53.430696117 +0000 UTC m=+1033.725721136 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/faa2ece3-95a2-43c2-935b-10cc966e7292-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dvqp4s" (UID: "faa2ece3-95a2-43c2-935b-10cc966e7292") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.935330 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-586b95b788-jl54m"] Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.936082 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-jl54m" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.947286 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-586b95b788-jl54m"] Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.948426 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-5zgft" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.956955 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-b8vjj" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.984238 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-576995988b-qcnc8" Feb 02 21:36:52 crc kubenswrapper[4789]: I0202 21:36:52.988261 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-xn5gf" Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.001760 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5644b66645-mpkrm" Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.009638 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xshwf\" (UniqueName: \"kubernetes.io/projected/88179082-d6af-4a6c-a159-262a4928c4c3-kube-api-access-xshwf\") pod \"swift-operator-controller-manager-7b89fdf75b-xv5k7\" (UID: \"88179082-d6af-4a6c-a159-262a4928c4c3\") " pod="openstack-operators/swift-operator-controller-manager-7b89fdf75b-xv5k7" Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.011396 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmx5p\" (UniqueName: \"kubernetes.io/projected/faa2ece3-95a2-43c2-935b-10cc966e7292-kube-api-access-xmx5p\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dvqp4s\" (UID: \"faa2ece3-95a2-43c2-935b-10cc966e7292\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dvqp4s" Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.018911 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9xq7\" (UniqueName: \"kubernetes.io/projected/c43d0dc6-7406-4225-93b3-6779c68940f8-kube-api-access-r9xq7\") pod \"placement-operator-controller-manager-5b964cf4cd-xc8xn\" (UID: \"c43d0dc6-7406-4225-93b3-6779c68940f8\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xc8xn" Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.022782 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxrkz\" (UniqueName: \"kubernetes.io/projected/ad6fe491-a355-480a-8c88-ec835b469c44-kube-api-access-cxrkz\") pod \"ovn-operator-controller-manager-788c46999f-pqj67\" (UID: \"ad6fe491-a355-480a-8c88-ec835b469c44\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-pqj67" Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.031066 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlb5z\" (UniqueName: \"kubernetes.io/projected/19963467-1169-4cc6-99f8-efadadfcba2e-kube-api-access-rlb5z\") pod \"test-operator-controller-manager-56f8bfcd9f-g6hcr\" (UID: \"19963467-1169-4cc6-99f8-efadadfcba2e\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-g6hcr" Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.031187 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gk5w9\" (UniqueName: \"kubernetes.io/projected/4ad0215b-c9cf-46bd-aee5-a3099f5fb8e7-kube-api-access-gk5w9\") pod \"telemetry-operator-controller-manager-565849b54-th2wj\" (UID: \"4ad0215b-c9cf-46bd-aee5-a3099f5fb8e7\") " pod="openstack-operators/telemetry-operator-controller-manager-565849b54-th2wj" Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.033539 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-79966df5f8-95whl"] Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.034388 4789 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-gxpg7" Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.034650 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-79966df5f8-95whl" Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.040809 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-5rr4c" Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.041007 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.041125 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.045873 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-79966df5f8-95whl"] Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.053237 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlb5z\" (UniqueName: \"kubernetes.io/projected/19963467-1169-4cc6-99f8-efadadfcba2e-kube-api-access-rlb5z\") pod \"test-operator-controller-manager-56f8bfcd9f-g6hcr\" (UID: \"19963467-1169-4cc6-99f8-efadadfcba2e\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-g6hcr" Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.060281 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk5w9\" (UniqueName: \"kubernetes.io/projected/4ad0215b-c9cf-46bd-aee5-a3099f5fb8e7-kube-api-access-gk5w9\") pod \"telemetry-operator-controller-manager-565849b54-th2wj\" (UID: \"4ad0215b-c9cf-46bd-aee5-a3099f5fb8e7\") " pod="openstack-operators/telemetry-operator-controller-manager-565849b54-th2wj" Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.060639 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7775d87d9d-hzqch" Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.072564 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-pqj67" Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.091424 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pn28p"] Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.092359 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pn28p" Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.095210 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-zr69q" Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.113923 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pn28p"] Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.126489 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-866f9bb544-47rnc"] Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.134235 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c8853e21-c77c-4220-acb8-8e469cbca718-cert\") pod \"infra-operator-controller-manager-79955696d6-r68x4\" (UID: \"c8853e21-c77c-4220-acb8-8e469cbca718\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-r68x4" Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.134296 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj2h4\" (UniqueName: \"kubernetes.io/projected/c82b99fc-84c7-4ff7-9662-c7cbae1d9ae5-kube-api-access-rj2h4\") pod \"rabbitmq-cluster-operator-manager-668c99d594-pn28p\" (UID: \"c82b99fc-84c7-4ff7-9662-c7cbae1d9ae5\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pn28p" Feb 02 21:36:53 crc kubenswrapper[4789]: E0202 21:36:53.134430 4789 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 21:36:53 crc kubenswrapper[4789]: E0202 21:36:53.134500 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8853e21-c77c-4220-acb8-8e469cbca718-cert podName:c8853e21-c77c-4220-acb8-8e469cbca718 nodeName:}" failed. No retries permitted until 2026-02-02 21:36:54.134480142 +0000 UTC m=+1034.429505151 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c8853e21-c77c-4220-acb8-8e469cbca718-cert") pod "infra-operator-controller-manager-79955696d6-r68x4" (UID: "c8853e21-c77c-4220-acb8-8e469cbca718") : secret "infra-operator-webhook-server-cert" not found Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.135371 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfbsk\" (UniqueName: \"kubernetes.io/projected/3b005885-1fa6-4f6b-b928-b0da5cd41798-kube-api-access-tfbsk\") pod \"watcher-operator-controller-manager-586b95b788-jl54m\" (UID: \"3b005885-1fa6-4f6b-b928-b0da5cd41798\") " pod="openstack-operators/watcher-operator-controller-manager-586b95b788-jl54m" Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.148146 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-fc589b45f-7l6fz"] Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.154249 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xc8xn" Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.176559 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-7b89fdf75b-xv5k7" Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.213294 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-565849b54-th2wj" Feb 02 21:36:53 crc kubenswrapper[4789]: W0202 21:36:53.214751 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56cea7a2_1c74_45e8_ae6e_e5a30b71df3e.slice/crio-7d76eb0e75bd159221f530d57014fdd4f57586ddd4636bd5bceb24123a6df497 WatchSource:0}: Error finding container 7d76eb0e75bd159221f530d57014fdd4f57586ddd4636bd5bceb24123a6df497: Status 404 returned error can't find the container with id 7d76eb0e75bd159221f530d57014fdd4f57586ddd4636bd5bceb24123a6df497 Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.223729 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-g6hcr" Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.240370 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7709193e-e11f-49dc-9ffc-be57f3d0b898-webhook-certs\") pod \"openstack-operator-controller-manager-79966df5f8-95whl\" (UID: \"7709193e-e11f-49dc-9ffc-be57f3d0b898\") " pod="openstack-operators/openstack-operator-controller-manager-79966df5f8-95whl" Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.240447 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfbsk\" (UniqueName: \"kubernetes.io/projected/3b005885-1fa6-4f6b-b928-b0da5cd41798-kube-api-access-tfbsk\") pod \"watcher-operator-controller-manager-586b95b788-jl54m\" (UID: \"3b005885-1fa6-4f6b-b928-b0da5cd41798\") " pod="openstack-operators/watcher-operator-controller-manager-586b95b788-jl54m" Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.240475 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg9nk\" (UniqueName: \"kubernetes.io/projected/7709193e-e11f-49dc-9ffc-be57f3d0b898-kube-api-access-zg9nk\") pod \"openstack-operator-controller-manager-79966df5f8-95whl\" (UID: \"7709193e-e11f-49dc-9ffc-be57f3d0b898\") " pod="openstack-operators/openstack-operator-controller-manager-79966df5f8-95whl" Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.240546 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7709193e-e11f-49dc-9ffc-be57f3d0b898-metrics-certs\") pod \"openstack-operator-controller-manager-79966df5f8-95whl\" (UID: \"7709193e-e11f-49dc-9ffc-be57f3d0b898\") " pod="openstack-operators/openstack-operator-controller-manager-79966df5f8-95whl" Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.240743 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj2h4\" (UniqueName: \"kubernetes.io/projected/c82b99fc-84c7-4ff7-9662-c7cbae1d9ae5-kube-api-access-rj2h4\") pod \"rabbitmq-cluster-operator-manager-668c99d594-pn28p\" (UID: \"c82b99fc-84c7-4ff7-9662-c7cbae1d9ae5\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pn28p" Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.261397 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-tfbsk\" (UniqueName: \"kubernetes.io/projected/3b005885-1fa6-4f6b-b928-b0da5cd41798-kube-api-access-tfbsk\") pod \"watcher-operator-controller-manager-586b95b788-jl54m\" (UID: \"3b005885-1fa6-4f6b-b928-b0da5cd41798\") " pod="openstack-operators/watcher-operator-controller-manager-586b95b788-jl54m" Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.263088 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj2h4\" (UniqueName: \"kubernetes.io/projected/c82b99fc-84c7-4ff7-9662-c7cbae1d9ae5-kube-api-access-rj2h4\") pod \"rabbitmq-cluster-operator-manager-668c99d594-pn28p\" (UID: \"c82b99fc-84c7-4ff7-9662-c7cbae1d9ae5\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pn28p" Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.324694 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-jl54m" Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.342508 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg9nk\" (UniqueName: \"kubernetes.io/projected/7709193e-e11f-49dc-9ffc-be57f3d0b898-kube-api-access-zg9nk\") pod \"openstack-operator-controller-manager-79966df5f8-95whl\" (UID: \"7709193e-e11f-49dc-9ffc-be57f3d0b898\") " pod="openstack-operators/openstack-operator-controller-manager-79966df5f8-95whl" Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.342572 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7709193e-e11f-49dc-9ffc-be57f3d0b898-metrics-certs\") pod \"openstack-operator-controller-manager-79966df5f8-95whl\" (UID: \"7709193e-e11f-49dc-9ffc-be57f3d0b898\") " pod="openstack-operators/openstack-operator-controller-manager-79966df5f8-95whl" Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.342771 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7709193e-e11f-49dc-9ffc-be57f3d0b898-webhook-certs\") pod \"openstack-operator-controller-manager-79966df5f8-95whl\" (UID: \"7709193e-e11f-49dc-9ffc-be57f3d0b898\") " pod="openstack-operators/openstack-operator-controller-manager-79966df5f8-95whl" Feb 02 21:36:53 crc kubenswrapper[4789]: E0202 21:36:53.342822 4789 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 21:36:53 crc kubenswrapper[4789]: E0202 21:36:53.342908 4789 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 21:36:53 crc kubenswrapper[4789]: E0202 21:36:53.342920 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7709193e-e11f-49dc-9ffc-be57f3d0b898-metrics-certs podName:7709193e-e11f-49dc-9ffc-be57f3d0b898 nodeName:}" failed. No retries permitted until 2026-02-02 21:36:53.842898698 +0000 UTC m=+1034.137923767 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7709193e-e11f-49dc-9ffc-be57f3d0b898-metrics-certs") pod "openstack-operator-controller-manager-79966df5f8-95whl" (UID: "7709193e-e11f-49dc-9ffc-be57f3d0b898") : secret "metrics-server-cert" not found Feb 02 21:36:53 crc kubenswrapper[4789]: E0202 21:36:53.342985 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7709193e-e11f-49dc-9ffc-be57f3d0b898-webhook-certs podName:7709193e-e11f-49dc-9ffc-be57f3d0b898 nodeName:}" failed. No retries permitted until 2026-02-02 21:36:53.84296807 +0000 UTC m=+1034.137993149 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7709193e-e11f-49dc-9ffc-be57f3d0b898-webhook-certs") pod "openstack-operator-controller-manager-79966df5f8-95whl" (UID: "7709193e-e11f-49dc-9ffc-be57f3d0b898") : secret "webhook-server-cert" not found Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.366415 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg9nk\" (UniqueName: \"kubernetes.io/projected/7709193e-e11f-49dc-9ffc-be57f3d0b898-kube-api-access-zg9nk\") pod \"openstack-operator-controller-manager-79966df5f8-95whl\" (UID: \"7709193e-e11f-49dc-9ffc-be57f3d0b898\") " pod="openstack-operators/openstack-operator-controller-manager-79966df5f8-95whl" Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.414786 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-8f4c5cb64-dntn6"] Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.419279 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-64469b487f-svkqs"] Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.420889 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pn28p" Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.443456 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/faa2ece3-95a2-43c2-935b-10cc966e7292-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dvqp4s\" (UID: \"faa2ece3-95a2-43c2-935b-10cc966e7292\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dvqp4s" Feb 02 21:36:53 crc kubenswrapper[4789]: E0202 21:36:53.443966 4789 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 21:36:53 crc kubenswrapper[4789]: E0202 21:36:53.444023 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faa2ece3-95a2-43c2-935b-10cc966e7292-cert podName:faa2ece3-95a2-43c2-935b-10cc966e7292 nodeName:}" failed. No retries permitted until 2026-02-02 21:36:54.444004713 +0000 UTC m=+1034.739029732 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/faa2ece3-95a2-43c2-935b-10cc966e7292-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dvqp4s" (UID: "faa2ece3-95a2-43c2-935b-10cc966e7292") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.502753 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-65dc6c8d9c-rgs76"] Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.539536 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5d77f4dbc9-v5pw2"] Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.546957 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-pg282"] Feb 02 21:36:53 crc kubenswrapper[4789]: W0202 21:36:53.612243 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bdf708d_44ed_44cf_af93_f8a2aec7e9ed.slice/crio-26017631492f67d51a749a66aea20d0405f3a7b28eb11ca78a1801584e94e22c WatchSource:0}: Error finding container 26017631492f67d51a749a66aea20d0405f3a7b28eb11ca78a1801584e94e22c: Status 404 returned error can't find the container with id 26017631492f67d51a749a66aea20d0405f3a7b28eb11ca78a1801584e94e22c Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.683867 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-dntn6" event={"ID":"46f273e2-4f9b-4436-815b-72fcfd1f0b96","Type":"ContainerStarted","Data":"13c7303e3d07a9c35ec0ee38477bccdafbfd819d766ec2efb9d878876de4170c"} Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.684605 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-7l6fz" event={"ID":"56cea7a2-1c74-45e8-ae6e-e5a30b71df3e","Type":"ContainerStarted","Data":"7d76eb0e75bd159221f530d57014fdd4f57586ddd4636bd5bceb24123a6df497"} Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.693791 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-pg282" event={"ID":"1ac077aa-b2d2-41b7-aa3d-061e3e7b41dc","Type":"ContainerStarted","Data":"59141c90b7999c6fb123429c8b8ed0c1f0adb6e116429197e73e334fd40701ff"} Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.696464 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-svkqs" event={"ID":"bd53908a-497a-4e78-aa27-d608e94d1723","Type":"ContainerStarted","Data":"49dcbaa9e18ba4183013d1980d3559f5d8c6716e19699d074832438cbf2b1d28"} Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.700168 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-866f9bb544-47rnc" event={"ID":"0a1fe831-76ff-4718-9217-34c72110e718","Type":"ContainerStarted","Data":"4c2e4ee85ac120a92901fff093ef5073ea2436ef56f9d3b4022a1d0afd4f7dbb"} Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.701108 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-rgs76" event={"ID":"0bdf708d-44ed-44cf-af93-f8a2aec7e9ed","Type":"ContainerStarted","Data":"26017631492f67d51a749a66aea20d0405f3a7b28eb11ca78a1801584e94e22c"} Feb 02 21:36:53 
crc kubenswrapper[4789]: I0202 21:36:53.702789 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5d77f4dbc9-v5pw2" event={"ID":"92bfe813-4e92-44da-8e1a-092164813972","Type":"ContainerStarted","Data":"a40aabc6a0406dcdae3e5bb9aab063f5d0b884a4967e8a87a29de4221a51e479"} Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.849735 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7709193e-e11f-49dc-9ffc-be57f3d0b898-webhook-certs\") pod \"openstack-operator-controller-manager-79966df5f8-95whl\" (UID: \"7709193e-e11f-49dc-9ffc-be57f3d0b898\") " pod="openstack-operators/openstack-operator-controller-manager-79966df5f8-95whl" Feb 02 21:36:53 crc kubenswrapper[4789]: I0202 21:36:53.849807 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7709193e-e11f-49dc-9ffc-be57f3d0b898-metrics-certs\") pod \"openstack-operator-controller-manager-79966df5f8-95whl\" (UID: \"7709193e-e11f-49dc-9ffc-be57f3d0b898\") " pod="openstack-operators/openstack-operator-controller-manager-79966df5f8-95whl" Feb 02 21:36:53 crc kubenswrapper[4789]: E0202 21:36:53.849942 4789 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 21:36:53 crc kubenswrapper[4789]: E0202 21:36:53.850017 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7709193e-e11f-49dc-9ffc-be57f3d0b898-webhook-certs podName:7709193e-e11f-49dc-9ffc-be57f3d0b898 nodeName:}" failed. No retries permitted until 2026-02-02 21:36:54.849999569 +0000 UTC m=+1035.145024588 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7709193e-e11f-49dc-9ffc-be57f3d0b898-webhook-certs") pod "openstack-operator-controller-manager-79966df5f8-95whl" (UID: "7709193e-e11f-49dc-9ffc-be57f3d0b898") : secret "webhook-server-cert" not found Feb 02 21:36:53 crc kubenswrapper[4789]: E0202 21:36:53.849948 4789 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 21:36:53 crc kubenswrapper[4789]: E0202 21:36:53.850198 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7709193e-e11f-49dc-9ffc-be57f3d0b898-metrics-certs podName:7709193e-e11f-49dc-9ffc-be57f3d0b898 nodeName:}" failed. No retries permitted until 2026-02-02 21:36:54.850167313 +0000 UTC m=+1035.145192432 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7709193e-e11f-49dc-9ffc-be57f3d0b898-metrics-certs") pod "openstack-operator-controller-manager-79966df5f8-95whl" (UID: "7709193e-e11f-49dc-9ffc-be57f3d0b898") : secret "metrics-server-cert" not found Feb 02 21:36:54 crc kubenswrapper[4789]: I0202 21:36:54.015059 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7b89ddb58-gxpg7"] Feb 02 21:36:54 crc kubenswrapper[4789]: W0202 21:36:54.027644 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31505f8d_d5db_47b0_a3c7_38b45e6a6997.slice/crio-c6c1f1cef8f731f699590d7e9390c721a1c6fffd6da02af382c121653e4c1698 WatchSource:0}: Error finding container c6c1f1cef8f731f699590d7e9390c721a1c6fffd6da02af382c121653e4c1698: Status 404 returned error can't find the container with id c6c1f1cef8f731f699590d7e9390c721a1c6fffd6da02af382c121653e4c1698 Feb 02 21:36:54 crc kubenswrapper[4789]: I0202 21:36:54.029020 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5644b66645-mpkrm"] Feb 02 21:36:54 crc kubenswrapper[4789]: I0202 21:36:54.037878 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-5zgft"] Feb 02 21:36:54 crc kubenswrapper[4789]: W0202 21:36:54.044911 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b11c2a0_977e_46f8_bab7_5d812c8a35f9.slice/crio-2d4d3da2ace232c2e07276d3e483992dd7f346fc14e010f388c902095c74e314 WatchSource:0}: Error finding container 2d4d3da2ace232c2e07276d3e483992dd7f346fc14e010f388c902095c74e314: Status 404 returned error can't find the container with id 2d4d3da2ace232c2e07276d3e483992dd7f346fc14e010f388c902095c74e314 Feb 02 21:36:54 crc kubenswrapper[4789]: I0202 21:36:54.048059 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7775d87d9d-hzqch"] Feb 02 21:36:54 crc kubenswrapper[4789]: I0202 21:36:54.056257 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-87bd9d46f-b8vjj"] Feb 02 21:36:54 crc kubenswrapper[4789]: I0202 21:36:54.079962 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-7b89fdf75b-xv5k7"] Feb 02 21:36:54 crc kubenswrapper[4789]: I0202 21:36:54.087433 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-565849b54-th2wj"] Feb 02 21:36:54 crc kubenswrapper[4789]: W0202 21:36:54.087539 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27a24f44_b811_4f28_84b5_88504deaae1c.slice/crio-bcb8cf8b1d21d54ca2da95ab72279f3a99043c0fbe513a0335f35b591817d388 WatchSource:0}: Error finding container bcb8cf8b1d21d54ca2da95ab72279f3a99043c0fbe513a0335f35b591817d388: Status 404 returned error can't find the container with id bcb8cf8b1d21d54ca2da95ab72279f3a99043c0fbe513a0335f35b591817d388 Feb 02 21:36:54 crc kubenswrapper[4789]: W0202 21:36:54.088725 4789 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7c2322e_e6a7_4745_ab83_4ba56575e037.slice/crio-e52edc8f8e417051fbac016ef6e35facbd1c34e0116ffa754af520006dd7c6f3 WatchSource:0}: Error finding container e52edc8f8e417051fbac016ef6e35facbd1c34e0116ffa754af520006dd7c6f3: Status 404 returned error can't find the container with id e52edc8f8e417051fbac016ef6e35facbd1c34e0116ffa754af520006dd7c6f3 Feb 02 21:36:54 crc kubenswrapper[4789]: I0202 21:36:54.096870 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-xc8xn"] Feb 02 21:36:54 crc kubenswrapper[4789]: I0202 21:36:54.104691 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-pqj67"] Feb 02 21:36:54 crc kubenswrapper[4789]: I0202 21:36:54.111547 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-586b95b788-jl54m"] Feb 02 21:36:54 crc kubenswrapper[4789]: W0202 21:36:54.126384 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88179082_d6af_4a6c_a159_262a4928c4c3.slice/crio-cde8993d473e20d648e31eccdaf7de4afbee13719115eccbb8d55cff08566cec WatchSource:0}: Error finding container cde8993d473e20d648e31eccdaf7de4afbee13719115eccbb8d55cff08566cec: Status 404 returned error can't find the container with id cde8993d473e20d648e31eccdaf7de4afbee13719115eccbb8d55cff08566cec Feb 02 21:36:54 crc kubenswrapper[4789]: I0202 21:36:54.126748 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-576995988b-qcnc8"] Feb 02 21:36:54 crc kubenswrapper[4789]: W0202 21:36:54.128016 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad6fe491_a355_480a_8c88_ec835b469c44.slice/crio-0536232aaec19c81d796c8f7439d57e9cb4b279e0f5facd141b80254ca484024 WatchSource:0}: Error finding container 0536232aaec19c81d796c8f7439d57e9cb4b279e0f5facd141b80254ca484024: Status 404 returned error can't find the container with id 0536232aaec19c81d796c8f7439d57e9cb4b279e0f5facd141b80254ca484024 Feb 02 21:36:54 crc kubenswrapper[4789]: E0202 21:36:54.139178 4789 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/swift-operator@sha256:8f8c3f4484960b48b4aa30b66deb78e54443e5d0a91ce7e34f3cd34675d7eda4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xshwf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-7b89fdf75b-xv5k7_openstack-operators(88179082-d6af-4a6c-a159-262a4928c4c3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 02 21:36:54 crc kubenswrapper[4789]: E0202 21:36:54.141140 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-7b89fdf75b-xv5k7" podUID="88179082-d6af-4a6c-a159-262a4928c4c3" Feb 02 21:36:54 crc kubenswrapper[4789]: W0202 21:36:54.150489 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fd01978_b3df_4a1c_a650_e6b182389a8d.slice/crio-526f6cf4abbc9dbd769fa55b503ecf38ac6b41fc99aa7c9adba7f743aecf8f91 WatchSource:0}: Error finding container 526f6cf4abbc9dbd769fa55b503ecf38ac6b41fc99aa7c9adba7f743aecf8f91: Status 404 returned error can't find the container with id 526f6cf4abbc9dbd769fa55b503ecf38ac6b41fc99aa7c9adba7f743aecf8f91 Feb 02 21:36:54 crc kubenswrapper[4789]: I0202 21:36:54.152877 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c8853e21-c77c-4220-acb8-8e469cbca718-cert\") pod \"infra-operator-controller-manager-79955696d6-r68x4\" (UID: \"c8853e21-c77c-4220-acb8-8e469cbca718\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-r68x4" Feb 02 21:36:54 crc kubenswrapper[4789]: E0202 21:36:54.153023 4789 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 21:36:54 crc kubenswrapper[4789]: E0202 21:36:54.153165 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8853e21-c77c-4220-acb8-8e469cbca718-cert podName:c8853e21-c77c-4220-acb8-8e469cbca718 nodeName:}" failed. No retries permitted until 2026-02-02 21:36:56.153067227 +0000 UTC m=+1036.448092246 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c8853e21-c77c-4220-acb8-8e469cbca718-cert") pod "infra-operator-controller-manager-79955696d6-r68x4" (UID: "c8853e21-c77c-4220-acb8-8e469cbca718") : secret "infra-operator-webhook-server-cert" not found Feb 02 21:36:54 crc kubenswrapper[4789]: W0202 21:36:54.158130 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b005885_1fa6_4f6b_b928_b0da5cd41798.slice/crio-f1aa53ae1a54982aa9774f874a453618710715bb167f6578173603eb881b95c1 WatchSource:0}: Error finding container f1aa53ae1a54982aa9774f874a453618710715bb167f6578173603eb881b95c1: Status 404 returned error can't find the container with id f1aa53ae1a54982aa9774f874a453618710715bb167f6578173603eb881b95c1 Feb 02 21:36:54 crc kubenswrapper[4789]: E0202 21:36:54.158660 4789 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/neutron-operator@sha256:32d8aa084f9ca6788a465b65a4575f7a3bb38255c30c849c955e9173b4351ef2,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h9svg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-576995988b-qcnc8_openstack-operators(1fd01978-b3df-4a1c-a650-e6b182389a8d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 02 21:36:54 crc kubenswrapper[4789]: E0202 21:36:54.159736 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull 
QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-576995988b-qcnc8" podUID="1fd01978-b3df-4a1c-a650-e6b182389a8d" Feb 02 21:36:54 crc kubenswrapper[4789]: E0202 21:36:54.161330 4789 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/watcher-operator@sha256:3fd1f7623a4b32505f51f329116f7e13bb4cfd320e920961a5b86441a89326d6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tfbsk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-586b95b788-jl54m_openstack-operators(3b005885-1fa6-4f6b-b928-b0da5cd41798): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 02 21:36:54 crc kubenswrapper[4789]: E0202 21:36:54.162497 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-jl54m" podUID="3b005885-1fa6-4f6b-b928-b0da5cd41798" Feb 02 21:36:54 crc kubenswrapper[4789]: I0202 21:36:54.259274 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-g6hcr"] Feb 02 21:36:54 crc kubenswrapper[4789]: I0202 21:36:54.263228 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pn28p"] Feb 02 21:36:54 crc kubenswrapper[4789]: W0202 21:36:54.271318 4789 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19963467_1169_4cc6_99f8_efadadfcba2e.slice/crio-fe553d42991db1fcd2f3cd39771782b970b8a6dff055a7e52309bf7a905c0868 WatchSource:0}: Error finding container fe553d42991db1fcd2f3cd39771782b970b8a6dff055a7e52309bf7a905c0868: Status 404 returned error can't find the container with id fe553d42991db1fcd2f3cd39771782b970b8a6dff055a7e52309bf7a905c0868 Feb 02 21:36:54 crc kubenswrapper[4789]: E0202 21:36:54.274279 4789 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rlb5z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-g6hcr_openstack-operators(19963467-1169-4cc6-99f8-efadadfcba2e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 02 21:36:54 crc kubenswrapper[4789]: W0202 21:36:54.274715 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc82b99fc_84c7_4ff7_9662_c7cbae1d9ae5.slice/crio-b154c31d76a792952530202aea710d1d57fe69eb726721660f2883b81a6afa90 WatchSource:0}: Error finding container b154c31d76a792952530202aea710d1d57fe69eb726721660f2883b81a6afa90: Status 404 returned error can't find the container with id b154c31d76a792952530202aea710d1d57fe69eb726721660f2883b81a6afa90 Feb 02 21:36:54 crc kubenswrapper[4789]: E0202 21:36:54.275821 
4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-g6hcr" podUID="19963467-1169-4cc6-99f8-efadadfcba2e" Feb 02 21:36:54 crc kubenswrapper[4789]: E0202 21:36:54.280086 4789 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rj2h4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-pn28p_openstack-operators(c82b99fc-84c7-4ff7-9662-c7cbae1d9ae5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 02 21:36:54 crc kubenswrapper[4789]: E0202 21:36:54.281188 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pn28p" podUID="c82b99fc-84c7-4ff7-9662-c7cbae1d9ae5" Feb 02 21:36:54 crc kubenswrapper[4789]: I0202 21:36:54.466389 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/faa2ece3-95a2-43c2-935b-10cc966e7292-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dvqp4s\" (UID: \"faa2ece3-95a2-43c2-935b-10cc966e7292\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dvqp4s" Feb 02 21:36:54 crc kubenswrapper[4789]: E0202 21:36:54.466780 4789 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 21:36:54 crc kubenswrapper[4789]: E0202 21:36:54.466875 4789 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faa2ece3-95a2-43c2-935b-10cc966e7292-cert podName:faa2ece3-95a2-43c2-935b-10cc966e7292 nodeName:}" failed. No retries permitted until 2026-02-02 21:36:56.466836299 +0000 UTC m=+1036.761861368 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/faa2ece3-95a2-43c2-935b-10cc966e7292-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dvqp4s" (UID: "faa2ece3-95a2-43c2-935b-10cc966e7292") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 21:36:54 crc kubenswrapper[4789]: I0202 21:36:54.723976 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-gxpg7" event={"ID":"31505f8d-d5db-47b0-a3c7-38b45e6a6997","Type":"ContainerStarted","Data":"c6c1f1cef8f731f699590d7e9390c721a1c6fffd6da02af382c121653e4c1698"} Feb 02 21:36:54 crc kubenswrapper[4789]: I0202 21:36:54.725233 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-565849b54-th2wj" event={"ID":"4ad0215b-c9cf-46bd-aee5-a3099f5fb8e7","Type":"ContainerStarted","Data":"dd9271a48aaabc3edb529c4785846a047b875e4b01bc4ed4979015ad1b8aca38"} Feb 02 21:36:54 crc kubenswrapper[4789]: I0202 21:36:54.726183 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-b8vjj" event={"ID":"27a24f44-b811-4f28-84b5-88504deaae1c","Type":"ContainerStarted","Data":"bcb8cf8b1d21d54ca2da95ab72279f3a99043c0fbe513a0335f35b591817d388"} Feb 02 21:36:54 crc kubenswrapper[4789]: I0202 21:36:54.727012 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7775d87d9d-hzqch" event={"ID":"e7c2322e-e6a7-4745-ab83-4ba56575e037","Type":"ContainerStarted","Data":"e52edc8f8e417051fbac016ef6e35facbd1c34e0116ffa754af520006dd7c6f3"} Feb 02 21:36:54 crc kubenswrapper[4789]: I0202 21:36:54.728203 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-576995988b-qcnc8" event={"ID":"1fd01978-b3df-4a1c-a650-e6b182389a8d","Type":"ContainerStarted","Data":"526f6cf4abbc9dbd769fa55b503ecf38ac6b41fc99aa7c9adba7f743aecf8f91"} Feb 02 21:36:54 crc kubenswrapper[4789]: I0202 21:36:54.729922 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5644b66645-mpkrm" event={"ID":"546b235a-9844-4eff-9e93-91fc4c8c1c0c","Type":"ContainerStarted","Data":"1991f10a645ed53f943b4b8529b869f126c9f8b7503a8b9db0619ba8a2b7d6fe"} Feb 02 21:36:54 crc kubenswrapper[4789]: I0202 21:36:54.730807 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-pqj67" event={"ID":"ad6fe491-a355-480a-8c88-ec835b469c44","Type":"ContainerStarted","Data":"0536232aaec19c81d796c8f7439d57e9cb4b279e0f5facd141b80254ca484024"} Feb 02 21:36:54 crc kubenswrapper[4789]: E0202 21:36:54.731224 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/neutron-operator@sha256:32d8aa084f9ca6788a465b65a4575f7a3bb38255c30c849c955e9173b4351ef2\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-576995988b-qcnc8" podUID="1fd01978-b3df-4a1c-a650-e6b182389a8d" Feb 02 21:36:54 crc kubenswrapper[4789]: I0202 
21:36:54.732905 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-7b89fdf75b-xv5k7" event={"ID":"88179082-d6af-4a6c-a159-262a4928c4c3","Type":"ContainerStarted","Data":"cde8993d473e20d648e31eccdaf7de4afbee13719115eccbb8d55cff08566cec"} Feb 02 21:36:54 crc kubenswrapper[4789]: I0202 21:36:54.733820 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pn28p" event={"ID":"c82b99fc-84c7-4ff7-9662-c7cbae1d9ae5","Type":"ContainerStarted","Data":"b154c31d76a792952530202aea710d1d57fe69eb726721660f2883b81a6afa90"} Feb 02 21:36:54 crc kubenswrapper[4789]: E0202 21:36:54.735121 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pn28p" podUID="c82b99fc-84c7-4ff7-9662-c7cbae1d9ae5" Feb 02 21:36:54 crc kubenswrapper[4789]: E0202 21:36:54.735180 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/swift-operator@sha256:8f8c3f4484960b48b4aa30b66deb78e54443e5d0a91ce7e34f3cd34675d7eda4\\\"\"" pod="openstack-operators/swift-operator-controller-manager-7b89fdf75b-xv5k7" podUID="88179082-d6af-4a6c-a159-262a4928c4c3" Feb 02 21:36:54 crc kubenswrapper[4789]: I0202 21:36:54.742710 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-jl54m" event={"ID":"3b005885-1fa6-4f6b-b928-b0da5cd41798","Type":"ContainerStarted","Data":"f1aa53ae1a54982aa9774f874a453618710715bb167f6578173603eb881b95c1"} Feb 02 21:36:54 crc kubenswrapper[4789]: E0202 21:36:54.745505 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/watcher-operator@sha256:3fd1f7623a4b32505f51f329116f7e13bb4cfd320e920961a5b86441a89326d6\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-jl54m" podUID="3b005885-1fa6-4f6b-b928-b0da5cd41798" Feb 02 21:36:54 crc kubenswrapper[4789]: I0202 21:36:54.750240 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-g6hcr" event={"ID":"19963467-1169-4cc6-99f8-efadadfcba2e","Type":"ContainerStarted","Data":"fe553d42991db1fcd2f3cd39771782b970b8a6dff055a7e52309bf7a905c0868"} Feb 02 21:36:54 crc kubenswrapper[4789]: E0202 21:36:54.754343 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-g6hcr" podUID="19963467-1169-4cc6-99f8-efadadfcba2e" Feb 02 21:36:54 crc kubenswrapper[4789]: I0202 21:36:54.757498 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xc8xn" 
event={"ID":"c43d0dc6-7406-4225-93b3-6779c68940f8","Type":"ContainerStarted","Data":"4a60e2b5424ed517aaecc695930550290131a4877e8f45aa9fb897d1188f7748"} Feb 02 21:36:54 crc kubenswrapper[4789]: I0202 21:36:54.764459 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-5zgft" event={"ID":"0b11c2a0-977e-46f8-bab7-5d812c8a35f9","Type":"ContainerStarted","Data":"2d4d3da2ace232c2e07276d3e483992dd7f346fc14e010f388c902095c74e314"} Feb 02 21:36:54 crc kubenswrapper[4789]: I0202 21:36:54.879252 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7709193e-e11f-49dc-9ffc-be57f3d0b898-metrics-certs\") pod \"openstack-operator-controller-manager-79966df5f8-95whl\" (UID: \"7709193e-e11f-49dc-9ffc-be57f3d0b898\") " pod="openstack-operators/openstack-operator-controller-manager-79966df5f8-95whl" Feb 02 21:36:54 crc kubenswrapper[4789]: I0202 21:36:54.879376 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7709193e-e11f-49dc-9ffc-be57f3d0b898-webhook-certs\") pod \"openstack-operator-controller-manager-79966df5f8-95whl\" (UID: \"7709193e-e11f-49dc-9ffc-be57f3d0b898\") " pod="openstack-operators/openstack-operator-controller-manager-79966df5f8-95whl" Feb 02 21:36:54 crc kubenswrapper[4789]: E0202 21:36:54.879409 4789 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 21:36:54 crc kubenswrapper[4789]: E0202 21:36:54.879474 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7709193e-e11f-49dc-9ffc-be57f3d0b898-metrics-certs podName:7709193e-e11f-49dc-9ffc-be57f3d0b898 nodeName:}" failed. No retries permitted until 2026-02-02 21:36:56.879457582 +0000 UTC m=+1037.174482601 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7709193e-e11f-49dc-9ffc-be57f3d0b898-metrics-certs") pod "openstack-operator-controller-manager-79966df5f8-95whl" (UID: "7709193e-e11f-49dc-9ffc-be57f3d0b898") : secret "metrics-server-cert" not found Feb 02 21:36:54 crc kubenswrapper[4789]: E0202 21:36:54.879490 4789 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 21:36:54 crc kubenswrapper[4789]: E0202 21:36:54.879540 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7709193e-e11f-49dc-9ffc-be57f3d0b898-webhook-certs podName:7709193e-e11f-49dc-9ffc-be57f3d0b898 nodeName:}" failed. No retries permitted until 2026-02-02 21:36:56.879523574 +0000 UTC m=+1037.174548593 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7709193e-e11f-49dc-9ffc-be57f3d0b898-webhook-certs") pod "openstack-operator-controller-manager-79966df5f8-95whl" (UID: "7709193e-e11f-49dc-9ffc-be57f3d0b898") : secret "webhook-server-cert" not found Feb 02 21:36:55 crc kubenswrapper[4789]: E0202 21:36:55.774262 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pn28p" podUID="c82b99fc-84c7-4ff7-9662-c7cbae1d9ae5" Feb 02 21:36:55 crc kubenswrapper[4789]: E0202 21:36:55.774643 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/swift-operator@sha256:8f8c3f4484960b48b4aa30b66deb78e54443e5d0a91ce7e34f3cd34675d7eda4\\\"\"" pod="openstack-operators/swift-operator-controller-manager-7b89fdf75b-xv5k7" podUID="88179082-d6af-4a6c-a159-262a4928c4c3" Feb 02 21:36:55 crc kubenswrapper[4789]: E0202 21:36:55.774681 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-g6hcr" podUID="19963467-1169-4cc6-99f8-efadadfcba2e" Feb 02 21:36:55 crc kubenswrapper[4789]: E0202 21:36:55.779090 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/neutron-operator@sha256:32d8aa084f9ca6788a465b65a4575f7a3bb38255c30c849c955e9173b4351ef2\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-576995988b-qcnc8" podUID="1fd01978-b3df-4a1c-a650-e6b182389a8d" Feb 02 21:36:55 crc kubenswrapper[4789]: E0202 21:36:55.779151 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/watcher-operator@sha256:3fd1f7623a4b32505f51f329116f7e13bb4cfd320e920961a5b86441a89326d6\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-jl54m" podUID="3b005885-1fa6-4f6b-b928-b0da5cd41798" Feb 02 21:36:56 crc kubenswrapper[4789]: I0202 21:36:56.224035 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c8853e21-c77c-4220-acb8-8e469cbca718-cert\") pod \"infra-operator-controller-manager-79955696d6-r68x4\" (UID: \"c8853e21-c77c-4220-acb8-8e469cbca718\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-r68x4" Feb 02 21:36:56 crc kubenswrapper[4789]: E0202 21:36:56.224233 4789 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 21:36:56 crc kubenswrapper[4789]: E0202 21:36:56.224306 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8853e21-c77c-4220-acb8-8e469cbca718-cert podName:c8853e21-c77c-4220-acb8-8e469cbca718 nodeName:}" failed. 
No retries permitted until 2026-02-02 21:37:00.224289191 +0000 UTC m=+1040.519314210 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c8853e21-c77c-4220-acb8-8e469cbca718-cert") pod "infra-operator-controller-manager-79955696d6-r68x4" (UID: "c8853e21-c77c-4220-acb8-8e469cbca718") : secret "infra-operator-webhook-server-cert" not found Feb 02 21:36:56 crc kubenswrapper[4789]: I0202 21:36:56.526930 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/faa2ece3-95a2-43c2-935b-10cc966e7292-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dvqp4s\" (UID: \"faa2ece3-95a2-43c2-935b-10cc966e7292\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dvqp4s" Feb 02 21:36:56 crc kubenswrapper[4789]: E0202 21:36:56.527220 4789 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 21:36:56 crc kubenswrapper[4789]: E0202 21:36:56.527294 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faa2ece3-95a2-43c2-935b-10cc966e7292-cert podName:faa2ece3-95a2-43c2-935b-10cc966e7292 nodeName:}" failed. No retries permitted until 2026-02-02 21:37:00.527276767 +0000 UTC m=+1040.822301786 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/faa2ece3-95a2-43c2-935b-10cc966e7292-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dvqp4s" (UID: "faa2ece3-95a2-43c2-935b-10cc966e7292") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 21:36:56 crc kubenswrapper[4789]: I0202 21:36:56.934958 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7709193e-e11f-49dc-9ffc-be57f3d0b898-webhook-certs\") pod \"openstack-operator-controller-manager-79966df5f8-95whl\" (UID: \"7709193e-e11f-49dc-9ffc-be57f3d0b898\") " pod="openstack-operators/openstack-operator-controller-manager-79966df5f8-95whl" Feb 02 21:36:56 crc kubenswrapper[4789]: I0202 21:36:56.935068 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7709193e-e11f-49dc-9ffc-be57f3d0b898-metrics-certs\") pod \"openstack-operator-controller-manager-79966df5f8-95whl\" (UID: \"7709193e-e11f-49dc-9ffc-be57f3d0b898\") " pod="openstack-operators/openstack-operator-controller-manager-79966df5f8-95whl" Feb 02 21:36:56 crc kubenswrapper[4789]: E0202 21:36:56.935076 4789 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 21:36:56 crc kubenswrapper[4789]: E0202 21:36:56.935138 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7709193e-e11f-49dc-9ffc-be57f3d0b898-webhook-certs podName:7709193e-e11f-49dc-9ffc-be57f3d0b898 nodeName:}" failed. No retries permitted until 2026-02-02 21:37:00.935121475 +0000 UTC m=+1041.230146494 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7709193e-e11f-49dc-9ffc-be57f3d0b898-webhook-certs") pod "openstack-operator-controller-manager-79966df5f8-95whl" (UID: "7709193e-e11f-49dc-9ffc-be57f3d0b898") : secret "webhook-server-cert" not found Feb 02 21:36:56 crc kubenswrapper[4789]: E0202 21:36:56.935277 4789 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 21:36:56 crc kubenswrapper[4789]: E0202 21:36:56.935342 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7709193e-e11f-49dc-9ffc-be57f3d0b898-metrics-certs podName:7709193e-e11f-49dc-9ffc-be57f3d0b898 nodeName:}" failed. No retries permitted until 2026-02-02 21:37:00.93532117 +0000 UTC m=+1041.230346229 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7709193e-e11f-49dc-9ffc-be57f3d0b898-metrics-certs") pod "openstack-operator-controller-manager-79966df5f8-95whl" (UID: "7709193e-e11f-49dc-9ffc-be57f3d0b898") : secret "metrics-server-cert" not found Feb 02 21:37:00 crc kubenswrapper[4789]: I0202 21:37:00.284476 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c8853e21-c77c-4220-acb8-8e469cbca718-cert\") pod \"infra-operator-controller-manager-79955696d6-r68x4\" (UID: \"c8853e21-c77c-4220-acb8-8e469cbca718\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-r68x4" Feb 02 21:37:00 crc kubenswrapper[4789]: E0202 21:37:00.284727 4789 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 21:37:00 crc kubenswrapper[4789]: E0202 21:37:00.285117 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8853e21-c77c-4220-acb8-8e469cbca718-cert podName:c8853e21-c77c-4220-acb8-8e469cbca718 nodeName:}" failed. No retries permitted until 2026-02-02 21:37:08.285095206 +0000 UTC m=+1048.580120235 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c8853e21-c77c-4220-acb8-8e469cbca718-cert") pod "infra-operator-controller-manager-79955696d6-r68x4" (UID: "c8853e21-c77c-4220-acb8-8e469cbca718") : secret "infra-operator-webhook-server-cert" not found Feb 02 21:37:00 crc kubenswrapper[4789]: I0202 21:37:00.588933 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/faa2ece3-95a2-43c2-935b-10cc966e7292-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dvqp4s\" (UID: \"faa2ece3-95a2-43c2-935b-10cc966e7292\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dvqp4s" Feb 02 21:37:00 crc kubenswrapper[4789]: E0202 21:37:00.589142 4789 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 21:37:00 crc kubenswrapper[4789]: E0202 21:37:00.589211 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faa2ece3-95a2-43c2-935b-10cc966e7292-cert podName:faa2ece3-95a2-43c2-935b-10cc966e7292 nodeName:}" failed. No retries permitted until 2026-02-02 21:37:08.589195134 +0000 UTC m=+1048.884220153 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/faa2ece3-95a2-43c2-935b-10cc966e7292-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dvqp4s" (UID: "faa2ece3-95a2-43c2-935b-10cc966e7292") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 21:37:00 crc kubenswrapper[4789]: I0202 21:37:00.998247 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7709193e-e11f-49dc-9ffc-be57f3d0b898-webhook-certs\") pod \"openstack-operator-controller-manager-79966df5f8-95whl\" (UID: \"7709193e-e11f-49dc-9ffc-be57f3d0b898\") " pod="openstack-operators/openstack-operator-controller-manager-79966df5f8-95whl" Feb 02 21:37:00 crc kubenswrapper[4789]: I0202 21:37:00.998308 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7709193e-e11f-49dc-9ffc-be57f3d0b898-metrics-certs\") pod \"openstack-operator-controller-manager-79966df5f8-95whl\" (UID: \"7709193e-e11f-49dc-9ffc-be57f3d0b898\") " pod="openstack-operators/openstack-operator-controller-manager-79966df5f8-95whl" Feb 02 21:37:00 crc kubenswrapper[4789]: E0202 21:37:00.998453 4789 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 21:37:00 crc kubenswrapper[4789]: E0202 21:37:00.998524 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7709193e-e11f-49dc-9ffc-be57f3d0b898-webhook-certs podName:7709193e-e11f-49dc-9ffc-be57f3d0b898 nodeName:}" failed. No retries permitted until 2026-02-02 21:37:08.998505393 +0000 UTC m=+1049.293530412 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7709193e-e11f-49dc-9ffc-be57f3d0b898-webhook-certs") pod "openstack-operator-controller-manager-79966df5f8-95whl" (UID: "7709193e-e11f-49dc-9ffc-be57f3d0b898") : secret "webhook-server-cert" not found Feb 02 21:37:00 crc kubenswrapper[4789]: E0202 21:37:00.998461 4789 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 21:37:00 crc kubenswrapper[4789]: E0202 21:37:00.998868 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7709193e-e11f-49dc-9ffc-be57f3d0b898-metrics-certs podName:7709193e-e11f-49dc-9ffc-be57f3d0b898 nodeName:}" failed. No retries permitted until 2026-02-02 21:37:08.998860353 +0000 UTC m=+1049.293885372 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7709193e-e11f-49dc-9ffc-be57f3d0b898-metrics-certs") pod "openstack-operator-controller-manager-79966df5f8-95whl" (UID: "7709193e-e11f-49dc-9ffc-be57f3d0b898") : secret "metrics-server-cert" not found Feb 02 21:37:07 crc kubenswrapper[4789]: I0202 21:37:07.856544 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-b8vjj" event={"ID":"27a24f44-b811-4f28-84b5-88504deaae1c","Type":"ContainerStarted","Data":"3964629cdc16ec206df58ca705c3f7ccce37877abf2a6d572dee2c61bce6bc7d"} Feb 02 21:37:07 crc kubenswrapper[4789]: I0202 21:37:07.857971 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-b8vjj" Feb 02 21:37:07 crc kubenswrapper[4789]: I0202 21:37:07.860923 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-7l6fz" event={"ID":"56cea7a2-1c74-45e8-ae6e-e5a30b71df3e","Type":"ContainerStarted","Data":"80c35e2c7f06041afb731ffb43eb5f1b15b881d70ab72eaefb8409d1ae077dc8"} Feb 02 21:37:07 crc kubenswrapper[4789]: I0202 21:37:07.861271 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-7l6fz" Feb 02 21:37:07 crc kubenswrapper[4789]: I0202 21:37:07.862384 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-svkqs" event={"ID":"bd53908a-497a-4e78-aa27-d608e94d1723","Type":"ContainerStarted","Data":"76b3d39a986343f24645ef40198296a958fb84094756e522db6362d168de1ae2"} Feb 02 21:37:07 crc kubenswrapper[4789]: I0202 21:37:07.862708 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-svkqs" Feb 02 21:37:07 crc kubenswrapper[4789]: I0202 21:37:07.864750 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-pg282" event={"ID":"1ac077aa-b2d2-41b7-aa3d-061e3e7b41dc","Type":"ContainerStarted","Data":"f23fb6f2a7c668009fb496c365a6bf526808adf74f0402011aaa04ec8938ce34"} Feb 02 21:37:07 crc kubenswrapper[4789]: I0202 21:37:07.865080 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-pg282" Feb 02 21:37:07 crc kubenswrapper[4789]: I0202 21:37:07.869227 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-pqj67" event={"ID":"ad6fe491-a355-480a-8c88-ec835b469c44","Type":"ContainerStarted","Data":"d2c8fb199a2e2b6ec8c49041ad46d13d42d56c19288a3cbbcdb352f7027d0af8"} Feb 02 21:37:07 crc kubenswrapper[4789]: I0202 21:37:07.869271 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-pqj67" Feb 02 21:37:07 crc kubenswrapper[4789]: I0202 21:37:07.870772 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-rgs76" event={"ID":"0bdf708d-44ed-44cf-af93-f8a2aec7e9ed","Type":"ContainerStarted","Data":"89701a235ead297d87dcfe853f855e5ad9a3e70379a45539a826e8f87aa29805"} Feb 02 21:37:07 crc kubenswrapper[4789]: I0202 21:37:07.871252 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-rgs76" Feb 02 21:37:07 crc kubenswrapper[4789]: I0202 21:37:07.872716 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xc8xn" event={"ID":"c43d0dc6-7406-4225-93b3-6779c68940f8","Type":"ContainerStarted","Data":"354f357ecad785111287cb0ab3ee5c75b850c7560fab5d70ce90a4c9f986424d"} Feb 02 21:37:07 crc kubenswrapper[4789]: I0202 21:37:07.873030 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xc8xn" Feb 02 21:37:07 crc kubenswrapper[4789]: I0202 21:37:07.874401 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-5zgft" event={"ID":"0b11c2a0-977e-46f8-bab7-5d812c8a35f9","Type":"ContainerStarted","Data":"d359da286b1b9fd41ab60130fdf3f7a37b899aec195c1afa5bc1fd7d69d453b6"} Feb 02 21:37:07 crc kubenswrapper[4789]: I0202 21:37:07.874795 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-5zgft" Feb 02 21:37:07 crc kubenswrapper[4789]: I0202 21:37:07.876377 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7775d87d9d-hzqch" event={"ID":"e7c2322e-e6a7-4745-ab83-4ba56575e037","Type":"ContainerStarted","Data":"afdce275e1ea5f7186eb7c2768f704a1d772cb22feb7a7cbf58d26186f3d4f71"} Feb 02 21:37:07 crc kubenswrapper[4789]: I0202 21:37:07.876703 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7775d87d9d-hzqch" Feb 02 21:37:07 crc kubenswrapper[4789]: I0202 21:37:07.877942 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-dntn6" event={"ID":"46f273e2-4f9b-4436-815b-72fcfd1f0b96","Type":"ContainerStarted","Data":"5234e11141c16f3832c113e9ef0938af694f3de6f67a86e7d73b2b48771f23d0"} Feb 02 21:37:07 crc kubenswrapper[4789]: I0202 21:37:07.878272 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-dntn6" Feb 02 21:37:07 crc kubenswrapper[4789]: I0202 21:37:07.880711 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-565849b54-th2wj" event={"ID":"4ad0215b-c9cf-46bd-aee5-a3099f5fb8e7","Type":"ContainerStarted","Data":"b0d81f6f13b30dc4c18ae2762a362a13ade631444cc25a7fbe4e13add634a47c"} Feb 02 21:37:07 crc kubenswrapper[4789]: I0202 21:37:07.881023 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-565849b54-th2wj" Feb 02 21:37:07 crc kubenswrapper[4789]: I0202 21:37:07.882433 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-866f9bb544-47rnc" event={"ID":"0a1fe831-76ff-4718-9217-34c72110e718","Type":"ContainerStarted","Data":"c26edf41fea96b878a6ff9cc4d1e2a03f71773b6d3015cc18520c3978d8bc064"} Feb 02 21:37:07 crc kubenswrapper[4789]: I0202 21:37:07.882526 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-866f9bb544-47rnc" Feb 02 21:37:07 crc kubenswrapper[4789]: I0202 21:37:07.884116 4789 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/nova-operator-controller-manager-5644b66645-mpkrm" event={"ID":"546b235a-9844-4eff-9e93-91fc4c8c1c0c","Type":"ContainerStarted","Data":"1fca49bdd0739b7fc3c1dca729fd9b7ce1212b3bba9e0999bc17d78d011eca0b"} Feb 02 21:37:07 crc kubenswrapper[4789]: I0202 21:37:07.884453 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5644b66645-mpkrm" Feb 02 21:37:07 crc kubenswrapper[4789]: I0202 21:37:07.885626 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5d77f4dbc9-v5pw2" event={"ID":"92bfe813-4e92-44da-8e1a-092164813972","Type":"ContainerStarted","Data":"952f53de89a823c89bde4290b9c1eaffacd2da9a522851b71d404f9162bd6c5d"} Feb 02 21:37:07 crc kubenswrapper[4789]: I0202 21:37:07.885929 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5d77f4dbc9-v5pw2" Feb 02 21:37:07 crc kubenswrapper[4789]: I0202 21:37:07.887183 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-gxpg7" event={"ID":"31505f8d-d5db-47b0-a3c7-38b45e6a6997","Type":"ContainerStarted","Data":"97e84b2b78eb6cc9678a8eb9ae38818b8445f9779b4ac9089827f3d83c3c0841"} Feb 02 21:37:07 crc kubenswrapper[4789]: I0202 21:37:07.887498 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-gxpg7" Feb 02 21:37:07 crc kubenswrapper[4789]: I0202 21:37:07.949119 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-b8vjj" podStartSLOduration=3.43345859 podStartE2EDuration="15.949103878s" podCreationTimestamp="2026-02-02 21:36:52 +0000 UTC" firstStartedPulling="2026-02-02 21:36:54.094263721 +0000 UTC m=+1034.389288740" lastFinishedPulling="2026-02-02 21:37:06.609909009 +0000 UTC m=+1046.904934028" observedRunningTime="2026-02-02 21:37:07.91845675 +0000 UTC m=+1048.213481769" watchObservedRunningTime="2026-02-02 21:37:07.949103878 +0000 UTC m=+1048.244128897" Feb 02 21:37:07 crc kubenswrapper[4789]: I0202 21:37:07.980596 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-5zgft" podStartSLOduration=3.461150635 podStartE2EDuration="15.98056635s" podCreationTimestamp="2026-02-02 21:36:52 +0000 UTC" firstStartedPulling="2026-02-02 21:36:54.073823631 +0000 UTC m=+1034.368848650" lastFinishedPulling="2026-02-02 21:37:06.593239346 +0000 UTC m=+1046.888264365" observedRunningTime="2026-02-02 21:37:07.977985107 +0000 UTC m=+1048.273010126" watchObservedRunningTime="2026-02-02 21:37:07.98056635 +0000 UTC m=+1048.275591369" Feb 02 21:37:07 crc kubenswrapper[4789]: I0202 21:37:07.981061 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-gxpg7" podStartSLOduration=3.410057517 podStartE2EDuration="15.981057024s" podCreationTimestamp="2026-02-02 21:36:52 +0000 UTC" firstStartedPulling="2026-02-02 21:36:54.039071746 +0000 UTC m=+1034.334096765" lastFinishedPulling="2026-02-02 21:37:06.610071263 +0000 UTC m=+1046.905096272" observedRunningTime="2026-02-02 21:37:07.950138198 +0000 UTC m=+1048.245163217" watchObservedRunningTime="2026-02-02 21:37:07.981057024 +0000 UTC m=+1048.276082043" Feb 02 
21:37:08 crc kubenswrapper[4789]: I0202 21:37:08.001695 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-dntn6" podStartSLOduration=2.8813920360000003 podStartE2EDuration="16.001680838s" podCreationTimestamp="2026-02-02 21:36:52 +0000 UTC" firstStartedPulling="2026-02-02 21:36:53.4685738 +0000 UTC m=+1033.763598819" lastFinishedPulling="2026-02-02 21:37:06.588862582 +0000 UTC m=+1046.883887621" observedRunningTime="2026-02-02 21:37:07.99962197 +0000 UTC m=+1048.294646989" watchObservedRunningTime="2026-02-02 21:37:08.001680838 +0000 UTC m=+1048.296705877" Feb 02 21:37:08 crc kubenswrapper[4789]: I0202 21:37:08.018666 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-svkqs" podStartSLOduration=2.8724279409999998 podStartE2EDuration="16.018651839s" podCreationTimestamp="2026-02-02 21:36:52 +0000 UTC" firstStartedPulling="2026-02-02 21:36:53.465911884 +0000 UTC m=+1033.760936903" lastFinishedPulling="2026-02-02 21:37:06.612135782 +0000 UTC m=+1046.907160801" observedRunningTime="2026-02-02 21:37:08.01655366 +0000 UTC m=+1048.311578669" watchObservedRunningTime="2026-02-02 21:37:08.018651839 +0000 UTC m=+1048.313676858" Feb 02 21:37:08 crc kubenswrapper[4789]: I0202 21:37:08.049693 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-pqj67" podStartSLOduration=3.488713996 podStartE2EDuration="16.049671758s" podCreationTimestamp="2026-02-02 21:36:52 +0000 UTC" firstStartedPulling="2026-02-02 21:36:54.131605489 +0000 UTC m=+1034.426630508" lastFinishedPulling="2026-02-02 21:37:06.692563251 +0000 UTC m=+1046.987588270" observedRunningTime="2026-02-02 21:37:08.04902571 +0000 UTC m=+1048.344050719" watchObservedRunningTime="2026-02-02 21:37:08.049671758 +0000 UTC m=+1048.344696777" Feb 02 21:37:08 crc kubenswrapper[4789]: I0202 21:37:08.073128 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-rgs76" podStartSLOduration=3.129066383 podStartE2EDuration="16.073112102s" podCreationTimestamp="2026-02-02 21:36:52 +0000 UTC" firstStartedPulling="2026-02-02 21:36:53.616276525 +0000 UTC m=+1033.911301544" lastFinishedPulling="2026-02-02 21:37:06.560322244 +0000 UTC m=+1046.855347263" observedRunningTime="2026-02-02 21:37:08.068354168 +0000 UTC m=+1048.363379187" watchObservedRunningTime="2026-02-02 21:37:08.073112102 +0000 UTC m=+1048.368137121" Feb 02 21:37:08 crc kubenswrapper[4789]: I0202 21:37:08.092092 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-565849b54-th2wj" podStartSLOduration=3.598352383 podStartE2EDuration="16.09207455s" podCreationTimestamp="2026-02-02 21:36:52 +0000 UTC" firstStartedPulling="2026-02-02 21:36:54.119115165 +0000 UTC m=+1034.414140184" lastFinishedPulling="2026-02-02 21:37:06.612837342 +0000 UTC m=+1046.907862351" observedRunningTime="2026-02-02 21:37:08.090568157 +0000 UTC m=+1048.385593176" watchObservedRunningTime="2026-02-02 21:37:08.09207455 +0000 UTC m=+1048.387099569" Feb 02 21:37:08 crc kubenswrapper[4789]: I0202 21:37:08.110462 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-866f9bb544-47rnc" podStartSLOduration=2.651643794 
podStartE2EDuration="16.1104469s" podCreationTimestamp="2026-02-02 21:36:52 +0000 UTC" firstStartedPulling="2026-02-02 21:36:53.100084417 +0000 UTC m=+1033.395109436" lastFinishedPulling="2026-02-02 21:37:06.558887523 +0000 UTC m=+1046.853912542" observedRunningTime="2026-02-02 21:37:08.109059871 +0000 UTC m=+1048.404084890" watchObservedRunningTime="2026-02-02 21:37:08.1104469 +0000 UTC m=+1048.405471919" Feb 02 21:37:08 crc kubenswrapper[4789]: I0202 21:37:08.127848 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-7l6fz" podStartSLOduration=2.766448268 podStartE2EDuration="16.127833983s" podCreationTimestamp="2026-02-02 21:36:52 +0000 UTC" firstStartedPulling="2026-02-02 21:36:53.227450697 +0000 UTC m=+1033.522475716" lastFinishedPulling="2026-02-02 21:37:06.588836412 +0000 UTC m=+1046.883861431" observedRunningTime="2026-02-02 21:37:08.124125018 +0000 UTC m=+1048.419150057" watchObservedRunningTime="2026-02-02 21:37:08.127833983 +0000 UTC m=+1048.422859002" Feb 02 21:37:08 crc kubenswrapper[4789]: I0202 21:37:08.148507 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7775d87d9d-hzqch" podStartSLOduration=3.616351303 podStartE2EDuration="16.148494929s" podCreationTimestamp="2026-02-02 21:36:52 +0000 UTC" firstStartedPulling="2026-02-02 21:36:54.098522661 +0000 UTC m=+1034.393547680" lastFinishedPulling="2026-02-02 21:37:06.630666287 +0000 UTC m=+1046.925691306" observedRunningTime="2026-02-02 21:37:08.146817121 +0000 UTC m=+1048.441842140" watchObservedRunningTime="2026-02-02 21:37:08.148494929 +0000 UTC m=+1048.443519948" Feb 02 21:37:08 crc kubenswrapper[4789]: I0202 21:37:08.165719 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5d77f4dbc9-v5pw2" podStartSLOduration=3.13313977 podStartE2EDuration="16.165698646s" podCreationTimestamp="2026-02-02 21:36:52 +0000 UTC" firstStartedPulling="2026-02-02 21:36:53.560470464 +0000 UTC m=+1033.855495483" lastFinishedPulling="2026-02-02 21:37:06.59302934 +0000 UTC m=+1046.888054359" observedRunningTime="2026-02-02 21:37:08.162837155 +0000 UTC m=+1048.457862164" watchObservedRunningTime="2026-02-02 21:37:08.165698646 +0000 UTC m=+1048.460723665" Feb 02 21:37:08 crc kubenswrapper[4789]: I0202 21:37:08.206262 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xc8xn" podStartSLOduration=3.7162905840000002 podStartE2EDuration="16.206247175s" podCreationTimestamp="2026-02-02 21:36:52 +0000 UTC" firstStartedPulling="2026-02-02 21:36:54.120089762 +0000 UTC m=+1034.415114781" lastFinishedPulling="2026-02-02 21:37:06.610046353 +0000 UTC m=+1046.905071372" observedRunningTime="2026-02-02 21:37:08.204953219 +0000 UTC m=+1048.499978238" watchObservedRunningTime="2026-02-02 21:37:08.206247175 +0000 UTC m=+1048.501272194" Feb 02 21:37:08 crc kubenswrapper[4789]: I0202 21:37:08.291563 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-pg282" podStartSLOduration=3.237401504 podStartE2EDuration="16.291544552s" podCreationTimestamp="2026-02-02 21:36:52 +0000 UTC" firstStartedPulling="2026-02-02 21:36:53.557715376 +0000 UTC m=+1033.852740395" lastFinishedPulling="2026-02-02 21:37:06.611858424 +0000 UTC m=+1046.906883443" 
observedRunningTime="2026-02-02 21:37:08.291007427 +0000 UTC m=+1048.586032446" watchObservedRunningTime="2026-02-02 21:37:08.291544552 +0000 UTC m=+1048.586569571" Feb 02 21:37:08 crc kubenswrapper[4789]: I0202 21:37:08.295343 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5644b66645-mpkrm" podStartSLOduration=3.776056688 podStartE2EDuration="16.29533816s" podCreationTimestamp="2026-02-02 21:36:52 +0000 UTC" firstStartedPulling="2026-02-02 21:36:54.093732475 +0000 UTC m=+1034.388757494" lastFinishedPulling="2026-02-02 21:37:06.613013947 +0000 UTC m=+1046.908038966" observedRunningTime="2026-02-02 21:37:08.24771434 +0000 UTC m=+1048.542739359" watchObservedRunningTime="2026-02-02 21:37:08.29533816 +0000 UTC m=+1048.590363179" Feb 02 21:37:08 crc kubenswrapper[4789]: I0202 21:37:08.331249 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c8853e21-c77c-4220-acb8-8e469cbca718-cert\") pod \"infra-operator-controller-manager-79955696d6-r68x4\" (UID: \"c8853e21-c77c-4220-acb8-8e469cbca718\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-r68x4" Feb 02 21:37:08 crc kubenswrapper[4789]: E0202 21:37:08.331451 4789 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 21:37:08 crc kubenswrapper[4789]: E0202 21:37:08.331526 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8853e21-c77c-4220-acb8-8e469cbca718-cert podName:c8853e21-c77c-4220-acb8-8e469cbca718 nodeName:}" failed. No retries permitted until 2026-02-02 21:37:24.331508125 +0000 UTC m=+1064.626533144 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c8853e21-c77c-4220-acb8-8e469cbca718-cert") pod "infra-operator-controller-manager-79955696d6-r68x4" (UID: "c8853e21-c77c-4220-acb8-8e469cbca718") : secret "infra-operator-webhook-server-cert" not found Feb 02 21:37:08 crc kubenswrapper[4789]: I0202 21:37:08.634910 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/faa2ece3-95a2-43c2-935b-10cc966e7292-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dvqp4s\" (UID: \"faa2ece3-95a2-43c2-935b-10cc966e7292\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dvqp4s" Feb 02 21:37:08 crc kubenswrapper[4789]: I0202 21:37:08.641090 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/faa2ece3-95a2-43c2-935b-10cc966e7292-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dvqp4s\" (UID: \"faa2ece3-95a2-43c2-935b-10cc966e7292\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dvqp4s" Feb 02 21:37:08 crc kubenswrapper[4789]: I0202 21:37:08.727454 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dvqp4s" Feb 02 21:37:09 crc kubenswrapper[4789]: I0202 21:37:09.040497 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7709193e-e11f-49dc-9ffc-be57f3d0b898-webhook-certs\") pod \"openstack-operator-controller-manager-79966df5f8-95whl\" (UID: \"7709193e-e11f-49dc-9ffc-be57f3d0b898\") " pod="openstack-operators/openstack-operator-controller-manager-79966df5f8-95whl" Feb 02 21:37:09 crc kubenswrapper[4789]: I0202 21:37:09.040843 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7709193e-e11f-49dc-9ffc-be57f3d0b898-metrics-certs\") pod \"openstack-operator-controller-manager-79966df5f8-95whl\" (UID: \"7709193e-e11f-49dc-9ffc-be57f3d0b898\") " pod="openstack-operators/openstack-operator-controller-manager-79966df5f8-95whl" Feb 02 21:37:09 crc kubenswrapper[4789]: I0202 21:37:09.050366 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7709193e-e11f-49dc-9ffc-be57f3d0b898-webhook-certs\") pod \"openstack-operator-controller-manager-79966df5f8-95whl\" (UID: \"7709193e-e11f-49dc-9ffc-be57f3d0b898\") " pod="openstack-operators/openstack-operator-controller-manager-79966df5f8-95whl" Feb 02 21:37:09 crc kubenswrapper[4789]: I0202 21:37:09.053191 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7709193e-e11f-49dc-9ffc-be57f3d0b898-metrics-certs\") pod \"openstack-operator-controller-manager-79966df5f8-95whl\" (UID: \"7709193e-e11f-49dc-9ffc-be57f3d0b898\") " pod="openstack-operators/openstack-operator-controller-manager-79966df5f8-95whl" Feb 02 21:37:09 crc kubenswrapper[4789]: I0202 21:37:09.241634 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dvqp4s"] Feb 02 21:37:09 crc kubenswrapper[4789]: I0202 21:37:09.268425 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-79966df5f8-95whl" Feb 02 21:37:09 crc kubenswrapper[4789]: I0202 21:37:09.763423 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-79966df5f8-95whl"] Feb 02 21:37:09 crc kubenswrapper[4789]: W0202 21:37:09.764349 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7709193e_e11f_49dc_9ffc_be57f3d0b898.slice/crio-1b87f3006b0142d14a6fc92d8c89251aaac1df03154c281aab9209088e116a88 WatchSource:0}: Error finding container 1b87f3006b0142d14a6fc92d8c89251aaac1df03154c281aab9209088e116a88: Status 404 returned error can't find the container with id 1b87f3006b0142d14a6fc92d8c89251aaac1df03154c281aab9209088e116a88 Feb 02 21:37:09 crc kubenswrapper[4789]: I0202 21:37:09.920387 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-79966df5f8-95whl" event={"ID":"7709193e-e11f-49dc-9ffc-be57f3d0b898","Type":"ContainerStarted","Data":"1b87f3006b0142d14a6fc92d8c89251aaac1df03154c281aab9209088e116a88"} Feb 02 21:37:09 crc kubenswrapper[4789]: I0202 21:37:09.923619 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dvqp4s" event={"ID":"faa2ece3-95a2-43c2-935b-10cc966e7292","Type":"ContainerStarted","Data":"bbfb8f6eb1c45f9e2fea9f91b495964e706d54c3193b6b7d2d6422541d07e866"} Feb 02 21:37:10 crc kubenswrapper[4789]: I0202 21:37:10.932320 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-79966df5f8-95whl" event={"ID":"7709193e-e11f-49dc-9ffc-be57f3d0b898","Type":"ContainerStarted","Data":"a949d4fd99e8d8a8bef5d81eba659ff5097ef8922cf9d1055d405a6c0478fd2f"} Feb 02 21:37:10 crc kubenswrapper[4789]: I0202 21:37:10.932640 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-79966df5f8-95whl" Feb 02 21:37:10 crc kubenswrapper[4789]: I0202 21:37:10.963308 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-79966df5f8-95whl" podStartSLOduration=18.963285134 podStartE2EDuration="18.963285134s" podCreationTimestamp="2026-02-02 21:36:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:37:10.9581971 +0000 UTC m=+1051.253222129" watchObservedRunningTime="2026-02-02 21:37:10.963285134 +0000 UTC m=+1051.258310163" Feb 02 21:37:12 crc kubenswrapper[4789]: I0202 21:37:12.523128 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-7l6fz" Feb 02 21:37:12 crc kubenswrapper[4789]: I0202 21:37:12.523500 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-866f9bb544-47rnc" Feb 02 21:37:12 crc kubenswrapper[4789]: I0202 21:37:12.720282 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-svkqs" Feb 02 21:37:12 crc kubenswrapper[4789]: I0202 21:37:12.800266 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-dntn6" Feb 02 21:37:12 crc kubenswrapper[4789]: I0202 21:37:12.823933 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5d77f4dbc9-v5pw2" Feb 02 21:37:12 crc kubenswrapper[4789]: I0202 21:37:12.859044 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-rgs76" Feb 02 21:37:12 crc kubenswrapper[4789]: I0202 21:37:12.884654 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-pg282" Feb 02 21:37:12 crc kubenswrapper[4789]: I0202 21:37:12.950559 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-5zgft" Feb 02 21:37:12 crc kubenswrapper[4789]: I0202 21:37:12.958752 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-b8vjj" Feb 02 21:37:13 crc kubenswrapper[4789]: I0202 21:37:13.009173 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5644b66645-mpkrm" Feb 02 21:37:13 crc kubenswrapper[4789]: I0202 21:37:13.036518 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-gxpg7" Feb 02 21:37:13 crc kubenswrapper[4789]: I0202 21:37:13.063487 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7775d87d9d-hzqch" Feb 02 21:37:13 crc kubenswrapper[4789]: I0202 21:37:13.128127 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-pqj67" Feb 02 21:37:13 crc kubenswrapper[4789]: I0202 21:37:13.156070 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xc8xn" Feb 02 21:37:13 crc kubenswrapper[4789]: I0202 21:37:13.215785 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-565849b54-th2wj" Feb 02 21:37:15 crc kubenswrapper[4789]: I0202 21:37:15.975973 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-g6hcr" event={"ID":"19963467-1169-4cc6-99f8-efadadfcba2e","Type":"ContainerStarted","Data":"7f2e6fb0f746a747998fc99322bb5ce7a218d854ef9d67df511856e9295bcd45"} Feb 02 21:37:15 crc kubenswrapper[4789]: I0202 21:37:15.977167 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-g6hcr" Feb 02 21:37:15 crc kubenswrapper[4789]: I0202 21:37:15.978897 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-jl54m" event={"ID":"3b005885-1fa6-4f6b-b928-b0da5cd41798","Type":"ContainerStarted","Data":"84070431a2b79a3efb16a984c28439078a3c16f85ed9719980471c3bcc621f47"} Feb 02 21:37:15 crc kubenswrapper[4789]: I0202 21:37:15.979196 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/watcher-operator-controller-manager-586b95b788-jl54m" Feb 02 21:37:15 crc kubenswrapper[4789]: I0202 21:37:15.981305 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-7b89fdf75b-xv5k7" event={"ID":"88179082-d6af-4a6c-a159-262a4928c4c3","Type":"ContainerStarted","Data":"ee90380cdbb493ecef0543101c9cebf26569e40fc4606b6b03579440233d1c3e"} Feb 02 21:37:15 crc kubenswrapper[4789]: I0202 21:37:15.981558 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-7b89fdf75b-xv5k7" Feb 02 21:37:15 crc kubenswrapper[4789]: I0202 21:37:15.985204 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pn28p" event={"ID":"c82b99fc-84c7-4ff7-9662-c7cbae1d9ae5","Type":"ContainerStarted","Data":"834307c7f0c445de341f163950aa1200c0e2b5a7170a9dd7e6f6c922043e0954"} Feb 02 21:37:15 crc kubenswrapper[4789]: I0202 21:37:15.987836 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dvqp4s" event={"ID":"faa2ece3-95a2-43c2-935b-10cc966e7292","Type":"ContainerStarted","Data":"ada2dd087168fe578a6d7a7a058ac94e8736a9d31416779c0e078be2bc17d67b"} Feb 02 21:37:15 crc kubenswrapper[4789]: I0202 21:37:15.988083 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dvqp4s" Feb 02 21:37:15 crc kubenswrapper[4789]: I0202 21:37:15.990096 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-576995988b-qcnc8" event={"ID":"1fd01978-b3df-4a1c-a650-e6b182389a8d","Type":"ContainerStarted","Data":"7f43de1922ab56bb2b0ed33390bf147352e8060b75a4e15b3f10adeccb9509c3"} Feb 02 21:37:15 crc kubenswrapper[4789]: I0202 21:37:15.990794 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-576995988b-qcnc8" Feb 02 21:37:16 crc kubenswrapper[4789]: I0202 21:37:16.013090 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-g6hcr" podStartSLOduration=3.46449525 podStartE2EDuration="24.013066065s" podCreationTimestamp="2026-02-02 21:36:52 +0000 UTC" firstStartedPulling="2026-02-02 21:36:54.274113987 +0000 UTC m=+1034.569139006" lastFinishedPulling="2026-02-02 21:37:14.822684802 +0000 UTC m=+1055.117709821" observedRunningTime="2026-02-02 21:37:16.009765102 +0000 UTC m=+1056.304790151" watchObservedRunningTime="2026-02-02 21:37:16.013066065 +0000 UTC m=+1056.308091124" Feb 02 21:37:16 crc kubenswrapper[4789]: I0202 21:37:16.034750 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pn28p" podStartSLOduration=3.576915476 podStartE2EDuration="24.034730459s" podCreationTimestamp="2026-02-02 21:36:52 +0000 UTC" firstStartedPulling="2026-02-02 21:36:54.279927062 +0000 UTC m=+1034.574952081" lastFinishedPulling="2026-02-02 21:37:14.737742045 +0000 UTC m=+1055.032767064" observedRunningTime="2026-02-02 21:37:16.031278162 +0000 UTC m=+1056.326303221" watchObservedRunningTime="2026-02-02 21:37:16.034730459 +0000 UTC m=+1056.329755508" Feb 02 21:37:16 crc kubenswrapper[4789]: I0202 21:37:16.050181 4789 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack-operators/swift-operator-controller-manager-7b89fdf75b-xv5k7" podStartSLOduration=3.451946155 podStartE2EDuration="24.050165177s" podCreationTimestamp="2026-02-02 21:36:52 +0000 UTC" firstStartedPulling="2026-02-02 21:36:54.139027819 +0000 UTC m=+1034.434052838" lastFinishedPulling="2026-02-02 21:37:14.737246841 +0000 UTC m=+1055.032271860" observedRunningTime="2026-02-02 21:37:16.046996547 +0000 UTC m=+1056.342021606" watchObservedRunningTime="2026-02-02 21:37:16.050165177 +0000 UTC m=+1056.345190196" Feb 02 21:37:16 crc kubenswrapper[4789]: I0202 21:37:16.074746 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-576995988b-qcnc8" podStartSLOduration=3.496871769 podStartE2EDuration="24.074728073s" podCreationTimestamp="2026-02-02 21:36:52 +0000 UTC" firstStartedPulling="2026-02-02 21:36:54.158538372 +0000 UTC m=+1034.453563391" lastFinishedPulling="2026-02-02 21:37:14.736394676 +0000 UTC m=+1055.031419695" observedRunningTime="2026-02-02 21:37:16.06014261 +0000 UTC m=+1056.355167689" watchObservedRunningTime="2026-02-02 21:37:16.074728073 +0000 UTC m=+1056.369753092" Feb 02 21:37:16 crc kubenswrapper[4789]: I0202 21:37:16.085163 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-jl54m" podStartSLOduration=3.508876409 podStartE2EDuration="24.085147668s" podCreationTimestamp="2026-02-02 21:36:52 +0000 UTC" firstStartedPulling="2026-02-02 21:36:54.161249369 +0000 UTC m=+1034.456274388" lastFinishedPulling="2026-02-02 21:37:14.737520628 +0000 UTC m=+1055.032545647" observedRunningTime="2026-02-02 21:37:16.082108852 +0000 UTC m=+1056.377133881" watchObservedRunningTime="2026-02-02 21:37:16.085147668 +0000 UTC m=+1056.380172677" Feb 02 21:37:16 crc kubenswrapper[4789]: I0202 21:37:16.128505 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dvqp4s" podStartSLOduration=18.64762573 podStartE2EDuration="24.128489636s" podCreationTimestamp="2026-02-02 21:36:52 +0000 UTC" firstStartedPulling="2026-02-02 21:37:09.25551856 +0000 UTC m=+1049.550543579" lastFinishedPulling="2026-02-02 21:37:14.736382466 +0000 UTC m=+1055.031407485" observedRunningTime="2026-02-02 21:37:16.122172607 +0000 UTC m=+1056.417197626" watchObservedRunningTime="2026-02-02 21:37:16.128489636 +0000 UTC m=+1056.423514655" Feb 02 21:37:19 crc kubenswrapper[4789]: I0202 21:37:19.278060 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-79966df5f8-95whl" Feb 02 21:37:22 crc kubenswrapper[4789]: I0202 21:37:22.841552 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 21:37:22 crc kubenswrapper[4789]: I0202 21:37:22.841950 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 21:37:22 crc kubenswrapper[4789]: I0202 
Feb 02 21:37:22 crc kubenswrapper[4789]: I0202 21:37:22.988992 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-576995988b-qcnc8"
Feb 02 21:37:23 crc kubenswrapper[4789]: I0202 21:37:23.048572 4789 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"731cbec71f64a4bdb77752b4fd336ae74457ae3978707682a716375d9f8b1609"} pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 21:37:23 crc kubenswrapper[4789]: I0202 21:37:23.048731 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" containerID="cri-o://731cbec71f64a4bdb77752b4fd336ae74457ae3978707682a716375d9f8b1609" gracePeriod=600
Feb 02 21:37:23 crc kubenswrapper[4789]: I0202 21:37:23.180893 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-7b89fdf75b-xv5k7"
Feb 02 21:37:23 crc kubenswrapper[4789]: I0202 21:37:23.253723 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-g6hcr"
Feb 02 21:37:23 crc kubenswrapper[4789]: I0202 21:37:23.327276 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-jl54m"
Feb 02 21:37:24 crc kubenswrapper[4789]: I0202 21:37:24.059067 4789 generic.go:334] "Generic (PLEG): container finished" podID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerID="731cbec71f64a4bdb77752b4fd336ae74457ae3978707682a716375d9f8b1609" exitCode=0
Feb 02 21:37:24 crc kubenswrapper[4789]: I0202 21:37:24.059145 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" event={"ID":"bdf018b4-1451-4d37-be6e-05802b67c73e","Type":"ContainerDied","Data":"731cbec71f64a4bdb77752b4fd336ae74457ae3978707682a716375d9f8b1609"}
Feb 02 21:37:24 crc kubenswrapper[4789]: I0202 21:37:24.059634 4789 scope.go:117] "RemoveContainer" containerID="1ec54d6d2f9dd12ba4581ba8d6bcba6253f115c225597c28969e0527a84fb4af"
Feb 02 21:37:24 crc kubenswrapper[4789]: I0202 21:37:24.409256 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c8853e21-c77c-4220-acb8-8e469cbca718-cert\") pod \"infra-operator-controller-manager-79955696d6-r68x4\" (UID: \"c8853e21-c77c-4220-acb8-8e469cbca718\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-r68x4"
Feb 02 21:37:24 crc kubenswrapper[4789]: I0202 21:37:24.417330 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c8853e21-c77c-4220-acb8-8e469cbca718-cert\") pod \"infra-operator-controller-manager-79955696d6-r68x4\" (UID: \"c8853e21-c77c-4220-acb8-8e469cbca718\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-r68x4"
Feb 02 21:37:24 crc kubenswrapper[4789]: I0202 21:37:24.693050 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-r68x4"
Feb 02 21:37:24 crc kubenswrapper[4789]: I0202 21:37:24.896478 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-r68x4"]
Feb 02 21:37:24 crc kubenswrapper[4789]: W0202 21:37:24.898094 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8853e21_c77c_4220_acb8_8e469cbca718.slice/crio-9c4bc784c5ade861b23c0f5d587c6b391961a17274e7b9b23398b3a56415048c WatchSource:0}: Error finding container 9c4bc784c5ade861b23c0f5d587c6b391961a17274e7b9b23398b3a56415048c: Status 404 returned error can't find the container with id 9c4bc784c5ade861b23c0f5d587c6b391961a17274e7b9b23398b3a56415048c
Feb 02 21:37:25 crc kubenswrapper[4789]: I0202 21:37:25.068088 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-r68x4" event={"ID":"c8853e21-c77c-4220-acb8-8e469cbca718","Type":"ContainerStarted","Data":"9c4bc784c5ade861b23c0f5d587c6b391961a17274e7b9b23398b3a56415048c"}
Feb 02 21:37:25 crc kubenswrapper[4789]: I0202 21:37:25.070856 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" event={"ID":"bdf018b4-1451-4d37-be6e-05802b67c73e","Type":"ContainerStarted","Data":"58201de0dc796bafdb3ebb503e9bfcd61c6265506eb41819ac59515674816d43"}
Feb 02 21:37:27 crc kubenswrapper[4789]: I0202 21:37:27.090115 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-r68x4" event={"ID":"c8853e21-c77c-4220-acb8-8e469cbca718","Type":"ContainerStarted","Data":"7eb2f767c12af896e9ff62a64a7d6c17d48f6fa003d0827ceba2ad95ff670227"}
Feb 02 21:37:27 crc kubenswrapper[4789]: I0202 21:37:27.091045 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-r68x4"
Feb 02 21:37:27 crc kubenswrapper[4789]: I0202 21:37:27.128043 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-r68x4" podStartSLOduration=33.14273049 podStartE2EDuration="35.12801177s" podCreationTimestamp="2026-02-02 21:36:52 +0000 UTC" firstStartedPulling="2026-02-02 21:37:24.901478814 +0000 UTC m=+1065.196503853" lastFinishedPulling="2026-02-02 21:37:26.886760104 +0000 UTC m=+1067.181785133" observedRunningTime="2026-02-02 21:37:27.121040353 +0000 UTC m=+1067.416065412" watchObservedRunningTime="2026-02-02 21:37:27.12801177 +0000 UTC m=+1067.423036819"
Feb 02 21:37:28 crc kubenswrapper[4789]: I0202 21:37:28.738093 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dvqp4s"
Feb 02 21:37:34 crc kubenswrapper[4789]: I0202 21:37:34.703363 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-r68x4"
Feb 02 21:37:51 crc kubenswrapper[4789]: I0202 21:37:51.014348 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kxd86"]
Feb 02 21:37:51 crc kubenswrapper[4789]: I0202 21:37:51.026350 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-kxd86"
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-kxd86" Feb 02 21:37:51 crc kubenswrapper[4789]: I0202 21:37:51.033933 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 02 21:37:51 crc kubenswrapper[4789]: I0202 21:37:51.034136 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-7vmq6" Feb 02 21:37:51 crc kubenswrapper[4789]: I0202 21:37:51.034723 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 02 21:37:51 crc kubenswrapper[4789]: I0202 21:37:51.035864 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 02 21:37:51 crc kubenswrapper[4789]: I0202 21:37:51.038474 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kxd86"] Feb 02 21:37:51 crc kubenswrapper[4789]: I0202 21:37:51.087680 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8gw52"] Feb 02 21:37:51 crc kubenswrapper[4789]: I0202 21:37:51.088755 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8gw52" Feb 02 21:37:51 crc kubenswrapper[4789]: I0202 21:37:51.090910 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 02 21:37:51 crc kubenswrapper[4789]: I0202 21:37:51.118914 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8gw52"] Feb 02 21:37:51 crc kubenswrapper[4789]: I0202 21:37:51.124106 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b28m8\" (UniqueName: \"kubernetes.io/projected/87ac290e-0323-4c14-baa5-ac62040f0d34-kube-api-access-b28m8\") pod \"dnsmasq-dns-78dd6ddcc-8gw52\" (UID: \"87ac290e-0323-4c14-baa5-ac62040f0d34\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8gw52" Feb 02 21:37:51 crc kubenswrapper[4789]: I0202 21:37:51.124213 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc88w\" (UniqueName: \"kubernetes.io/projected/ac4eb40d-807a-43dd-bcc1-ec832d4b5311-kube-api-access-rc88w\") pod \"dnsmasq-dns-675f4bcbfc-kxd86\" (UID: \"ac4eb40d-807a-43dd-bcc1-ec832d4b5311\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kxd86" Feb 02 21:37:51 crc kubenswrapper[4789]: I0202 21:37:51.124253 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87ac290e-0323-4c14-baa5-ac62040f0d34-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-8gw52\" (UID: \"87ac290e-0323-4c14-baa5-ac62040f0d34\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8gw52" Feb 02 21:37:51 crc kubenswrapper[4789]: I0202 21:37:51.124274 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac4eb40d-807a-43dd-bcc1-ec832d4b5311-config\") pod \"dnsmasq-dns-675f4bcbfc-kxd86\" (UID: \"ac4eb40d-807a-43dd-bcc1-ec832d4b5311\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kxd86" Feb 02 21:37:51 crc kubenswrapper[4789]: I0202 21:37:51.124321 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87ac290e-0323-4c14-baa5-ac62040f0d34-config\") pod \"dnsmasq-dns-78dd6ddcc-8gw52\" (UID: \"87ac290e-0323-4c14-baa5-ac62040f0d34\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-8gw52" Feb 02 21:37:51 crc kubenswrapper[4789]: I0202 21:37:51.225954 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc88w\" (UniqueName: \"kubernetes.io/projected/ac4eb40d-807a-43dd-bcc1-ec832d4b5311-kube-api-access-rc88w\") pod \"dnsmasq-dns-675f4bcbfc-kxd86\" (UID: \"ac4eb40d-807a-43dd-bcc1-ec832d4b5311\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kxd86" Feb 02 21:37:51 crc kubenswrapper[4789]: I0202 21:37:51.226359 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac4eb40d-807a-43dd-bcc1-ec832d4b5311-config\") pod \"dnsmasq-dns-675f4bcbfc-kxd86\" (UID: \"ac4eb40d-807a-43dd-bcc1-ec832d4b5311\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kxd86" Feb 02 21:37:51 crc kubenswrapper[4789]: I0202 21:37:51.226391 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87ac290e-0323-4c14-baa5-ac62040f0d34-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-8gw52\" (UID: \"87ac290e-0323-4c14-baa5-ac62040f0d34\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8gw52" Feb 02 21:37:51 crc kubenswrapper[4789]: I0202 21:37:51.226433 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87ac290e-0323-4c14-baa5-ac62040f0d34-config\") pod \"dnsmasq-dns-78dd6ddcc-8gw52\" (UID: \"87ac290e-0323-4c14-baa5-ac62040f0d34\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8gw52" Feb 02 21:37:51 crc kubenswrapper[4789]: I0202 21:37:51.226479 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b28m8\" (UniqueName: \"kubernetes.io/projected/87ac290e-0323-4c14-baa5-ac62040f0d34-kube-api-access-b28m8\") pod \"dnsmasq-dns-78dd6ddcc-8gw52\" (UID: \"87ac290e-0323-4c14-baa5-ac62040f0d34\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8gw52" Feb 02 21:37:51 crc kubenswrapper[4789]: I0202 21:37:51.227423 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac4eb40d-807a-43dd-bcc1-ec832d4b5311-config\") pod \"dnsmasq-dns-675f4bcbfc-kxd86\" (UID: \"ac4eb40d-807a-43dd-bcc1-ec832d4b5311\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kxd86" Feb 02 21:37:51 crc kubenswrapper[4789]: I0202 21:37:51.228091 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87ac290e-0323-4c14-baa5-ac62040f0d34-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-8gw52\" (UID: \"87ac290e-0323-4c14-baa5-ac62040f0d34\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8gw52" Feb 02 21:37:51 crc kubenswrapper[4789]: I0202 21:37:51.229026 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87ac290e-0323-4c14-baa5-ac62040f0d34-config\") pod \"dnsmasq-dns-78dd6ddcc-8gw52\" (UID: \"87ac290e-0323-4c14-baa5-ac62040f0d34\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8gw52" Feb 02 21:37:51 crc kubenswrapper[4789]: I0202 21:37:51.249415 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc88w\" (UniqueName: \"kubernetes.io/projected/ac4eb40d-807a-43dd-bcc1-ec832d4b5311-kube-api-access-rc88w\") pod \"dnsmasq-dns-675f4bcbfc-kxd86\" (UID: \"ac4eb40d-807a-43dd-bcc1-ec832d4b5311\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kxd86" Feb 02 21:37:51 crc kubenswrapper[4789]: I0202 21:37:51.252511 4789 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-b28m8\" (UniqueName: \"kubernetes.io/projected/87ac290e-0323-4c14-baa5-ac62040f0d34-kube-api-access-b28m8\") pod \"dnsmasq-dns-78dd6ddcc-8gw52\" (UID: \"87ac290e-0323-4c14-baa5-ac62040f0d34\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8gw52" Feb 02 21:37:51 crc kubenswrapper[4789]: I0202 21:37:51.347466 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-kxd86" Feb 02 21:37:51 crc kubenswrapper[4789]: I0202 21:37:51.421113 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8gw52" Feb 02 21:37:51 crc kubenswrapper[4789]: I0202 21:37:51.759237 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kxd86"] Feb 02 21:37:51 crc kubenswrapper[4789]: I0202 21:37:51.762410 4789 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 21:37:51 crc kubenswrapper[4789]: I0202 21:37:51.946153 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8gw52"] Feb 02 21:37:51 crc kubenswrapper[4789]: W0202 21:37:51.950939 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87ac290e_0323_4c14_baa5_ac62040f0d34.slice/crio-3ae8222c263c0a276a90af12925b14f2545632ff3d8ab355cc437809a82b7afb WatchSource:0}: Error finding container 3ae8222c263c0a276a90af12925b14f2545632ff3d8ab355cc437809a82b7afb: Status 404 returned error can't find the container with id 3ae8222c263c0a276a90af12925b14f2545632ff3d8ab355cc437809a82b7afb Feb 02 21:37:52 crc kubenswrapper[4789]: I0202 21:37:52.307022 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-8gw52" event={"ID":"87ac290e-0323-4c14-baa5-ac62040f0d34","Type":"ContainerStarted","Data":"3ae8222c263c0a276a90af12925b14f2545632ff3d8ab355cc437809a82b7afb"} Feb 02 21:37:52 crc kubenswrapper[4789]: I0202 21:37:52.311038 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-kxd86" event={"ID":"ac4eb40d-807a-43dd-bcc1-ec832d4b5311","Type":"ContainerStarted","Data":"00f2bd6141425108b6e1f38eee4d19f1196ee1668e4647b90d103b30d6400831"} Feb 02 21:37:53 crc kubenswrapper[4789]: I0202 21:37:53.875523 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kxd86"] Feb 02 21:37:53 crc kubenswrapper[4789]: I0202 21:37:53.894301 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-6l85z"] Feb 02 21:37:53 crc kubenswrapper[4789]: I0202 21:37:53.895649 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-6l85z" Feb 02 21:37:53 crc kubenswrapper[4789]: I0202 21:37:53.914679 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-6l85z"] Feb 02 21:37:53 crc kubenswrapper[4789]: I0202 21:37:53.968898 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db-config\") pod \"dnsmasq-dns-666b6646f7-6l85z\" (UID: \"dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db\") " pod="openstack/dnsmasq-dns-666b6646f7-6l85z" Feb 02 21:37:53 crc kubenswrapper[4789]: I0202 21:37:53.968975 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj4hb\" (UniqueName: \"kubernetes.io/projected/dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db-kube-api-access-bj4hb\") pod \"dnsmasq-dns-666b6646f7-6l85z\" (UID: \"dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db\") " pod="openstack/dnsmasq-dns-666b6646f7-6l85z" Feb 02 21:37:53 crc kubenswrapper[4789]: I0202 21:37:53.969021 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db-dns-svc\") pod \"dnsmasq-dns-666b6646f7-6l85z\" (UID: \"dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db\") " pod="openstack/dnsmasq-dns-666b6646f7-6l85z" Feb 02 21:37:54 crc kubenswrapper[4789]: I0202 21:37:54.075346 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj4hb\" (UniqueName: \"kubernetes.io/projected/dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db-kube-api-access-bj4hb\") pod \"dnsmasq-dns-666b6646f7-6l85z\" (UID: \"dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db\") " pod="openstack/dnsmasq-dns-666b6646f7-6l85z" Feb 02 21:37:54 crc kubenswrapper[4789]: I0202 21:37:54.075424 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db-dns-svc\") pod \"dnsmasq-dns-666b6646f7-6l85z\" (UID: \"dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db\") " pod="openstack/dnsmasq-dns-666b6646f7-6l85z" Feb 02 21:37:54 crc kubenswrapper[4789]: I0202 21:37:54.075483 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db-config\") pod \"dnsmasq-dns-666b6646f7-6l85z\" (UID: \"dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db\") " pod="openstack/dnsmasq-dns-666b6646f7-6l85z" Feb 02 21:37:54 crc kubenswrapper[4789]: I0202 21:37:54.076356 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db-dns-svc\") pod \"dnsmasq-dns-666b6646f7-6l85z\" (UID: \"dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db\") " pod="openstack/dnsmasq-dns-666b6646f7-6l85z" Feb 02 21:37:54 crc kubenswrapper[4789]: I0202 21:37:54.076404 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db-config\") pod \"dnsmasq-dns-666b6646f7-6l85z\" (UID: \"dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db\") " pod="openstack/dnsmasq-dns-666b6646f7-6l85z" Feb 02 21:37:54 crc kubenswrapper[4789]: I0202 21:37:54.102681 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj4hb\" (UniqueName: 
\"kubernetes.io/projected/dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db-kube-api-access-bj4hb\") pod \"dnsmasq-dns-666b6646f7-6l85z\" (UID: \"dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db\") " pod="openstack/dnsmasq-dns-666b6646f7-6l85z" Feb 02 21:37:54 crc kubenswrapper[4789]: I0202 21:37:54.156304 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8gw52"] Feb 02 21:37:54 crc kubenswrapper[4789]: I0202 21:37:54.197837 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5lz8j"] Feb 02 21:37:54 crc kubenswrapper[4789]: I0202 21:37:54.203353 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-5lz8j" Feb 02 21:37:54 crc kubenswrapper[4789]: I0202 21:37:54.213471 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5lz8j"] Feb 02 21:37:54 crc kubenswrapper[4789]: I0202 21:37:54.220307 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-6l85z" Feb 02 21:37:54 crc kubenswrapper[4789]: I0202 21:37:54.277908 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ffc8320-6bb1-4763-a98e-5c86314a4ec4-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-5lz8j\" (UID: \"1ffc8320-6bb1-4763-a98e-5c86314a4ec4\") " pod="openstack/dnsmasq-dns-57d769cc4f-5lz8j" Feb 02 21:37:54 crc kubenswrapper[4789]: I0202 21:37:54.277971 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ffc8320-6bb1-4763-a98e-5c86314a4ec4-config\") pod \"dnsmasq-dns-57d769cc4f-5lz8j\" (UID: \"1ffc8320-6bb1-4763-a98e-5c86314a4ec4\") " pod="openstack/dnsmasq-dns-57d769cc4f-5lz8j" Feb 02 21:37:54 crc kubenswrapper[4789]: I0202 21:37:54.278024 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4ngc\" (UniqueName: \"kubernetes.io/projected/1ffc8320-6bb1-4763-a98e-5c86314a4ec4-kube-api-access-r4ngc\") pod \"dnsmasq-dns-57d769cc4f-5lz8j\" (UID: \"1ffc8320-6bb1-4763-a98e-5c86314a4ec4\") " pod="openstack/dnsmasq-dns-57d769cc4f-5lz8j" Feb 02 21:37:54 crc kubenswrapper[4789]: I0202 21:37:54.379847 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ffc8320-6bb1-4763-a98e-5c86314a4ec4-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-5lz8j\" (UID: \"1ffc8320-6bb1-4763-a98e-5c86314a4ec4\") " pod="openstack/dnsmasq-dns-57d769cc4f-5lz8j" Feb 02 21:37:54 crc kubenswrapper[4789]: I0202 21:37:54.379902 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ffc8320-6bb1-4763-a98e-5c86314a4ec4-config\") pod \"dnsmasq-dns-57d769cc4f-5lz8j\" (UID: \"1ffc8320-6bb1-4763-a98e-5c86314a4ec4\") " pod="openstack/dnsmasq-dns-57d769cc4f-5lz8j" Feb 02 21:37:54 crc kubenswrapper[4789]: I0202 21:37:54.379955 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4ngc\" (UniqueName: \"kubernetes.io/projected/1ffc8320-6bb1-4763-a98e-5c86314a4ec4-kube-api-access-r4ngc\") pod \"dnsmasq-dns-57d769cc4f-5lz8j\" (UID: \"1ffc8320-6bb1-4763-a98e-5c86314a4ec4\") " pod="openstack/dnsmasq-dns-57d769cc4f-5lz8j" Feb 02 21:37:54 crc kubenswrapper[4789]: I0202 21:37:54.380934 4789 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ffc8320-6bb1-4763-a98e-5c86314a4ec4-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-5lz8j\" (UID: \"1ffc8320-6bb1-4763-a98e-5c86314a4ec4\") " pod="openstack/dnsmasq-dns-57d769cc4f-5lz8j" Feb 02 21:37:54 crc kubenswrapper[4789]: I0202 21:37:54.381783 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ffc8320-6bb1-4763-a98e-5c86314a4ec4-config\") pod \"dnsmasq-dns-57d769cc4f-5lz8j\" (UID: \"1ffc8320-6bb1-4763-a98e-5c86314a4ec4\") " pod="openstack/dnsmasq-dns-57d769cc4f-5lz8j" Feb 02 21:37:54 crc kubenswrapper[4789]: I0202 21:37:54.397824 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4ngc\" (UniqueName: \"kubernetes.io/projected/1ffc8320-6bb1-4763-a98e-5c86314a4ec4-kube-api-access-r4ngc\") pod \"dnsmasq-dns-57d769cc4f-5lz8j\" (UID: \"1ffc8320-6bb1-4763-a98e-5c86314a4ec4\") " pod="openstack/dnsmasq-dns-57d769cc4f-5lz8j" Feb 02 21:37:54 crc kubenswrapper[4789]: I0202 21:37:54.521338 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-5lz8j" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.036345 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.037591 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.039419 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.039452 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.039469 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.044010 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-kqgk2" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.044165 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.044267 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.046086 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.050778 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.090990 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b4db4b23-dae0-42a5-ad47-3336073d0b6a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " pod="openstack/rabbitmq-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.091043 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcfw4\" (UniqueName: \"kubernetes.io/projected/b4db4b23-dae0-42a5-ad47-3336073d0b6a-kube-api-access-qcfw4\") pod \"rabbitmq-server-0\" 
(UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " pod="openstack/rabbitmq-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.091066 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b4db4b23-dae0-42a5-ad47-3336073d0b6a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " pod="openstack/rabbitmq-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.091081 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b4db4b23-dae0-42a5-ad47-3336073d0b6a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " pod="openstack/rabbitmq-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.091105 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b4db4b23-dae0-42a5-ad47-3336073d0b6a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " pod="openstack/rabbitmq-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.091128 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b4db4b23-dae0-42a5-ad47-3336073d0b6a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " pod="openstack/rabbitmq-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.091145 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b4db4b23-dae0-42a5-ad47-3336073d0b6a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " pod="openstack/rabbitmq-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.091163 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b4db4b23-dae0-42a5-ad47-3336073d0b6a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " pod="openstack/rabbitmq-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.091254 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " pod="openstack/rabbitmq-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.091285 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b4db4b23-dae0-42a5-ad47-3336073d0b6a-config-data\") pod \"rabbitmq-server-0\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " pod="openstack/rabbitmq-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.091522 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b4db4b23-dae0-42a5-ad47-3336073d0b6a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " pod="openstack/rabbitmq-server-0" Feb 02 21:37:55 crc 
Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.193125 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " pod="openstack/rabbitmq-server-0"
Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.193169 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b4db4b23-dae0-42a5-ad47-3336073d0b6a-config-data\") pod \"rabbitmq-server-0\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " pod="openstack/rabbitmq-server-0"
Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.193198 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b4db4b23-dae0-42a5-ad47-3336073d0b6a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " pod="openstack/rabbitmq-server-0"
Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.193249 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b4db4b23-dae0-42a5-ad47-3336073d0b6a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " pod="openstack/rabbitmq-server-0"
Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.193294 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcfw4\" (UniqueName: \"kubernetes.io/projected/b4db4b23-dae0-42a5-ad47-3336073d0b6a-kube-api-access-qcfw4\") pod \"rabbitmq-server-0\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " pod="openstack/rabbitmq-server-0"
Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.193319 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b4db4b23-dae0-42a5-ad47-3336073d0b6a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " pod="openstack/rabbitmq-server-0"
Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.193339 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b4db4b23-dae0-42a5-ad47-3336073d0b6a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " pod="openstack/rabbitmq-server-0"
Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.193368 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b4db4b23-dae0-42a5-ad47-3336073d0b6a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " pod="openstack/rabbitmq-server-0"
Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.193397 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b4db4b23-dae0-42a5-ad47-3336073d0b6a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " pod="openstack/rabbitmq-server-0"
Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.193425 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b4db4b23-dae0-42a5-ad47-3336073d0b6a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " pod="openstack/rabbitmq-server-0"
Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.193448 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b4db4b23-dae0-42a5-ad47-3336073d0b6a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " pod="openstack/rabbitmq-server-0"
Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.194686 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b4db4b23-dae0-42a5-ad47-3336073d0b6a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " pod="openstack/rabbitmq-server-0"
Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.195740 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b4db4b23-dae0-42a5-ad47-3336073d0b6a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " pod="openstack/rabbitmq-server-0"
Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.195864 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b4db4b23-dae0-42a5-ad47-3336073d0b6a-config-data\") pod \"rabbitmq-server-0\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " pod="openstack/rabbitmq-server-0"
Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.196162 4789 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0"
Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.198029 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b4db4b23-dae0-42a5-ad47-3336073d0b6a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " pod="openstack/rabbitmq-server-0"
Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.198808 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b4db4b23-dae0-42a5-ad47-3336073d0b6a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " pod="openstack/rabbitmq-server-0"
Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.199338 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b4db4b23-dae0-42a5-ad47-3336073d0b6a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " pod="openstack/rabbitmq-server-0"
Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.199478 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b4db4b23-dae0-42a5-ad47-3336073d0b6a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " pod="openstack/rabbitmq-server-0"
Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.205043 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b4db4b23-dae0-42a5-ad47-3336073d0b6a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " pod="openstack/rabbitmq-server-0"
\"kubernetes.io/empty-dir/b4db4b23-dae0-42a5-ad47-3336073d0b6a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " pod="openstack/rabbitmq-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.206400 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b4db4b23-dae0-42a5-ad47-3336073d0b6a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " pod="openstack/rabbitmq-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.212048 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcfw4\" (UniqueName: \"kubernetes.io/projected/b4db4b23-dae0-42a5-ad47-3336073d0b6a-kube-api-access-qcfw4\") pod \"rabbitmq-server-0\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " pod="openstack/rabbitmq-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.215324 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " pod="openstack/rabbitmq-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.342720 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.343934 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.347873 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.348038 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.348240 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.348380 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-2czc8" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.348424 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.348536 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.348687 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.348806 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.375767 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.394835 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wzbn\" (UniqueName: \"kubernetes.io/projected/b8917d54-451e-4a56-9e8a-142bb5db17e1-kube-api-access-7wzbn\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.394880 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b8917d54-451e-4a56-9e8a-142bb5db17e1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.394907 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b8917d54-451e-4a56-9e8a-142bb5db17e1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.394946 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b8917d54-451e-4a56-9e8a-142bb5db17e1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.394979 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.394997 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b8917d54-451e-4a56-9e8a-142bb5db17e1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.395030 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b8917d54-451e-4a56-9e8a-142bb5db17e1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.395068 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b8917d54-451e-4a56-9e8a-142bb5db17e1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.395121 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b8917d54-451e-4a56-9e8a-142bb5db17e1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.395151 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b8917d54-451e-4a56-9e8a-142bb5db17e1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.395224 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b8917d54-451e-4a56-9e8a-142bb5db17e1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.496290 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.496340 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b8917d54-451e-4a56-9e8a-142bb5db17e1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.496380 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b8917d54-451e-4a56-9e8a-142bb5db17e1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.496421 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b8917d54-451e-4a56-9e8a-142bb5db17e1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.496454 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b8917d54-451e-4a56-9e8a-142bb5db17e1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.496482 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b8917d54-451e-4a56-9e8a-142bb5db17e1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.496481 4789 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 
21:37:55.496525 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b8917d54-451e-4a56-9e8a-142bb5db17e1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.496594 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b8917d54-451e-4a56-9e8a-142bb5db17e1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.496619 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wzbn\" (UniqueName: \"kubernetes.io/projected/b8917d54-451e-4a56-9e8a-142bb5db17e1-kube-api-access-7wzbn\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.496646 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b8917d54-451e-4a56-9e8a-142bb5db17e1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.496689 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b8917d54-451e-4a56-9e8a-142bb5db17e1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.497518 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b8917d54-451e-4a56-9e8a-142bb5db17e1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.498986 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b8917d54-451e-4a56-9e8a-142bb5db17e1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.500442 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b8917d54-451e-4a56-9e8a-142bb5db17e1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.500950 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b8917d54-451e-4a56-9e8a-142bb5db17e1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.501013 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/b8917d54-451e-4a56-9e8a-142bb5db17e1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.501516 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b8917d54-451e-4a56-9e8a-142bb5db17e1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.504051 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b8917d54-451e-4a56-9e8a-142bb5db17e1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.507421 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b8917d54-451e-4a56-9e8a-142bb5db17e1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.507925 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b8917d54-451e-4a56-9e8a-142bb5db17e1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.516129 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wzbn\" (UniqueName: \"kubernetes.io/projected/b8917d54-451e-4a56-9e8a-142bb5db17e1-kube-api-access-7wzbn\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.525075 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 21:37:55 crc kubenswrapper[4789]: I0202 21:37:55.699045 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 21:37:56 crc kubenswrapper[4789]: I0202 21:37:56.705245 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 02 21:37:56 crc kubenswrapper[4789]: I0202 21:37:56.706928 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 02 21:37:56 crc kubenswrapper[4789]: I0202 21:37:56.708953 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-8n86d" Feb 02 21:37:56 crc kubenswrapper[4789]: I0202 21:37:56.709749 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 02 21:37:56 crc kubenswrapper[4789]: I0202 21:37:56.709958 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 02 21:37:56 crc kubenswrapper[4789]: I0202 21:37:56.710326 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 02 21:37:56 crc kubenswrapper[4789]: I0202 21:37:56.723495 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 02 21:37:56 crc kubenswrapper[4789]: I0202 21:37:56.729867 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 02 21:37:56 crc kubenswrapper[4789]: I0202 21:37:56.817059 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a77ac0de-f396-45e6-a92c-07fbddc4ec60-kolla-config\") pod \"openstack-galera-0\" (UID: \"a77ac0de-f396-45e6-a92c-07fbddc4ec60\") " pod="openstack/openstack-galera-0" Feb 02 21:37:56 crc kubenswrapper[4789]: I0202 21:37:56.817122 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a77ac0de-f396-45e6-a92c-07fbddc4ec60-config-data-default\") pod \"openstack-galera-0\" (UID: \"a77ac0de-f396-45e6-a92c-07fbddc4ec60\") " pod="openstack/openstack-galera-0" Feb 02 21:37:56 crc kubenswrapper[4789]: I0202 21:37:56.817152 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a77ac0de-f396-45e6-a92c-07fbddc4ec60-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a77ac0de-f396-45e6-a92c-07fbddc4ec60\") " pod="openstack/openstack-galera-0" Feb 02 21:37:56 crc kubenswrapper[4789]: I0202 21:37:56.817170 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a77ac0de-f396-45e6-a92c-07fbddc4ec60-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a77ac0de-f396-45e6-a92c-07fbddc4ec60\") " pod="openstack/openstack-galera-0" Feb 02 21:37:56 crc kubenswrapper[4789]: I0202 21:37:56.817289 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"a77ac0de-f396-45e6-a92c-07fbddc4ec60\") " pod="openstack/openstack-galera-0" Feb 02 21:37:56 crc kubenswrapper[4789]: I0202 21:37:56.817346 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a77ac0de-f396-45e6-a92c-07fbddc4ec60-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a77ac0de-f396-45e6-a92c-07fbddc4ec60\") " pod="openstack/openstack-galera-0" Feb 02 21:37:56 crc kubenswrapper[4789]: I0202 21:37:56.817363 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a77ac0de-f396-45e6-a92c-07fbddc4ec60-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a77ac0de-f396-45e6-a92c-07fbddc4ec60\") " pod="openstack/openstack-galera-0" Feb 02 21:37:56 crc kubenswrapper[4789]: I0202 21:37:56.817380 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws68l\" (UniqueName: \"kubernetes.io/projected/a77ac0de-f396-45e6-a92c-07fbddc4ec60-kube-api-access-ws68l\") pod \"openstack-galera-0\" (UID: \"a77ac0de-f396-45e6-a92c-07fbddc4ec60\") " pod="openstack/openstack-galera-0" Feb 02 21:37:56 crc kubenswrapper[4789]: I0202 21:37:56.918728 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a77ac0de-f396-45e6-a92c-07fbddc4ec60-kolla-config\") pod \"openstack-galera-0\" (UID: \"a77ac0de-f396-45e6-a92c-07fbddc4ec60\") " pod="openstack/openstack-galera-0" Feb 02 21:37:56 crc kubenswrapper[4789]: I0202 21:37:56.918787 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a77ac0de-f396-45e6-a92c-07fbddc4ec60-config-data-default\") pod \"openstack-galera-0\" (UID: \"a77ac0de-f396-45e6-a92c-07fbddc4ec60\") " pod="openstack/openstack-galera-0" Feb 02 21:37:56 crc kubenswrapper[4789]: I0202 21:37:56.918818 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a77ac0de-f396-45e6-a92c-07fbddc4ec60-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a77ac0de-f396-45e6-a92c-07fbddc4ec60\") " pod="openstack/openstack-galera-0" Feb 02 21:37:56 crc kubenswrapper[4789]: I0202 21:37:56.918835 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a77ac0de-f396-45e6-a92c-07fbddc4ec60-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a77ac0de-f396-45e6-a92c-07fbddc4ec60\") " pod="openstack/openstack-galera-0" Feb 02 21:37:56 crc kubenswrapper[4789]: I0202 21:37:56.918865 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"a77ac0de-f396-45e6-a92c-07fbddc4ec60\") " pod="openstack/openstack-galera-0" Feb 02 21:37:56 crc kubenswrapper[4789]: I0202 21:37:56.918926 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a77ac0de-f396-45e6-a92c-07fbddc4ec60-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a77ac0de-f396-45e6-a92c-07fbddc4ec60\") " pod="openstack/openstack-galera-0" Feb 02 21:37:56 crc kubenswrapper[4789]: I0202 21:37:56.918947 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a77ac0de-f396-45e6-a92c-07fbddc4ec60-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a77ac0de-f396-45e6-a92c-07fbddc4ec60\") " pod="openstack/openstack-galera-0" Feb 02 21:37:56 crc kubenswrapper[4789]: I0202 21:37:56.918968 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws68l\" (UniqueName: \"kubernetes.io/projected/a77ac0de-f396-45e6-a92c-07fbddc4ec60-kube-api-access-ws68l\") pod \"openstack-galera-0\" (UID: 
\"a77ac0de-f396-45e6-a92c-07fbddc4ec60\") " pod="openstack/openstack-galera-0" Feb 02 21:37:56 crc kubenswrapper[4789]: I0202 21:37:56.919413 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a77ac0de-f396-45e6-a92c-07fbddc4ec60-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a77ac0de-f396-45e6-a92c-07fbddc4ec60\") " pod="openstack/openstack-galera-0" Feb 02 21:37:56 crc kubenswrapper[4789]: I0202 21:37:56.919423 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a77ac0de-f396-45e6-a92c-07fbddc4ec60-kolla-config\") pod \"openstack-galera-0\" (UID: \"a77ac0de-f396-45e6-a92c-07fbddc4ec60\") " pod="openstack/openstack-galera-0" Feb 02 21:37:56 crc kubenswrapper[4789]: I0202 21:37:56.919550 4789 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"a77ac0de-f396-45e6-a92c-07fbddc4ec60\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-galera-0" Feb 02 21:37:56 crc kubenswrapper[4789]: I0202 21:37:56.920290 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a77ac0de-f396-45e6-a92c-07fbddc4ec60-config-data-default\") pod \"openstack-galera-0\" (UID: \"a77ac0de-f396-45e6-a92c-07fbddc4ec60\") " pod="openstack/openstack-galera-0" Feb 02 21:37:56 crc kubenswrapper[4789]: I0202 21:37:56.920711 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a77ac0de-f396-45e6-a92c-07fbddc4ec60-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a77ac0de-f396-45e6-a92c-07fbddc4ec60\") " pod="openstack/openstack-galera-0" Feb 02 21:37:56 crc kubenswrapper[4789]: I0202 21:37:56.923409 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a77ac0de-f396-45e6-a92c-07fbddc4ec60-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a77ac0de-f396-45e6-a92c-07fbddc4ec60\") " pod="openstack/openstack-galera-0" Feb 02 21:37:56 crc kubenswrapper[4789]: I0202 21:37:56.924204 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a77ac0de-f396-45e6-a92c-07fbddc4ec60-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a77ac0de-f396-45e6-a92c-07fbddc4ec60\") " pod="openstack/openstack-galera-0" Feb 02 21:37:56 crc kubenswrapper[4789]: I0202 21:37:56.941778 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"a77ac0de-f396-45e6-a92c-07fbddc4ec60\") " pod="openstack/openstack-galera-0" Feb 02 21:37:56 crc kubenswrapper[4789]: I0202 21:37:56.943003 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws68l\" (UniqueName: \"kubernetes.io/projected/a77ac0de-f396-45e6-a92c-07fbddc4ec60-kube-api-access-ws68l\") pod \"openstack-galera-0\" (UID: \"a77ac0de-f396-45e6-a92c-07fbddc4ec60\") " pod="openstack/openstack-galera-0" Feb 02 21:37:57 crc kubenswrapper[4789]: I0202 21:37:57.046322 4789 util.go:30] "No sandbox for pod can be found. 
Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.130878 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.131966 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.133521 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-96qfl"
Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.134679 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.134802 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.135228 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.159742 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.238519 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/96f4773a-9fa9-41c6-ab4b-54107e66a498-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"96f4773a-9fa9-41c6-ab4b-54107e66a498\") " pod="openstack/openstack-cell1-galera-0"
Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.238641 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc5k4\" (UniqueName: \"kubernetes.io/projected/96f4773a-9fa9-41c6-ab4b-54107e66a498-kube-api-access-gc5k4\") pod \"openstack-cell1-galera-0\" (UID: \"96f4773a-9fa9-41c6-ab4b-54107e66a498\") " pod="openstack/openstack-cell1-galera-0"
Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.238711 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/96f4773a-9fa9-41c6-ab4b-54107e66a498-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"96f4773a-9fa9-41c6-ab4b-54107e66a498\") " pod="openstack/openstack-cell1-galera-0"
Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.238734 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96f4773a-9fa9-41c6-ab4b-54107e66a498-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"96f4773a-9fa9-41c6-ab4b-54107e66a498\") " pod="openstack/openstack-cell1-galera-0"
Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.238770 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/96f4773a-9fa9-41c6-ab4b-54107e66a498-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"96f4773a-9fa9-41c6-ab4b-54107e66a498\") " pod="openstack/openstack-cell1-galera-0"
Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.238795 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f4773a-9fa9-41c6-ab4b-54107e66a498-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"96f4773a-9fa9-41c6-ab4b-54107e66a498\") " pod="openstack/openstack-cell1-galera-0"
\"kubernetes.io/secret/96f4773a-9fa9-41c6-ab4b-54107e66a498-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"96f4773a-9fa9-41c6-ab4b-54107e66a498\") " pod="openstack/openstack-cell1-galera-0" Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.238827 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"96f4773a-9fa9-41c6-ab4b-54107e66a498\") " pod="openstack/openstack-cell1-galera-0" Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.238853 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/96f4773a-9fa9-41c6-ab4b-54107e66a498-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"96f4773a-9fa9-41c6-ab4b-54107e66a498\") " pod="openstack/openstack-cell1-galera-0" Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.340331 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/96f4773a-9fa9-41c6-ab4b-54107e66a498-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"96f4773a-9fa9-41c6-ab4b-54107e66a498\") " pod="openstack/openstack-cell1-galera-0" Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.340390 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc5k4\" (UniqueName: \"kubernetes.io/projected/96f4773a-9fa9-41c6-ab4b-54107e66a498-kube-api-access-gc5k4\") pod \"openstack-cell1-galera-0\" (UID: \"96f4773a-9fa9-41c6-ab4b-54107e66a498\") " pod="openstack/openstack-cell1-galera-0" Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.340460 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/96f4773a-9fa9-41c6-ab4b-54107e66a498-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"96f4773a-9fa9-41c6-ab4b-54107e66a498\") " pod="openstack/openstack-cell1-galera-0" Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.340489 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96f4773a-9fa9-41c6-ab4b-54107e66a498-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"96f4773a-9fa9-41c6-ab4b-54107e66a498\") " pod="openstack/openstack-cell1-galera-0" Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.340526 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/96f4773a-9fa9-41c6-ab4b-54107e66a498-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"96f4773a-9fa9-41c6-ab4b-54107e66a498\") " pod="openstack/openstack-cell1-galera-0" Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.340550 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f4773a-9fa9-41c6-ab4b-54107e66a498-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"96f4773a-9fa9-41c6-ab4b-54107e66a498\") " pod="openstack/openstack-cell1-galera-0" Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.340591 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod 
\"openstack-cell1-galera-0\" (UID: \"96f4773a-9fa9-41c6-ab4b-54107e66a498\") " pod="openstack/openstack-cell1-galera-0" Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.340616 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/96f4773a-9fa9-41c6-ab4b-54107e66a498-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"96f4773a-9fa9-41c6-ab4b-54107e66a498\") " pod="openstack/openstack-cell1-galera-0" Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.341313 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/96f4773a-9fa9-41c6-ab4b-54107e66a498-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"96f4773a-9fa9-41c6-ab4b-54107e66a498\") " pod="openstack/openstack-cell1-galera-0" Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.341459 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/96f4773a-9fa9-41c6-ab4b-54107e66a498-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"96f4773a-9fa9-41c6-ab4b-54107e66a498\") " pod="openstack/openstack-cell1-galera-0" Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.341627 4789 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"96f4773a-9fa9-41c6-ab4b-54107e66a498\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-cell1-galera-0" Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.341892 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/96f4773a-9fa9-41c6-ab4b-54107e66a498-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"96f4773a-9fa9-41c6-ab4b-54107e66a498\") " pod="openstack/openstack-cell1-galera-0" Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.342291 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96f4773a-9fa9-41c6-ab4b-54107e66a498-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"96f4773a-9fa9-41c6-ab4b-54107e66a498\") " pod="openstack/openstack-cell1-galera-0" Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.345947 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/96f4773a-9fa9-41c6-ab4b-54107e66a498-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"96f4773a-9fa9-41c6-ab4b-54107e66a498\") " pod="openstack/openstack-cell1-galera-0" Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.351520 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f4773a-9fa9-41c6-ab4b-54107e66a498-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"96f4773a-9fa9-41c6-ab4b-54107e66a498\") " pod="openstack/openstack-cell1-galera-0" Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.358996 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc5k4\" (UniqueName: \"kubernetes.io/projected/96f4773a-9fa9-41c6-ab4b-54107e66a498-kube-api-access-gc5k4\") pod \"openstack-cell1-galera-0\" (UID: \"96f4773a-9fa9-41c6-ab4b-54107e66a498\") " 
pod="openstack/openstack-cell1-galera-0" Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.374953 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"96f4773a-9fa9-41c6-ab4b-54107e66a498\") " pod="openstack/openstack-cell1-galera-0" Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.383212 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.384352 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.386924 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-5whj6" Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.387110 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.387158 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.397100 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.454884 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.542909 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88qk8\" (UniqueName: \"kubernetes.io/projected/078a8abb-3926-40cd-9340-0bef088c130f-kube-api-access-88qk8\") pod \"memcached-0\" (UID: \"078a8abb-3926-40cd-9340-0bef088c130f\") " pod="openstack/memcached-0" Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.543138 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/078a8abb-3926-40cd-9340-0bef088c130f-kolla-config\") pod \"memcached-0\" (UID: \"078a8abb-3926-40cd-9340-0bef088c130f\") " pod="openstack/memcached-0" Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.543273 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/078a8abb-3926-40cd-9340-0bef088c130f-config-data\") pod \"memcached-0\" (UID: \"078a8abb-3926-40cd-9340-0bef088c130f\") " pod="openstack/memcached-0" Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.543338 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/078a8abb-3926-40cd-9340-0bef088c130f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"078a8abb-3926-40cd-9340-0bef088c130f\") " pod="openstack/memcached-0" Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.543378 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/078a8abb-3926-40cd-9340-0bef088c130f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"078a8abb-3926-40cd-9340-0bef088c130f\") " pod="openstack/memcached-0" Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.645048 4789 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/078a8abb-3926-40cd-9340-0bef088c130f-kolla-config\") pod \"memcached-0\" (UID: \"078a8abb-3926-40cd-9340-0bef088c130f\") " pod="openstack/memcached-0" Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.645159 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/078a8abb-3926-40cd-9340-0bef088c130f-config-data\") pod \"memcached-0\" (UID: \"078a8abb-3926-40cd-9340-0bef088c130f\") " pod="openstack/memcached-0" Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.645238 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/078a8abb-3926-40cd-9340-0bef088c130f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"078a8abb-3926-40cd-9340-0bef088c130f\") " pod="openstack/memcached-0" Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.645288 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/078a8abb-3926-40cd-9340-0bef088c130f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"078a8abb-3926-40cd-9340-0bef088c130f\") " pod="openstack/memcached-0" Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.645382 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88qk8\" (UniqueName: \"kubernetes.io/projected/078a8abb-3926-40cd-9340-0bef088c130f-kube-api-access-88qk8\") pod \"memcached-0\" (UID: \"078a8abb-3926-40cd-9340-0bef088c130f\") " pod="openstack/memcached-0" Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.646969 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/078a8abb-3926-40cd-9340-0bef088c130f-kolla-config\") pod \"memcached-0\" (UID: \"078a8abb-3926-40cd-9340-0bef088c130f\") " pod="openstack/memcached-0" Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.648077 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/078a8abb-3926-40cd-9340-0bef088c130f-config-data\") pod \"memcached-0\" (UID: \"078a8abb-3926-40cd-9340-0bef088c130f\") " pod="openstack/memcached-0" Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.658602 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/078a8abb-3926-40cd-9340-0bef088c130f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"078a8abb-3926-40cd-9340-0bef088c130f\") " pod="openstack/memcached-0" Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.664781 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/078a8abb-3926-40cd-9340-0bef088c130f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"078a8abb-3926-40cd-9340-0bef088c130f\") " pod="openstack/memcached-0" Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.667083 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88qk8\" (UniqueName: \"kubernetes.io/projected/078a8abb-3926-40cd-9340-0bef088c130f-kube-api-access-88qk8\") pod \"memcached-0\" (UID: \"078a8abb-3926-40cd-9340-0bef088c130f\") " pod="openstack/memcached-0" Feb 02 21:37:58 crc kubenswrapper[4789]: I0202 21:37:58.740012 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 02 21:37:59 crc kubenswrapper[4789]: I0202 21:37:59.950810 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 21:37:59 crc kubenswrapper[4789]: I0202 21:37:59.952964 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 21:37:59 crc kubenswrapper[4789]: I0202 21:37:59.956857 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-c4vcx" Feb 02 21:37:59 crc kubenswrapper[4789]: I0202 21:37:59.961147 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 21:38:00 crc kubenswrapper[4789]: I0202 21:38:00.074394 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ccpc\" (UniqueName: \"kubernetes.io/projected/b8a53bc3-3ae7-4358-8574-1adcd8d4fefb-kube-api-access-9ccpc\") pod \"kube-state-metrics-0\" (UID: \"b8a53bc3-3ae7-4358-8574-1adcd8d4fefb\") " pod="openstack/kube-state-metrics-0" Feb 02 21:38:00 crc kubenswrapper[4789]: I0202 21:38:00.175751 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ccpc\" (UniqueName: \"kubernetes.io/projected/b8a53bc3-3ae7-4358-8574-1adcd8d4fefb-kube-api-access-9ccpc\") pod \"kube-state-metrics-0\" (UID: \"b8a53bc3-3ae7-4358-8574-1adcd8d4fefb\") " pod="openstack/kube-state-metrics-0" Feb 02 21:38:00 crc kubenswrapper[4789]: I0202 21:38:00.199519 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ccpc\" (UniqueName: \"kubernetes.io/projected/b8a53bc3-3ae7-4358-8574-1adcd8d4fefb-kube-api-access-9ccpc\") pod \"kube-state-metrics-0\" (UID: \"b8a53bc3-3ae7-4358-8574-1adcd8d4fefb\") " pod="openstack/kube-state-metrics-0" Feb 02 21:38:00 crc kubenswrapper[4789]: I0202 21:38:00.283054 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 21:38:02 crc kubenswrapper[4789]: I0202 21:38:02.804380 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-gjls4"] Feb 02 21:38:02 crc kubenswrapper[4789]: I0202 21:38:02.806212 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gjls4" Feb 02 21:38:02 crc kubenswrapper[4789]: I0202 21:38:02.808800 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 02 21:38:02 crc kubenswrapper[4789]: I0202 21:38:02.808998 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-p2nqd" Feb 02 21:38:02 crc kubenswrapper[4789]: I0202 21:38:02.809844 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 02 21:38:02 crc kubenswrapper[4789]: I0202 21:38:02.814214 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-tjn59"] Feb 02 21:38:02 crc kubenswrapper[4789]: I0202 21:38:02.832635 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-tjn59" Feb 02 21:38:02 crc kubenswrapper[4789]: I0202 21:38:02.835852 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gjls4"] Feb 02 21:38:02 crc kubenswrapper[4789]: I0202 21:38:02.901875 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-tjn59"] Feb 02 21:38:02 crc kubenswrapper[4789]: I0202 21:38:02.933233 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1-var-run\") pod \"ovn-controller-ovs-tjn59\" (UID: \"0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1\") " pod="openstack/ovn-controller-ovs-tjn59" Feb 02 21:38:02 crc kubenswrapper[4789]: I0202 21:38:02.933281 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gjjv\" (UniqueName: \"kubernetes.io/projected/c571c3a8-8470-4076-adde-89416f071937-kube-api-access-7gjjv\") pod \"ovn-controller-gjls4\" (UID: \"c571c3a8-8470-4076-adde-89416f071937\") " pod="openstack/ovn-controller-gjls4" Feb 02 21:38:02 crc kubenswrapper[4789]: I0202 21:38:02.933321 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c571c3a8-8470-4076-adde-89416f071937-var-run\") pod \"ovn-controller-gjls4\" (UID: \"c571c3a8-8470-4076-adde-89416f071937\") " pod="openstack/ovn-controller-gjls4" Feb 02 21:38:02 crc kubenswrapper[4789]: I0202 21:38:02.933341 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c571c3a8-8470-4076-adde-89416f071937-scripts\") pod \"ovn-controller-gjls4\" (UID: \"c571c3a8-8470-4076-adde-89416f071937\") " pod="openstack/ovn-controller-gjls4" Feb 02 21:38:02 crc kubenswrapper[4789]: I0202 21:38:02.933355 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bfsx\" (UniqueName: \"kubernetes.io/projected/0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1-kube-api-access-6bfsx\") pod \"ovn-controller-ovs-tjn59\" (UID: \"0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1\") " pod="openstack/ovn-controller-ovs-tjn59" Feb 02 21:38:02 crc kubenswrapper[4789]: I0202 21:38:02.933375 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1-var-log\") pod \"ovn-controller-ovs-tjn59\" (UID: \"0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1\") " pod="openstack/ovn-controller-ovs-tjn59" Feb 02 21:38:02 crc kubenswrapper[4789]: I0202 21:38:02.933412 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c571c3a8-8470-4076-adde-89416f071937-ovn-controller-tls-certs\") pod \"ovn-controller-gjls4\" (UID: \"c571c3a8-8470-4076-adde-89416f071937\") " pod="openstack/ovn-controller-gjls4" Feb 02 21:38:02 crc kubenswrapper[4789]: I0202 21:38:02.933437 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1-var-lib\") pod \"ovn-controller-ovs-tjn59\" (UID: \"0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1\") " pod="openstack/ovn-controller-ovs-tjn59" Feb 02 21:38:02 crc 
Feb 02 21:38:02 crc kubenswrapper[4789]: I0202 21:38:02.933496 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c571c3a8-8470-4076-adde-89416f071937-var-log-ovn\") pod \"ovn-controller-gjls4\" (UID: \"c571c3a8-8470-4076-adde-89416f071937\") " pod="openstack/ovn-controller-gjls4"
Feb 02 21:38:02 crc kubenswrapper[4789]: I0202 21:38:02.933595 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1-scripts\") pod \"ovn-controller-ovs-tjn59\" (UID: \"0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1\") " pod="openstack/ovn-controller-ovs-tjn59"
Feb 02 21:38:02 crc kubenswrapper[4789]: I0202 21:38:02.933619 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1-etc-ovs\") pod \"ovn-controller-ovs-tjn59\" (UID: \"0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1\") " pod="openstack/ovn-controller-ovs-tjn59"
Feb 02 21:38:02 crc kubenswrapper[4789]: I0202 21:38:02.933638 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c571c3a8-8470-4076-adde-89416f071937-var-run-ovn\") pod \"ovn-controller-gjls4\" (UID: \"c571c3a8-8470-4076-adde-89416f071937\") " pod="openstack/ovn-controller-gjls4"
Feb 02 21:38:02 crc kubenswrapper[4789]: I0202 21:38:02.933656 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c571c3a8-8470-4076-adde-89416f071937-combined-ca-bundle\") pod \"ovn-controller-gjls4\" (UID: \"c571c3a8-8470-4076-adde-89416f071937\") " pod="openstack/ovn-controller-gjls4"
Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.035130 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c571c3a8-8470-4076-adde-89416f071937-scripts\") pod \"ovn-controller-gjls4\" (UID: \"c571c3a8-8470-4076-adde-89416f071937\") " pod="openstack/ovn-controller-gjls4"
Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.035169 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bfsx\" (UniqueName: \"kubernetes.io/projected/0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1-kube-api-access-6bfsx\") pod \"ovn-controller-ovs-tjn59\" (UID: \"0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1\") " pod="openstack/ovn-controller-ovs-tjn59"
Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.035192 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1-var-log\") pod \"ovn-controller-ovs-tjn59\" (UID: \"0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1\") " pod="openstack/ovn-controller-ovs-tjn59"
Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.035223 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c571c3a8-8470-4076-adde-89416f071937-ovn-controller-tls-certs\") pod \"ovn-controller-gjls4\" (UID: \"c571c3a8-8470-4076-adde-89416f071937\") " pod="openstack/ovn-controller-gjls4"
Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.035252 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1-var-lib\") pod \"ovn-controller-ovs-tjn59\" (UID: \"0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1\") " pod="openstack/ovn-controller-ovs-tjn59"
"operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1-var-lib\") pod \"ovn-controller-ovs-tjn59\" (UID: \"0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1\") " pod="openstack/ovn-controller-ovs-tjn59" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.035278 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c571c3a8-8470-4076-adde-89416f071937-var-log-ovn\") pod \"ovn-controller-gjls4\" (UID: \"c571c3a8-8470-4076-adde-89416f071937\") " pod="openstack/ovn-controller-gjls4" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.035307 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1-scripts\") pod \"ovn-controller-ovs-tjn59\" (UID: \"0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1\") " pod="openstack/ovn-controller-ovs-tjn59" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.035328 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1-etc-ovs\") pod \"ovn-controller-ovs-tjn59\" (UID: \"0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1\") " pod="openstack/ovn-controller-ovs-tjn59" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.035346 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c571c3a8-8470-4076-adde-89416f071937-var-run-ovn\") pod \"ovn-controller-gjls4\" (UID: \"c571c3a8-8470-4076-adde-89416f071937\") " pod="openstack/ovn-controller-gjls4" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.035364 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c571c3a8-8470-4076-adde-89416f071937-combined-ca-bundle\") pod \"ovn-controller-gjls4\" (UID: \"c571c3a8-8470-4076-adde-89416f071937\") " pod="openstack/ovn-controller-gjls4" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.035383 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1-var-run\") pod \"ovn-controller-ovs-tjn59\" (UID: \"0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1\") " pod="openstack/ovn-controller-ovs-tjn59" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.035404 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gjjv\" (UniqueName: \"kubernetes.io/projected/c571c3a8-8470-4076-adde-89416f071937-kube-api-access-7gjjv\") pod \"ovn-controller-gjls4\" (UID: \"c571c3a8-8470-4076-adde-89416f071937\") " pod="openstack/ovn-controller-gjls4" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.035440 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c571c3a8-8470-4076-adde-89416f071937-var-run\") pod \"ovn-controller-gjls4\" (UID: \"c571c3a8-8470-4076-adde-89416f071937\") " pod="openstack/ovn-controller-gjls4" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.036005 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1-etc-ovs\") pod \"ovn-controller-ovs-tjn59\" (UID: \"0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1\") " 
pod="openstack/ovn-controller-ovs-tjn59" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.036049 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1-var-run\") pod \"ovn-controller-ovs-tjn59\" (UID: \"0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1\") " pod="openstack/ovn-controller-ovs-tjn59" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.036013 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c571c3a8-8470-4076-adde-89416f071937-var-run\") pod \"ovn-controller-gjls4\" (UID: \"c571c3a8-8470-4076-adde-89416f071937\") " pod="openstack/ovn-controller-gjls4" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.036126 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c571c3a8-8470-4076-adde-89416f071937-var-run-ovn\") pod \"ovn-controller-gjls4\" (UID: \"c571c3a8-8470-4076-adde-89416f071937\") " pod="openstack/ovn-controller-gjls4" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.036418 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1-var-lib\") pod \"ovn-controller-ovs-tjn59\" (UID: \"0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1\") " pod="openstack/ovn-controller-ovs-tjn59" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.036681 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1-var-log\") pod \"ovn-controller-ovs-tjn59\" (UID: \"0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1\") " pod="openstack/ovn-controller-ovs-tjn59" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.036762 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c571c3a8-8470-4076-adde-89416f071937-var-log-ovn\") pod \"ovn-controller-gjls4\" (UID: \"c571c3a8-8470-4076-adde-89416f071937\") " pod="openstack/ovn-controller-gjls4" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.037793 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c571c3a8-8470-4076-adde-89416f071937-scripts\") pod \"ovn-controller-gjls4\" (UID: \"c571c3a8-8470-4076-adde-89416f071937\") " pod="openstack/ovn-controller-gjls4" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.037939 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1-scripts\") pod \"ovn-controller-ovs-tjn59\" (UID: \"0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1\") " pod="openstack/ovn-controller-ovs-tjn59" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.040908 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c571c3a8-8470-4076-adde-89416f071937-ovn-controller-tls-certs\") pod \"ovn-controller-gjls4\" (UID: \"c571c3a8-8470-4076-adde-89416f071937\") " pod="openstack/ovn-controller-gjls4" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.049954 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c571c3a8-8470-4076-adde-89416f071937-combined-ca-bundle\") pod 
\"ovn-controller-gjls4\" (UID: \"c571c3a8-8470-4076-adde-89416f071937\") " pod="openstack/ovn-controller-gjls4" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.050075 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gjjv\" (UniqueName: \"kubernetes.io/projected/c571c3a8-8470-4076-adde-89416f071937-kube-api-access-7gjjv\") pod \"ovn-controller-gjls4\" (UID: \"c571c3a8-8470-4076-adde-89416f071937\") " pod="openstack/ovn-controller-gjls4" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.056838 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bfsx\" (UniqueName: \"kubernetes.io/projected/0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1-kube-api-access-6bfsx\") pod \"ovn-controller-ovs-tjn59\" (UID: \"0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1\") " pod="openstack/ovn-controller-ovs-tjn59" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.182660 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gjls4" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.208058 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-tjn59" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.692347 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.693665 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.697468 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.699383 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.699640 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.702084 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.702264 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.702781 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-pxph5" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.846265 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/01c5293c-f7b0-4141-99a7-e423de507b87-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"01c5293c-f7b0-4141-99a7-e423de507b87\") " pod="openstack/ovsdbserver-nb-0" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.846364 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"01c5293c-f7b0-4141-99a7-e423de507b87\") " pod="openstack/ovsdbserver-nb-0" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.846602 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/01c5293c-f7b0-4141-99a7-e423de507b87-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"01c5293c-f7b0-4141-99a7-e423de507b87\") " pod="openstack/ovsdbserver-nb-0" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.846681 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01c5293c-f7b0-4141-99a7-e423de507b87-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"01c5293c-f7b0-4141-99a7-e423de507b87\") " pod="openstack/ovsdbserver-nb-0" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.846798 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/01c5293c-f7b0-4141-99a7-e423de507b87-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"01c5293c-f7b0-4141-99a7-e423de507b87\") " pod="openstack/ovsdbserver-nb-0" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.846900 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01c5293c-f7b0-4141-99a7-e423de507b87-config\") pod \"ovsdbserver-nb-0\" (UID: \"01c5293c-f7b0-4141-99a7-e423de507b87\") " pod="openstack/ovsdbserver-nb-0" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.847091 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgkdz\" (UniqueName: \"kubernetes.io/projected/01c5293c-f7b0-4141-99a7-e423de507b87-kube-api-access-vgkdz\") pod \"ovsdbserver-nb-0\" (UID: \"01c5293c-f7b0-4141-99a7-e423de507b87\") " pod="openstack/ovsdbserver-nb-0" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.847393 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/01c5293c-f7b0-4141-99a7-e423de507b87-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"01c5293c-f7b0-4141-99a7-e423de507b87\") " pod="openstack/ovsdbserver-nb-0" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.949425 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/01c5293c-f7b0-4141-99a7-e423de507b87-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"01c5293c-f7b0-4141-99a7-e423de507b87\") " pod="openstack/ovsdbserver-nb-0" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.949912 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/01c5293c-f7b0-4141-99a7-e423de507b87-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"01c5293c-f7b0-4141-99a7-e423de507b87\") " pod="openstack/ovsdbserver-nb-0" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.949982 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"01c5293c-f7b0-4141-99a7-e423de507b87\") " pod="openstack/ovsdbserver-nb-0" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.950038 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/01c5293c-f7b0-4141-99a7-e423de507b87-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"01c5293c-f7b0-4141-99a7-e423de507b87\") " 
pod="openstack/ovsdbserver-nb-0" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.950069 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01c5293c-f7b0-4141-99a7-e423de507b87-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"01c5293c-f7b0-4141-99a7-e423de507b87\") " pod="openstack/ovsdbserver-nb-0" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.950114 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/01c5293c-f7b0-4141-99a7-e423de507b87-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"01c5293c-f7b0-4141-99a7-e423de507b87\") " pod="openstack/ovsdbserver-nb-0" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.950393 4789 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"01c5293c-f7b0-4141-99a7-e423de507b87\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-nb-0" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.950562 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/01c5293c-f7b0-4141-99a7-e423de507b87-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"01c5293c-f7b0-4141-99a7-e423de507b87\") " pod="openstack/ovsdbserver-nb-0" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.951285 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01c5293c-f7b0-4141-99a7-e423de507b87-config\") pod \"ovsdbserver-nb-0\" (UID: \"01c5293c-f7b0-4141-99a7-e423de507b87\") " pod="openstack/ovsdbserver-nb-0" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.951343 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/01c5293c-f7b0-4141-99a7-e423de507b87-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"01c5293c-f7b0-4141-99a7-e423de507b87\") " pod="openstack/ovsdbserver-nb-0" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.951444 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgkdz\" (UniqueName: \"kubernetes.io/projected/01c5293c-f7b0-4141-99a7-e423de507b87-kube-api-access-vgkdz\") pod \"ovsdbserver-nb-0\" (UID: \"01c5293c-f7b0-4141-99a7-e423de507b87\") " pod="openstack/ovsdbserver-nb-0" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.951984 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01c5293c-f7b0-4141-99a7-e423de507b87-config\") pod \"ovsdbserver-nb-0\" (UID: \"01c5293c-f7b0-4141-99a7-e423de507b87\") " pod="openstack/ovsdbserver-nb-0" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.954737 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/01c5293c-f7b0-4141-99a7-e423de507b87-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"01c5293c-f7b0-4141-99a7-e423de507b87\") " pod="openstack/ovsdbserver-nb-0" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.957970 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/01c5293c-f7b0-4141-99a7-e423de507b87-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"01c5293c-f7b0-4141-99a7-e423de507b87\") " pod="openstack/ovsdbserver-nb-0" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.969408 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01c5293c-f7b0-4141-99a7-e423de507b87-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"01c5293c-f7b0-4141-99a7-e423de507b87\") " pod="openstack/ovsdbserver-nb-0" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.981105 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgkdz\" (UniqueName: \"kubernetes.io/projected/01c5293c-f7b0-4141-99a7-e423de507b87-kube-api-access-vgkdz\") pod \"ovsdbserver-nb-0\" (UID: \"01c5293c-f7b0-4141-99a7-e423de507b87\") " pod="openstack/ovsdbserver-nb-0" Feb 02 21:38:03 crc kubenswrapper[4789]: I0202 21:38:03.991681 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"01c5293c-f7b0-4141-99a7-e423de507b87\") " pod="openstack/ovsdbserver-nb-0" Feb 02 21:38:04 crc kubenswrapper[4789]: I0202 21:38:04.026185 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 02 21:38:05 crc kubenswrapper[4789]: I0202 21:38:05.574076 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 21:38:06 crc kubenswrapper[4789]: E0202 21:38:06.112722 4789 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 02 21:38:06 crc kubenswrapper[4789]: E0202 21:38:06.113076 4789 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rc88w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-kxd86_openstack(ac4eb40d-807a-43dd-bcc1-ec832d4b5311): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 21:38:06 crc kubenswrapper[4789]: E0202 21:38:06.114347 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-kxd86" podUID="ac4eb40d-807a-43dd-bcc1-ec832d4b5311" Feb 02 21:38:06 crc kubenswrapper[4789]: E0202 21:38:06.116018 4789 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 02 21:38:06 crc kubenswrapper[4789]: E0202 21:38:06.116177 4789 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b28m8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-8gw52_openstack(87ac290e-0323-4c14-baa5-ac62040f0d34): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 02 21:38:06 crc kubenswrapper[4789]: E0202 21:38:06.117253 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-8gw52" podUID="87ac290e-0323-4c14-baa5-ac62040f0d34"
Feb 02 21:38:06 crc kubenswrapper[4789]: I0202 21:38:06.433377 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b4db4b23-dae0-42a5-ad47-3336073d0b6a","Type":"ContainerStarted","Data":"a287094b9dc75aa61117d069b721c13f01fa508d00808b0032962fcb227bddf5"}
Feb 02 21:38:06 crc kubenswrapper[4789]: I0202 21:38:06.889063 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8gw52"
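
[editor's note] The two "Unhandled Error" entries above dump the full init-container spec the kubelet failed to start. Unrolled from the log dump for readability, a minimal Go sketch of that container follows; the fields are transcribed from the entry for dnsmasq-dns-78dd6ddcc-8gw52, the CONFIG_HASH env var is elided, and the use of k8s.io/utils/ptr is our choice, not something in the log.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/utils/ptr"
)

// initContainer mirrors the spec dumped in the ErrImagePull entries above.
func initContainer() corev1.Container {
	return corev1.Container{
		Name:    "init",
		Image:   "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified",
		Command: []string{"/bin/bash"},
		Args: []string{"-c", "dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d" +
			" --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug" +
			" --bind-interfaces --listen-address=$(POD_IP) --port 5353" +
			" --log-facility=- --no-hosts --domain-needed --no-resolv" +
			" --bogus-priv --log-queries --test"},
		Env: []corev1.EnvVar{{
			// POD_IP is injected from the pod's status, as in the logged spec.
			Name: "POD_IP",
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "status.podIP"},
			},
		}},
		VolumeMounts: []corev1.VolumeMount{
			{Name: "config", ReadOnly: true, MountPath: "/etc/dnsmasq.d/config.cfg", SubPath: "dns"},
			{Name: "dns-svc", ReadOnly: true, MountPath: "/etc/dnsmasq.d/hosts/dns-svc", SubPath: "dns-svc"},
		},
		ImagePullPolicy: corev1.PullIfNotPresent,
		SecurityContext: &corev1.SecurityContext{
			Capabilities:             &corev1.Capabilities{Drop: []corev1.Capability{"ALL"}},
			RunAsUser:                ptr.To[int64](1000650000),
			RunAsNonRoot:             ptr.To(true),
			AllowPrivilegeEscalation: ptr.To(false),
			SeccompProfile:           &corev1.SeccompProfile{Type: corev1.SeccompProfileTypeRuntimeDefault},
		},
	}
}

func main() {
	c := initContainer()
	fmt.Printf("%s: %s %v\n", c.Name, c.Image, c.Command)
}
```

The ErrImagePull with "context canceled" is consistent with the SyncLoop DELETE events for both dnsmasq pods moments later: the pull appears to have been aborted mid-copy because the pods were replaced, not rejected by the registry.
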
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-kxd86" Feb 02 21:38:06 crc kubenswrapper[4789]: I0202 21:38:06.964618 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 21:38:06 crc kubenswrapper[4789]: I0202 21:38:06.981570 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-6l85z"] Feb 02 21:38:06 crc kubenswrapper[4789]: I0202 21:38:06.993136 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 21:38:06 crc kubenswrapper[4789]: I0202 21:38:06.994397 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.006782 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rc88w\" (UniqueName: \"kubernetes.io/projected/ac4eb40d-807a-43dd-bcc1-ec832d4b5311-kube-api-access-rc88w\") pod \"ac4eb40d-807a-43dd-bcc1-ec832d4b5311\" (UID: \"ac4eb40d-807a-43dd-bcc1-ec832d4b5311\") " Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.006832 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87ac290e-0323-4c14-baa5-ac62040f0d34-dns-svc\") pod \"87ac290e-0323-4c14-baa5-ac62040f0d34\" (UID: \"87ac290e-0323-4c14-baa5-ac62040f0d34\") " Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.006887 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87ac290e-0323-4c14-baa5-ac62040f0d34-config\") pod \"87ac290e-0323-4c14-baa5-ac62040f0d34\" (UID: \"87ac290e-0323-4c14-baa5-ac62040f0d34\") " Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.006991 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac4eb40d-807a-43dd-bcc1-ec832d4b5311-config\") pod \"ac4eb40d-807a-43dd-bcc1-ec832d4b5311\" (UID: \"ac4eb40d-807a-43dd-bcc1-ec832d4b5311\") " Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.007011 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b28m8\" (UniqueName: \"kubernetes.io/projected/87ac290e-0323-4c14-baa5-ac62040f0d34-kube-api-access-b28m8\") pod \"87ac290e-0323-4c14-baa5-ac62040f0d34\" (UID: \"87ac290e-0323-4c14-baa5-ac62040f0d34\") " Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.008288 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac4eb40d-807a-43dd-bcc1-ec832d4b5311-config" (OuterVolumeSpecName: "config") pod "ac4eb40d-807a-43dd-bcc1-ec832d4b5311" (UID: "ac4eb40d-807a-43dd-bcc1-ec832d4b5311"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.008313 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87ac290e-0323-4c14-baa5-ac62040f0d34-config" (OuterVolumeSpecName: "config") pod "87ac290e-0323-4c14-baa5-ac62040f0d34" (UID: "87ac290e-0323-4c14-baa5-ac62040f0d34"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.009012 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87ac290e-0323-4c14-baa5-ac62040f0d34-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "87ac290e-0323-4c14-baa5-ac62040f0d34" (UID: "87ac290e-0323-4c14-baa5-ac62040f0d34"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.014218 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac4eb40d-807a-43dd-bcc1-ec832d4b5311-kube-api-access-rc88w" (OuterVolumeSpecName: "kube-api-access-rc88w") pod "ac4eb40d-807a-43dd-bcc1-ec832d4b5311" (UID: "ac4eb40d-807a-43dd-bcc1-ec832d4b5311"). InnerVolumeSpecName "kube-api-access-rc88w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.015761 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87ac290e-0323-4c14-baa5-ac62040f0d34-kube-api-access-b28m8" (OuterVolumeSpecName: "kube-api-access-b28m8") pod "87ac290e-0323-4c14-baa5-ac62040f0d34" (UID: "87ac290e-0323-4c14-baa5-ac62040f0d34"). InnerVolumeSpecName "kube-api-access-b28m8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.026349 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.039554 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.056678 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gjls4"] Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.067132 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5lz8j"] Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.109228 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rc88w\" (UniqueName: \"kubernetes.io/projected/ac4eb40d-807a-43dd-bcc1-ec832d4b5311-kube-api-access-rc88w\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.109259 4789 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87ac290e-0323-4c14-baa5-ac62040f0d34-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.109271 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87ac290e-0323-4c14-baa5-ac62040f0d34-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.109280 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac4eb40d-807a-43dd-bcc1-ec832d4b5311-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.109290 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b28m8\" (UniqueName: \"kubernetes.io/projected/87ac290e-0323-4c14-baa5-ac62040f0d34-kube-api-access-b28m8\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.136341 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 02 21:38:07 crc kubenswrapper[4789]: 
W0202 21:38:07.143357 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01c5293c_f7b0_4141_99a7_e423de507b87.slice/crio-4c418005c61aedc2db0e22250a2bd998aa9fbf796a307443b5065e80ae029167 WatchSource:0}: Error finding container 4c418005c61aedc2db0e22250a2bd998aa9fbf796a307443b5065e80ae029167: Status 404 returned error can't find the container with id 4c418005c61aedc2db0e22250a2bd998aa9fbf796a307443b5065e80ae029167 Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.444323 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-8gw52" event={"ID":"87ac290e-0323-4c14-baa5-ac62040f0d34","Type":"ContainerDied","Data":"3ae8222c263c0a276a90af12925b14f2545632ff3d8ab355cc437809a82b7afb"} Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.444387 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8gw52" Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.447123 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5lz8j" event={"ID":"1ffc8320-6bb1-4763-a98e-5c86314a4ec4","Type":"ContainerStarted","Data":"566d4f69f0cc5ce3f967814aed8c359f88bb748b25f66513bdbf8dd79ef2cf5a"} Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.448904 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b8917d54-451e-4a56-9e8a-142bb5db17e1","Type":"ContainerStarted","Data":"95c4b796e6336984d0dc820412ea4efe958320e94fbcf078a04bb239e144172f"} Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.449796 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"96f4773a-9fa9-41c6-ab4b-54107e66a498","Type":"ContainerStarted","Data":"ee8a86a649f3594509c94cf0c417ef1bf0bbd58671003431b12c9dd6d4093172"} Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.450722 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"01c5293c-f7b0-4141-99a7-e423de507b87","Type":"ContainerStarted","Data":"4c418005c61aedc2db0e22250a2bd998aa9fbf796a307443b5065e80ae029167"} Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.451799 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b8a53bc3-3ae7-4358-8574-1adcd8d4fefb","Type":"ContainerStarted","Data":"2345d36d16828447040cfec3a598737da8e60528ef54f41721c3920a078bff47"} Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.452786 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a77ac0de-f396-45e6-a92c-07fbddc4ec60","Type":"ContainerStarted","Data":"1cee6c445449104e880b0bc100b90f19a2b6fa5905bdf43dce3fa8461974d7df"} Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.454879 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"078a8abb-3926-40cd-9340-0bef088c130f","Type":"ContainerStarted","Data":"8741deca8d199e7e6afa1c514df0eb85db0cc454bbb5594b20937541b22505ce"} Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.455761 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-6l85z" event={"ID":"dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db","Type":"ContainerStarted","Data":"c3fdab3ce3a382512ea0621e13519149cb0b2a60d55a125f5ccaae6333455bd2"} Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.456621 4789 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gjls4" event={"ID":"c571c3a8-8470-4076-adde-89416f071937","Type":"ContainerStarted","Data":"8f7daf988c11c33d2d922ade395ba0c5f550739635868994a6a4c7810c35fbcd"} Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.457416 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-kxd86" event={"ID":"ac4eb40d-807a-43dd-bcc1-ec832d4b5311","Type":"ContainerDied","Data":"00f2bd6141425108b6e1f38eee4d19f1196ee1668e4647b90d103b30d6400831"} Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.457473 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-kxd86" Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.533088 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8gw52"] Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.538689 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8gw52"] Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.551092 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kxd86"] Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.574513 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kxd86"] Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.896059 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.897789 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.899666 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.899922 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-ww6qm" Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.899959 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.900244 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 02 21:38:07 crc kubenswrapper[4789]: I0202 21:38:07.904020 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.032338 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c55d3f19-edf8-4cff-ab70-495607e77798-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c55d3f19-edf8-4cff-ab70-495607e77798\") " pod="openstack/ovsdbserver-sb-0" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.032414 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c55d3f19-edf8-4cff-ab70-495607e77798-config\") pod \"ovsdbserver-sb-0\" (UID: \"c55d3f19-edf8-4cff-ab70-495607e77798\") " pod="openstack/ovsdbserver-sb-0" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.032433 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c55d3f19-edf8-4cff-ab70-495607e77798-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c55d3f19-edf8-4cff-ab70-495607e77798\") " pod="openstack/ovsdbserver-sb-0" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.032480 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p69km\" (UniqueName: \"kubernetes.io/projected/c55d3f19-edf8-4cff-ab70-495607e77798-kube-api-access-p69km\") pod \"ovsdbserver-sb-0\" (UID: \"c55d3f19-edf8-4cff-ab70-495607e77798\") " pod="openstack/ovsdbserver-sb-0" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.032521 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c55d3f19-edf8-4cff-ab70-495607e77798-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c55d3f19-edf8-4cff-ab70-495607e77798\") " pod="openstack/ovsdbserver-sb-0" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.032537 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c55d3f19-edf8-4cff-ab70-495607e77798\") " pod="openstack/ovsdbserver-sb-0" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.032925 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c55d3f19-edf8-4cff-ab70-495607e77798-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c55d3f19-edf8-4cff-ab70-495607e77798\") " pod="openstack/ovsdbserver-sb-0" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.033140 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c55d3f19-edf8-4cff-ab70-495607e77798-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c55d3f19-edf8-4cff-ab70-495607e77798\") " pod="openstack/ovsdbserver-sb-0" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.135333 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c55d3f19-edf8-4cff-ab70-495607e77798-config\") pod \"ovsdbserver-sb-0\" (UID: \"c55d3f19-edf8-4cff-ab70-495607e77798\") " pod="openstack/ovsdbserver-sb-0" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.135384 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c55d3f19-edf8-4cff-ab70-495607e77798-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c55d3f19-edf8-4cff-ab70-495607e77798\") " pod="openstack/ovsdbserver-sb-0" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.135435 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p69km\" (UniqueName: \"kubernetes.io/projected/c55d3f19-edf8-4cff-ab70-495607e77798-kube-api-access-p69km\") pod \"ovsdbserver-sb-0\" (UID: \"c55d3f19-edf8-4cff-ab70-495607e77798\") " pod="openstack/ovsdbserver-sb-0" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.135455 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c55d3f19-edf8-4cff-ab70-495607e77798-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c55d3f19-edf8-4cff-ab70-495607e77798\") " pod="openstack/ovsdbserver-sb-0" Feb 02 
21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.135472 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c55d3f19-edf8-4cff-ab70-495607e77798\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.135490 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c55d3f19-edf8-4cff-ab70-495607e77798-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c55d3f19-edf8-4cff-ab70-495607e77798\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.135534 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c55d3f19-edf8-4cff-ab70-495607e77798-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c55d3f19-edf8-4cff-ab70-495607e77798\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.135561 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c55d3f19-edf8-4cff-ab70-495607e77798-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c55d3f19-edf8-4cff-ab70-495607e77798\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.148931 4789 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c55d3f19-edf8-4cff-ab70-495607e77798\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-sb-0"
Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.158773 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c55d3f19-edf8-4cff-ab70-495607e77798-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c55d3f19-edf8-4cff-ab70-495607e77798\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.160724 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-tjn59"]
Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.169243 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c55d3f19-edf8-4cff-ab70-495607e77798-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c55d3f19-edf8-4cff-ab70-495607e77798\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.169455 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c55d3f19-edf8-4cff-ab70-495607e77798-config\") pod \"ovsdbserver-sb-0\" (UID: \"c55d3f19-edf8-4cff-ab70-495607e77798\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.194165 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c55d3f19-edf8-4cff-ab70-495607e77798-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c55d3f19-edf8-4cff-ab70-495607e77798\") " pod="openstack/ovsdbserver-sb-0"
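
[editor's note] At this point the ovsdbserver-sb-0 volumes have gone through the reconciler's full sequence visible above: VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.MountDevice for the local PV (mounted at /mnt/openstack/pv02), then MountVolume.SetUp per volume. As a cross-check from outside the node, here is a small client-go sketch, our own illustration rather than anything in the log, that lists the same pod's declared volumes; it assumes a reachable kubeconfig at the default path.

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes ~/.kube/config points at the CRC cluster seen in this log.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod, err := cs.CoreV1().Pods("openstack").Get(context.Background(), "ovsdbserver-sb-0", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// One line per declared volume, mirroring the reconciler_common entries above.
	for _, v := range pod.Spec.Volumes {
		fmt.Printf("volume %q: %+v\n", v.Name, v.VolumeSource)
	}
}
```
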
Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.195386 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c55d3f19-edf8-4cff-ab70-495607e77798-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c55d3f19-edf8-4cff-ab70-495607e77798\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.204664 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c55d3f19-edf8-4cff-ab70-495607e77798-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c55d3f19-edf8-4cff-ab70-495607e77798\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.222256 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p69km\" (UniqueName: \"kubernetes.io/projected/c55d3f19-edf8-4cff-ab70-495607e77798-kube-api-access-p69km\") pod \"ovsdbserver-sb-0\" (UID: \"c55d3f19-edf8-4cff-ab70-495607e77798\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.230860 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c55d3f19-edf8-4cff-ab70-495607e77798\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.244077 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.306189 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-d5hwz"]
Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.307274 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-d5hwz"
Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.310787 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.328338 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-d5hwz"]
Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.440847 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87ac290e-0323-4c14-baa5-ac62040f0d34" path="/var/lib/kubelet/pods/87ac290e-0323-4c14-baa5-ac62040f0d34/volumes"
Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.443272 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d4127fa0-de5d-43ce-b257-46b80eecd670-ovs-rundir\") pod \"ovn-controller-metrics-d5hwz\" (UID: \"d4127fa0-de5d-43ce-b257-46b80eecd670\") " pod="openstack/ovn-controller-metrics-d5hwz"
Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.443365 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4127fa0-de5d-43ce-b257-46b80eecd670-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-d5hwz\" (UID: \"d4127fa0-de5d-43ce-b257-46b80eecd670\") " pod="openstack/ovn-controller-metrics-d5hwz"
Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.443392 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4127fa0-de5d-43ce-b257-46b80eecd670-config\") pod \"ovn-controller-metrics-d5hwz\" (UID: 
\"d4127fa0-de5d-43ce-b257-46b80eecd670\") " pod="openstack/ovn-controller-metrics-d5hwz" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.443431 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d4127fa0-de5d-43ce-b257-46b80eecd670-ovn-rundir\") pod \"ovn-controller-metrics-d5hwz\" (UID: \"d4127fa0-de5d-43ce-b257-46b80eecd670\") " pod="openstack/ovn-controller-metrics-d5hwz" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.443463 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4127fa0-de5d-43ce-b257-46b80eecd670-combined-ca-bundle\") pod \"ovn-controller-metrics-d5hwz\" (UID: \"d4127fa0-de5d-43ce-b257-46b80eecd670\") " pod="openstack/ovn-controller-metrics-d5hwz" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.443477 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txqhh\" (UniqueName: \"kubernetes.io/projected/d4127fa0-de5d-43ce-b257-46b80eecd670-kube-api-access-txqhh\") pod \"ovn-controller-metrics-d5hwz\" (UID: \"d4127fa0-de5d-43ce-b257-46b80eecd670\") " pod="openstack/ovn-controller-metrics-d5hwz" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.446311 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac4eb40d-807a-43dd-bcc1-ec832d4b5311" path="/var/lib/kubelet/pods/ac4eb40d-807a-43dd-bcc1-ec832d4b5311/volumes" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.446793 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-6l85z"] Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.453356 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-l6jmk"] Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.455431 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-l6jmk" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.495289 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.510387 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-l6jmk"] Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.510805 4789 generic.go:334] "Generic (PLEG): container finished" podID="1ffc8320-6bb1-4763-a98e-5c86314a4ec4" containerID="2751a6c6a42cae5b3e5b9efa0d416bad094639bf82efb03b8fd2e3d395ffb1d3" exitCode=0 Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.510852 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5lz8j" event={"ID":"1ffc8320-6bb1-4763-a98e-5c86314a4ec4","Type":"ContainerDied","Data":"2751a6c6a42cae5b3e5b9efa0d416bad094639bf82efb03b8fd2e3d395ffb1d3"} Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.544474 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d4127fa0-de5d-43ce-b257-46b80eecd670-ovn-rundir\") pod \"ovn-controller-metrics-d5hwz\" (UID: \"d4127fa0-de5d-43ce-b257-46b80eecd670\") " pod="openstack/ovn-controller-metrics-d5hwz" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.544536 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4127fa0-de5d-43ce-b257-46b80eecd670-combined-ca-bundle\") pod \"ovn-controller-metrics-d5hwz\" (UID: \"d4127fa0-de5d-43ce-b257-46b80eecd670\") " pod="openstack/ovn-controller-metrics-d5hwz" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.544553 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txqhh\" (UniqueName: \"kubernetes.io/projected/d4127fa0-de5d-43ce-b257-46b80eecd670-kube-api-access-txqhh\") pod \"ovn-controller-metrics-d5hwz\" (UID: \"d4127fa0-de5d-43ce-b257-46b80eecd670\") " pod="openstack/ovn-controller-metrics-d5hwz" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.544591 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd6df5c8-3899-431d-b9cd-9a9f022160d7-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-l6jmk\" (UID: \"cd6df5c8-3899-431d-b9cd-9a9f022160d7\") " pod="openstack/dnsmasq-dns-7fd796d7df-l6jmk" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.544627 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d4127fa0-de5d-43ce-b257-46b80eecd670-ovs-rundir\") pod \"ovn-controller-metrics-d5hwz\" (UID: \"d4127fa0-de5d-43ce-b257-46b80eecd670\") " pod="openstack/ovn-controller-metrics-d5hwz" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.544659 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhztm\" (UniqueName: \"kubernetes.io/projected/cd6df5c8-3899-431d-b9cd-9a9f022160d7-kube-api-access-dhztm\") pod \"dnsmasq-dns-7fd796d7df-l6jmk\" (UID: \"cd6df5c8-3899-431d-b9cd-9a9f022160d7\") " pod="openstack/dnsmasq-dns-7fd796d7df-l6jmk" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.544718 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/cd6df5c8-3899-431d-b9cd-9a9f022160d7-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-l6jmk\" (UID: \"cd6df5c8-3899-431d-b9cd-9a9f022160d7\") " pod="openstack/dnsmasq-dns-7fd796d7df-l6jmk" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.544736 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4127fa0-de5d-43ce-b257-46b80eecd670-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-d5hwz\" (UID: \"d4127fa0-de5d-43ce-b257-46b80eecd670\") " pod="openstack/ovn-controller-metrics-d5hwz" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.544754 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4127fa0-de5d-43ce-b257-46b80eecd670-config\") pod \"ovn-controller-metrics-d5hwz\" (UID: \"d4127fa0-de5d-43ce-b257-46b80eecd670\") " pod="openstack/ovn-controller-metrics-d5hwz" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.544773 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd6df5c8-3899-431d-b9cd-9a9f022160d7-config\") pod \"dnsmasq-dns-7fd796d7df-l6jmk\" (UID: \"cd6df5c8-3899-431d-b9cd-9a9f022160d7\") " pod="openstack/dnsmasq-dns-7fd796d7df-l6jmk" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.544885 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d4127fa0-de5d-43ce-b257-46b80eecd670-ovn-rundir\") pod \"ovn-controller-metrics-d5hwz\" (UID: \"d4127fa0-de5d-43ce-b257-46b80eecd670\") " pod="openstack/ovn-controller-metrics-d5hwz" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.544902 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d4127fa0-de5d-43ce-b257-46b80eecd670-ovs-rundir\") pod \"ovn-controller-metrics-d5hwz\" (UID: \"d4127fa0-de5d-43ce-b257-46b80eecd670\") " pod="openstack/ovn-controller-metrics-d5hwz" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.546526 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4127fa0-de5d-43ce-b257-46b80eecd670-config\") pod \"ovn-controller-metrics-d5hwz\" (UID: \"d4127fa0-de5d-43ce-b257-46b80eecd670\") " pod="openstack/ovn-controller-metrics-d5hwz" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.549257 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4127fa0-de5d-43ce-b257-46b80eecd670-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-d5hwz\" (UID: \"d4127fa0-de5d-43ce-b257-46b80eecd670\") " pod="openstack/ovn-controller-metrics-d5hwz" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.549345 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4127fa0-de5d-43ce-b257-46b80eecd670-combined-ca-bundle\") pod \"ovn-controller-metrics-d5hwz\" (UID: \"d4127fa0-de5d-43ce-b257-46b80eecd670\") " pod="openstack/ovn-controller-metrics-d5hwz" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.558936 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txqhh\" (UniqueName: \"kubernetes.io/projected/d4127fa0-de5d-43ce-b257-46b80eecd670-kube-api-access-txqhh\") pod 
\"ovn-controller-metrics-d5hwz\" (UID: \"d4127fa0-de5d-43ce-b257-46b80eecd670\") " pod="openstack/ovn-controller-metrics-d5hwz" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.646257 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd6df5c8-3899-431d-b9cd-9a9f022160d7-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-l6jmk\" (UID: \"cd6df5c8-3899-431d-b9cd-9a9f022160d7\") " pod="openstack/dnsmasq-dns-7fd796d7df-l6jmk" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.646310 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd6df5c8-3899-431d-b9cd-9a9f022160d7-config\") pod \"dnsmasq-dns-7fd796d7df-l6jmk\" (UID: \"cd6df5c8-3899-431d-b9cd-9a9f022160d7\") " pod="openstack/dnsmasq-dns-7fd796d7df-l6jmk" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.646381 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd6df5c8-3899-431d-b9cd-9a9f022160d7-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-l6jmk\" (UID: \"cd6df5c8-3899-431d-b9cd-9a9f022160d7\") " pod="openstack/dnsmasq-dns-7fd796d7df-l6jmk" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.646438 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhztm\" (UniqueName: \"kubernetes.io/projected/cd6df5c8-3899-431d-b9cd-9a9f022160d7-kube-api-access-dhztm\") pod \"dnsmasq-dns-7fd796d7df-l6jmk\" (UID: \"cd6df5c8-3899-431d-b9cd-9a9f022160d7\") " pod="openstack/dnsmasq-dns-7fd796d7df-l6jmk" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.647508 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd6df5c8-3899-431d-b9cd-9a9f022160d7-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-l6jmk\" (UID: \"cd6df5c8-3899-431d-b9cd-9a9f022160d7\") " pod="openstack/dnsmasq-dns-7fd796d7df-l6jmk" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.647562 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd6df5c8-3899-431d-b9cd-9a9f022160d7-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-l6jmk\" (UID: \"cd6df5c8-3899-431d-b9cd-9a9f022160d7\") " pod="openstack/dnsmasq-dns-7fd796d7df-l6jmk" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.647635 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd6df5c8-3899-431d-b9cd-9a9f022160d7-config\") pod \"dnsmasq-dns-7fd796d7df-l6jmk\" (UID: \"cd6df5c8-3899-431d-b9cd-9a9f022160d7\") " pod="openstack/dnsmasq-dns-7fd796d7df-l6jmk" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.663488 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhztm\" (UniqueName: \"kubernetes.io/projected/cd6df5c8-3899-431d-b9cd-9a9f022160d7-kube-api-access-dhztm\") pod \"dnsmasq-dns-7fd796d7df-l6jmk\" (UID: \"cd6df5c8-3899-431d-b9cd-9a9f022160d7\") " pod="openstack/dnsmasq-dns-7fd796d7df-l6jmk" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.669255 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-d5hwz" Feb 02 21:38:08 crc kubenswrapper[4789]: I0202 21:38:08.811159 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-l6jmk" Feb 02 21:38:10 crc kubenswrapper[4789]: I0202 21:38:10.540872 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tjn59" event={"ID":"0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1","Type":"ContainerStarted","Data":"3d505979889fe8acec9d3d5c9c46cb8caf806d34a49a9d462b4a21176917521b"} Feb 02 21:38:15 crc kubenswrapper[4789]: I0202 21:38:15.070653 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 02 21:38:16 crc kubenswrapper[4789]: I0202 21:38:16.600229 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-l6jmk"] Feb 02 21:38:16 crc kubenswrapper[4789]: I0202 21:38:16.600892 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c55d3f19-edf8-4cff-ab70-495607e77798","Type":"ContainerStarted","Data":"7ec1a310034257a2df09caaa21c724be8dc3217fad77778a34bb55891dbc4ebc"} Feb 02 21:38:16 crc kubenswrapper[4789]: I0202 21:38:16.675840 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-d5hwz"] Feb 02 21:38:17 crc kubenswrapper[4789]: I0202 21:38:17.616815 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"078a8abb-3926-40cd-9340-0bef088c130f","Type":"ContainerStarted","Data":"3d1acdaf38b8f90e2888fd9bb9d6b2a8fab388dd54ec79c7218017d80c8b5670"} Feb 02 21:38:17 crc kubenswrapper[4789]: I0202 21:38:17.617475 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 02 21:38:17 crc kubenswrapper[4789]: I0202 21:38:17.623627 4789 generic.go:334] "Generic (PLEG): container finished" podID="dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db" containerID="d2ea15b72298c95470503f1dc3e85a13cc35593fcb47dfe79a1f40a11ae5a0cd" exitCode=0 Feb 02 21:38:17 crc kubenswrapper[4789]: I0202 21:38:17.623724 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-6l85z" event={"ID":"dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db","Type":"ContainerDied","Data":"d2ea15b72298c95470503f1dc3e85a13cc35593fcb47dfe79a1f40a11ae5a0cd"} Feb 02 21:38:17 crc kubenswrapper[4789]: I0202 21:38:17.627683 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5lz8j" event={"ID":"1ffc8320-6bb1-4763-a98e-5c86314a4ec4","Type":"ContainerStarted","Data":"32439174093a0775ff5b8156a0fdfc41e749a642fd124582b2a567d78a33e5d9"} Feb 02 21:38:17 crc kubenswrapper[4789]: I0202 21:38:17.627756 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-5lz8j" Feb 02 21:38:17 crc kubenswrapper[4789]: I0202 21:38:17.629736 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-l6jmk" event={"ID":"cd6df5c8-3899-431d-b9cd-9a9f022160d7","Type":"ContainerStarted","Data":"5b8a0a648dfe5ab0b3be7b0a5e314206ca54e9e33f9531f55c75534ba908378b"} Feb 02 21:38:17 crc kubenswrapper[4789]: I0202 21:38:17.636941 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"96f4773a-9fa9-41c6-ab4b-54107e66a498","Type":"ContainerStarted","Data":"a08b45f3dfbed710991b90377397df02266bd543ec0be74a9a29feca9df69385"} Feb 02 21:38:17 crc kubenswrapper[4789]: I0202 21:38:17.652749 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tjn59" 
event={"ID":"0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1","Type":"ContainerStarted","Data":"18af5228ac54d94f667c107fc2d86d65f1501a6cb0e8343a4e14781b1065e8a2"} Feb 02 21:38:17 crc kubenswrapper[4789]: I0202 21:38:17.654943 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a77ac0de-f396-45e6-a92c-07fbddc4ec60","Type":"ContainerStarted","Data":"37070194a254abfa3aad802e5fbe6112834841dccb39ba3bd770d2c932dfbb36"} Feb 02 21:38:17 crc kubenswrapper[4789]: I0202 21:38:17.658643 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=10.629187619 podStartE2EDuration="19.658628456s" podCreationTimestamp="2026-02-02 21:37:58 +0000 UTC" firstStartedPulling="2026-02-02 21:38:07.03251825 +0000 UTC m=+1107.327543269" lastFinishedPulling="2026-02-02 21:38:16.061959077 +0000 UTC m=+1116.356984106" observedRunningTime="2026-02-02 21:38:17.636841021 +0000 UTC m=+1117.931866060" watchObservedRunningTime="2026-02-02 21:38:17.658628456 +0000 UTC m=+1117.953653475" Feb 02 21:38:17 crc kubenswrapper[4789]: I0202 21:38:17.670905 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-d5hwz" event={"ID":"d4127fa0-de5d-43ce-b257-46b80eecd670","Type":"ContainerStarted","Data":"ba76fd0a849263c8e1e0f434c7ff1decb2eab75f40ada85cac273132cc37afbb"} Feb 02 21:38:17 crc kubenswrapper[4789]: I0202 21:38:17.722008 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-5lz8j" podStartSLOduration=23.243838038 podStartE2EDuration="23.721969194s" podCreationTimestamp="2026-02-02 21:37:54 +0000 UTC" firstStartedPulling="2026-02-02 21:38:07.084116917 +0000 UTC m=+1107.379141936" lastFinishedPulling="2026-02-02 21:38:07.562248073 +0000 UTC m=+1107.857273092" observedRunningTime="2026-02-02 21:38:17.69810348 +0000 UTC m=+1117.993128499" watchObservedRunningTime="2026-02-02 21:38:17.721969194 +0000 UTC m=+1118.016994213" Feb 02 21:38:17 crc kubenswrapper[4789]: I0202 21:38:17.962241 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-6l85z" Feb 02 21:38:18 crc kubenswrapper[4789]: I0202 21:38:18.037977 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bj4hb\" (UniqueName: \"kubernetes.io/projected/dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db-kube-api-access-bj4hb\") pod \"dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db\" (UID: \"dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db\") " Feb 02 21:38:18 crc kubenswrapper[4789]: I0202 21:38:18.038035 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db-config\") pod \"dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db\" (UID: \"dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db\") " Feb 02 21:38:18 crc kubenswrapper[4789]: I0202 21:38:18.038083 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db-dns-svc\") pod \"dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db\" (UID: \"dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db\") " Feb 02 21:38:18 crc kubenswrapper[4789]: I0202 21:38:18.043185 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db-kube-api-access-bj4hb" (OuterVolumeSpecName: "kube-api-access-bj4hb") pod "dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db" (UID: "dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db"). InnerVolumeSpecName "kube-api-access-bj4hb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:38:18 crc kubenswrapper[4789]: I0202 21:38:18.098887 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db-config" (OuterVolumeSpecName: "config") pod "dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db" (UID: "dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:38:18 crc kubenswrapper[4789]: I0202 21:38:18.098881 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db" (UID: "dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:38:18 crc kubenswrapper[4789]: I0202 21:38:18.139809 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bj4hb\" (UniqueName: \"kubernetes.io/projected/dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db-kube-api-access-bj4hb\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:18 crc kubenswrapper[4789]: I0202 21:38:18.139844 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:18 crc kubenswrapper[4789]: I0202 21:38:18.139855 4789 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:18 crc kubenswrapper[4789]: I0202 21:38:18.683700 4789 generic.go:334] "Generic (PLEG): container finished" podID="cd6df5c8-3899-431d-b9cd-9a9f022160d7" containerID="773a012e1fafa159bed9a71915769368717ee71dbe4a3b3579a0e01fdac72586" exitCode=0 Feb 02 21:38:18 crc kubenswrapper[4789]: I0202 21:38:18.683781 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-l6jmk" event={"ID":"cd6df5c8-3899-431d-b9cd-9a9f022160d7","Type":"ContainerDied","Data":"773a012e1fafa159bed9a71915769368717ee71dbe4a3b3579a0e01fdac72586"} Feb 02 21:38:18 crc kubenswrapper[4789]: I0202 21:38:18.687191 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"01c5293c-f7b0-4141-99a7-e423de507b87","Type":"ContainerStarted","Data":"ecfa06e359801169bdd06bd88548fc6c7999a73aea8eb2d73c459b8201ac6223"} Feb 02 21:38:18 crc kubenswrapper[4789]: I0202 21:38:18.688992 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c55d3f19-edf8-4cff-ab70-495607e77798","Type":"ContainerStarted","Data":"7cf11c42fa6eee3581592e7cf6d8ad9c5bdb09ef4d82cebd87fe73a6989bc478"} Feb 02 21:38:18 crc kubenswrapper[4789]: I0202 21:38:18.691298 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-6l85z" Feb 02 21:38:18 crc kubenswrapper[4789]: I0202 21:38:18.691309 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-6l85z" event={"ID":"dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db","Type":"ContainerDied","Data":"c3fdab3ce3a382512ea0621e13519149cb0b2a60d55a125f5ccaae6333455bd2"} Feb 02 21:38:18 crc kubenswrapper[4789]: I0202 21:38:18.691369 4789 scope.go:117] "RemoveContainer" containerID="d2ea15b72298c95470503f1dc3e85a13cc35593fcb47dfe79a1f40a11ae5a0cd" Feb 02 21:38:18 crc kubenswrapper[4789]: I0202 21:38:18.693334 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b4db4b23-dae0-42a5-ad47-3336073d0b6a","Type":"ContainerStarted","Data":"b73f21ef1c3cee1aa5a9891e737707de6085ae08c57a737e8bf2cb9c0bd4154c"} Feb 02 21:38:18 crc kubenswrapper[4789]: I0202 21:38:18.698341 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b8917d54-451e-4a56-9e8a-142bb5db17e1","Type":"ContainerStarted","Data":"1e002b2adadc7aa45b24e8a9b6b844784752243592f8b12913aa7c87780c5192"} Feb 02 21:38:18 crc kubenswrapper[4789]: I0202 21:38:18.706830 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gjls4" event={"ID":"c571c3a8-8470-4076-adde-89416f071937","Type":"ContainerStarted","Data":"d17231a26fe8d830c193a7bd2b6abb5889e5164f3e5069db7e214257ec7141a8"} Feb 02 21:38:18 crc kubenswrapper[4789]: I0202 21:38:18.706922 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-gjls4" Feb 02 21:38:18 crc kubenswrapper[4789]: I0202 21:38:18.708311 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b8a53bc3-3ae7-4358-8574-1adcd8d4fefb","Type":"ContainerStarted","Data":"fa090ae6cb37c6c6300df62319102a788932fa9ed0df451d265faa5feeafcb5f"} Feb 02 21:38:18 crc kubenswrapper[4789]: I0202 21:38:18.708405 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 02 21:38:18 crc kubenswrapper[4789]: I0202 21:38:18.710430 4789 generic.go:334] "Generic (PLEG): container finished" podID="0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1" containerID="18af5228ac54d94f667c107fc2d86d65f1501a6cb0e8343a4e14781b1065e8a2" exitCode=0 Feb 02 21:38:18 crc kubenswrapper[4789]: I0202 21:38:18.710475 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tjn59" event={"ID":"0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1","Type":"ContainerDied","Data":"18af5228ac54d94f667c107fc2d86d65f1501a6cb0e8343a4e14781b1065e8a2"} Feb 02 21:38:18 crc kubenswrapper[4789]: I0202 21:38:18.810564 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-6l85z"] Feb 02 21:38:18 crc kubenswrapper[4789]: I0202 21:38:18.821490 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-6l85z"] Feb 02 21:38:18 crc kubenswrapper[4789]: I0202 21:38:18.822263 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=9.424711937 podStartE2EDuration="19.82224782s" podCreationTimestamp="2026-02-02 21:37:59 +0000 UTC" firstStartedPulling="2026-02-02 21:38:06.976460408 +0000 UTC m=+1107.271485427" lastFinishedPulling="2026-02-02 21:38:17.373996291 +0000 UTC m=+1117.669021310" observedRunningTime="2026-02-02 21:38:18.812244957 +0000 UTC m=+1119.107270026" 
watchObservedRunningTime="2026-02-02 21:38:18.82224782 +0000 UTC m=+1119.117272839" Feb 02 21:38:18 crc kubenswrapper[4789]: I0202 21:38:18.860714 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-gjls4" podStartSLOduration=7.775834515 podStartE2EDuration="16.860695425s" podCreationTimestamp="2026-02-02 21:38:02 +0000 UTC" firstStartedPulling="2026-02-02 21:38:07.068460055 +0000 UTC m=+1107.363485074" lastFinishedPulling="2026-02-02 21:38:16.153320965 +0000 UTC m=+1116.448345984" observedRunningTime="2026-02-02 21:38:18.855714524 +0000 UTC m=+1119.150739563" watchObservedRunningTime="2026-02-02 21:38:18.860695425 +0000 UTC m=+1119.155720444" Feb 02 21:38:19 crc kubenswrapper[4789]: I0202 21:38:19.723369 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c55d3f19-edf8-4cff-ab70-495607e77798","Type":"ContainerStarted","Data":"e01772d808decb3380bc4d332c0752aeaf67cb8f5d0c5c9b2c8ae0ab15d89550"} Feb 02 21:38:19 crc kubenswrapper[4789]: I0202 21:38:19.727742 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-l6jmk" event={"ID":"cd6df5c8-3899-431d-b9cd-9a9f022160d7","Type":"ContainerStarted","Data":"bf8bfb384c1266daf0508adff005c3a21ceb0dbea842b6f707f271a6f9ddf49f"} Feb 02 21:38:19 crc kubenswrapper[4789]: I0202 21:38:19.727906 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-l6jmk" Feb 02 21:38:19 crc kubenswrapper[4789]: I0202 21:38:19.729494 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"01c5293c-f7b0-4141-99a7-e423de507b87","Type":"ContainerStarted","Data":"c302d40717f0c425b6e65f87b401026a5061ab6e38b1f75577a83208d8771c00"} Feb 02 21:38:19 crc kubenswrapper[4789]: I0202 21:38:19.733166 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tjn59" event={"ID":"0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1","Type":"ContainerStarted","Data":"17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10"} Feb 02 21:38:19 crc kubenswrapper[4789]: I0202 21:38:19.734794 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-d5hwz" event={"ID":"d4127fa0-de5d-43ce-b257-46b80eecd670","Type":"ContainerStarted","Data":"d4af60a82d31c25419cd380401fc674cf2e82d663ec23e3513afa5060752b0ed"} Feb 02 21:38:19 crc kubenswrapper[4789]: I0202 21:38:19.760377 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=10.454278831 podStartE2EDuration="13.760304359s" podCreationTimestamp="2026-02-02 21:38:06 +0000 UTC" firstStartedPulling="2026-02-02 21:38:15.890917549 +0000 UTC m=+1116.185942598" lastFinishedPulling="2026-02-02 21:38:19.196943077 +0000 UTC m=+1119.491968126" observedRunningTime="2026-02-02 21:38:19.751776108 +0000 UTC m=+1120.046801157" watchObservedRunningTime="2026-02-02 21:38:19.760304359 +0000 UTC m=+1120.055329418" Feb 02 21:38:19 crc kubenswrapper[4789]: I0202 21:38:19.810539 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-d5hwz" podStartSLOduration=9.885455089 podStartE2EDuration="11.810500666s" podCreationTimestamp="2026-02-02 21:38:08 +0000 UTC" firstStartedPulling="2026-02-02 21:38:17.270819959 +0000 UTC m=+1117.565845008" lastFinishedPulling="2026-02-02 21:38:19.195865576 +0000 UTC m=+1119.490890585" observedRunningTime="2026-02-02 
21:38:19.773005178 +0000 UTC m=+1120.068030197" watchObservedRunningTime="2026-02-02 21:38:19.810500666 +0000 UTC m=+1120.105525685" Feb 02 21:38:19 crc kubenswrapper[4789]: I0202 21:38:19.828270 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=5.7673624629999996 podStartE2EDuration="17.828253567s" podCreationTimestamp="2026-02-02 21:38:02 +0000 UTC" firstStartedPulling="2026-02-02 21:38:07.146862488 +0000 UTC m=+1107.441887507" lastFinishedPulling="2026-02-02 21:38:19.207753592 +0000 UTC m=+1119.502778611" observedRunningTime="2026-02-02 21:38:19.807174612 +0000 UTC m=+1120.102199621" watchObservedRunningTime="2026-02-02 21:38:19.828253567 +0000 UTC m=+1120.123278586" Feb 02 21:38:19 crc kubenswrapper[4789]: I0202 21:38:19.838756 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-l6jmk" podStartSLOduration=11.838738753 podStartE2EDuration="11.838738753s" podCreationTimestamp="2026-02-02 21:38:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:38:19.822197446 +0000 UTC m=+1120.117222455" watchObservedRunningTime="2026-02-02 21:38:19.838738753 +0000 UTC m=+1120.133763762" Feb 02 21:38:20 crc kubenswrapper[4789]: I0202 21:38:20.245232 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 02 21:38:20 crc kubenswrapper[4789]: I0202 21:38:20.266525 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5lz8j"] Feb 02 21:38:20 crc kubenswrapper[4789]: I0202 21:38:20.267017 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-5lz8j" podUID="1ffc8320-6bb1-4763-a98e-5c86314a4ec4" containerName="dnsmasq-dns" containerID="cri-o://32439174093a0775ff5b8156a0fdfc41e749a642fd124582b2a567d78a33e5d9" gracePeriod=10 Feb 02 21:38:20 crc kubenswrapper[4789]: I0202 21:38:20.298506 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-2h2vh"] Feb 02 21:38:20 crc kubenswrapper[4789]: E0202 21:38:20.298827 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db" containerName="init" Feb 02 21:38:20 crc kubenswrapper[4789]: I0202 21:38:20.298842 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db" containerName="init" Feb 02 21:38:20 crc kubenswrapper[4789]: I0202 21:38:20.298991 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db" containerName="init" Feb 02 21:38:20 crc kubenswrapper[4789]: I0202 21:38:20.299734 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-2h2vh" Feb 02 21:38:20 crc kubenswrapper[4789]: I0202 21:38:20.302001 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 02 21:38:20 crc kubenswrapper[4789]: I0202 21:38:20.313099 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 02 21:38:20 crc kubenswrapper[4789]: I0202 21:38:20.314119 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-2h2vh"] Feb 02 21:38:20 crc kubenswrapper[4789]: I0202 21:38:20.398467 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0b7a7068-81a8-45bf-be5f-4a25a5a102e2-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-2h2vh\" (UID: \"0b7a7068-81a8-45bf-be5f-4a25a5a102e2\") " pod="openstack/dnsmasq-dns-86db49b7ff-2h2vh" Feb 02 21:38:20 crc kubenswrapper[4789]: I0202 21:38:20.398507 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b7a7068-81a8-45bf-be5f-4a25a5a102e2-config\") pod \"dnsmasq-dns-86db49b7ff-2h2vh\" (UID: \"0b7a7068-81a8-45bf-be5f-4a25a5a102e2\") " pod="openstack/dnsmasq-dns-86db49b7ff-2h2vh" Feb 02 21:38:20 crc kubenswrapper[4789]: I0202 21:38:20.398760 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0b7a7068-81a8-45bf-be5f-4a25a5a102e2-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-2h2vh\" (UID: \"0b7a7068-81a8-45bf-be5f-4a25a5a102e2\") " pod="openstack/dnsmasq-dns-86db49b7ff-2h2vh" Feb 02 21:38:20 crc kubenswrapper[4789]: I0202 21:38:20.398875 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b7a7068-81a8-45bf-be5f-4a25a5a102e2-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-2h2vh\" (UID: \"0b7a7068-81a8-45bf-be5f-4a25a5a102e2\") " pod="openstack/dnsmasq-dns-86db49b7ff-2h2vh" Feb 02 21:38:20 crc kubenswrapper[4789]: I0202 21:38:20.398949 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cbwp\" (UniqueName: \"kubernetes.io/projected/0b7a7068-81a8-45bf-be5f-4a25a5a102e2-kube-api-access-5cbwp\") pod \"dnsmasq-dns-86db49b7ff-2h2vh\" (UID: \"0b7a7068-81a8-45bf-be5f-4a25a5a102e2\") " pod="openstack/dnsmasq-dns-86db49b7ff-2h2vh" Feb 02 21:38:20 crc kubenswrapper[4789]: I0202 21:38:20.428603 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db" path="/var/lib/kubelet/pods/dc3193e4-49b3-4b91-a64f-cdd0b0c2b5db/volumes" Feb 02 21:38:20 crc kubenswrapper[4789]: I0202 21:38:20.500315 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0b7a7068-81a8-45bf-be5f-4a25a5a102e2-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-2h2vh\" (UID: \"0b7a7068-81a8-45bf-be5f-4a25a5a102e2\") " pod="openstack/dnsmasq-dns-86db49b7ff-2h2vh" Feb 02 21:38:20 crc kubenswrapper[4789]: I0202 21:38:20.500376 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b7a7068-81a8-45bf-be5f-4a25a5a102e2-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-2h2vh\" (UID: 
\"0b7a7068-81a8-45bf-be5f-4a25a5a102e2\") " pod="openstack/dnsmasq-dns-86db49b7ff-2h2vh" Feb 02 21:38:20 crc kubenswrapper[4789]: I0202 21:38:20.500406 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cbwp\" (UniqueName: \"kubernetes.io/projected/0b7a7068-81a8-45bf-be5f-4a25a5a102e2-kube-api-access-5cbwp\") pod \"dnsmasq-dns-86db49b7ff-2h2vh\" (UID: \"0b7a7068-81a8-45bf-be5f-4a25a5a102e2\") " pod="openstack/dnsmasq-dns-86db49b7ff-2h2vh" Feb 02 21:38:20 crc kubenswrapper[4789]: I0202 21:38:20.500452 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0b7a7068-81a8-45bf-be5f-4a25a5a102e2-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-2h2vh\" (UID: \"0b7a7068-81a8-45bf-be5f-4a25a5a102e2\") " pod="openstack/dnsmasq-dns-86db49b7ff-2h2vh" Feb 02 21:38:20 crc kubenswrapper[4789]: I0202 21:38:20.500468 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b7a7068-81a8-45bf-be5f-4a25a5a102e2-config\") pod \"dnsmasq-dns-86db49b7ff-2h2vh\" (UID: \"0b7a7068-81a8-45bf-be5f-4a25a5a102e2\") " pod="openstack/dnsmasq-dns-86db49b7ff-2h2vh" Feb 02 21:38:20 crc kubenswrapper[4789]: I0202 21:38:20.501296 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0b7a7068-81a8-45bf-be5f-4a25a5a102e2-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-2h2vh\" (UID: \"0b7a7068-81a8-45bf-be5f-4a25a5a102e2\") " pod="openstack/dnsmasq-dns-86db49b7ff-2h2vh" Feb 02 21:38:20 crc kubenswrapper[4789]: I0202 21:38:20.501396 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b7a7068-81a8-45bf-be5f-4a25a5a102e2-config\") pod \"dnsmasq-dns-86db49b7ff-2h2vh\" (UID: \"0b7a7068-81a8-45bf-be5f-4a25a5a102e2\") " pod="openstack/dnsmasq-dns-86db49b7ff-2h2vh" Feb 02 21:38:20 crc kubenswrapper[4789]: I0202 21:38:20.501495 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b7a7068-81a8-45bf-be5f-4a25a5a102e2-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-2h2vh\" (UID: \"0b7a7068-81a8-45bf-be5f-4a25a5a102e2\") " pod="openstack/dnsmasq-dns-86db49b7ff-2h2vh" Feb 02 21:38:20 crc kubenswrapper[4789]: I0202 21:38:20.501644 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0b7a7068-81a8-45bf-be5f-4a25a5a102e2-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-2h2vh\" (UID: \"0b7a7068-81a8-45bf-be5f-4a25a5a102e2\") " pod="openstack/dnsmasq-dns-86db49b7ff-2h2vh" Feb 02 21:38:20 crc kubenswrapper[4789]: I0202 21:38:20.525212 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cbwp\" (UniqueName: \"kubernetes.io/projected/0b7a7068-81a8-45bf-be5f-4a25a5a102e2-kube-api-access-5cbwp\") pod \"dnsmasq-dns-86db49b7ff-2h2vh\" (UID: \"0b7a7068-81a8-45bf-be5f-4a25a5a102e2\") " pod="openstack/dnsmasq-dns-86db49b7ff-2h2vh" Feb 02 21:38:20 crc kubenswrapper[4789]: I0202 21:38:20.615998 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-2h2vh" Feb 02 21:38:20 crc kubenswrapper[4789]: I0202 21:38:20.740191 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 02 21:38:21 crc kubenswrapper[4789]: I0202 21:38:21.040649 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-2h2vh"] Feb 02 21:38:21 crc kubenswrapper[4789]: W0202 21:38:21.045858 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b7a7068_81a8_45bf_be5f_4a25a5a102e2.slice/crio-4d5795a4bac6aa32a72aed831d5abe669dd5f190117b4ffb0aaee2afe65be733 WatchSource:0}: Error finding container 4d5795a4bac6aa32a72aed831d5abe669dd5f190117b4ffb0aaee2afe65be733: Status 404 returned error can't find the container with id 4d5795a4bac6aa32a72aed831d5abe669dd5f190117b4ffb0aaee2afe65be733 Feb 02 21:38:21 crc kubenswrapper[4789]: I0202 21:38:21.751977 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-2h2vh" event={"ID":"0b7a7068-81a8-45bf-be5f-4a25a5a102e2","Type":"ContainerStarted","Data":"4d5795a4bac6aa32a72aed831d5abe669dd5f190117b4ffb0aaee2afe65be733"} Feb 02 21:38:22 crc kubenswrapper[4789]: I0202 21:38:22.026357 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 02 21:38:22 crc kubenswrapper[4789]: I0202 21:38:22.087018 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 02 21:38:22 crc kubenswrapper[4789]: I0202 21:38:22.761328 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 02 21:38:22 crc kubenswrapper[4789]: I0202 21:38:22.808497 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 02 21:38:23 crc kubenswrapper[4789]: I0202 21:38:23.289285 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 02 21:38:23 crc kubenswrapper[4789]: I0202 21:38:23.488336 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 02 21:38:23 crc kubenswrapper[4789]: I0202 21:38:23.489869 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 02 21:38:23 crc kubenswrapper[4789]: I0202 21:38:23.491554 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 02 21:38:23 crc kubenswrapper[4789]: I0202 21:38:23.492005 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 02 21:38:23 crc kubenswrapper[4789]: I0202 21:38:23.492133 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 02 21:38:23 crc kubenswrapper[4789]: I0202 21:38:23.498931 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-222kr" Feb 02 21:38:23 crc kubenswrapper[4789]: I0202 21:38:23.501191 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 02 21:38:23 crc kubenswrapper[4789]: I0202 21:38:23.659351 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab56a6da-6187-4fa6-bd4e-93046de2d432-config\") pod \"ovn-northd-0\" (UID: \"ab56a6da-6187-4fa6-bd4e-93046de2d432\") " pod="openstack/ovn-northd-0" Feb 02 21:38:23 crc kubenswrapper[4789]: I0202 21:38:23.659622 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab56a6da-6187-4fa6-bd4e-93046de2d432-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"ab56a6da-6187-4fa6-bd4e-93046de2d432\") " pod="openstack/ovn-northd-0" Feb 02 21:38:23 crc kubenswrapper[4789]: I0202 21:38:23.659732 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab56a6da-6187-4fa6-bd4e-93046de2d432-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ab56a6da-6187-4fa6-bd4e-93046de2d432\") " pod="openstack/ovn-northd-0" Feb 02 21:38:23 crc kubenswrapper[4789]: I0202 21:38:23.659968 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab56a6da-6187-4fa6-bd4e-93046de2d432-scripts\") pod \"ovn-northd-0\" (UID: \"ab56a6da-6187-4fa6-bd4e-93046de2d432\") " pod="openstack/ovn-northd-0" Feb 02 21:38:23 crc kubenswrapper[4789]: I0202 21:38:23.660073 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs2r5\" (UniqueName: \"kubernetes.io/projected/ab56a6da-6187-4fa6-bd4e-93046de2d432-kube-api-access-fs2r5\") pod \"ovn-northd-0\" (UID: \"ab56a6da-6187-4fa6-bd4e-93046de2d432\") " pod="openstack/ovn-northd-0" Feb 02 21:38:23 crc kubenswrapper[4789]: I0202 21:38:23.660142 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab56a6da-6187-4fa6-bd4e-93046de2d432-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"ab56a6da-6187-4fa6-bd4e-93046de2d432\") " pod="openstack/ovn-northd-0" Feb 02 21:38:23 crc kubenswrapper[4789]: I0202 21:38:23.660208 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ab56a6da-6187-4fa6-bd4e-93046de2d432-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ab56a6da-6187-4fa6-bd4e-93046de2d432\") " pod="openstack/ovn-northd-0" Feb 02 21:38:23 crc kubenswrapper[4789]: 
I0202 21:38:23.741419 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 02 21:38:23 crc kubenswrapper[4789]: I0202 21:38:23.766095 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab56a6da-6187-4fa6-bd4e-93046de2d432-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ab56a6da-6187-4fa6-bd4e-93046de2d432\") " pod="openstack/ovn-northd-0" Feb 02 21:38:23 crc kubenswrapper[4789]: I0202 21:38:23.767297 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab56a6da-6187-4fa6-bd4e-93046de2d432-scripts\") pod \"ovn-northd-0\" (UID: \"ab56a6da-6187-4fa6-bd4e-93046de2d432\") " pod="openstack/ovn-northd-0" Feb 02 21:38:23 crc kubenswrapper[4789]: I0202 21:38:23.767614 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs2r5\" (UniqueName: \"kubernetes.io/projected/ab56a6da-6187-4fa6-bd4e-93046de2d432-kube-api-access-fs2r5\") pod \"ovn-northd-0\" (UID: \"ab56a6da-6187-4fa6-bd4e-93046de2d432\") " pod="openstack/ovn-northd-0" Feb 02 21:38:23 crc kubenswrapper[4789]: I0202 21:38:23.767638 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab56a6da-6187-4fa6-bd4e-93046de2d432-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"ab56a6da-6187-4fa6-bd4e-93046de2d432\") " pod="openstack/ovn-northd-0" Feb 02 21:38:23 crc kubenswrapper[4789]: I0202 21:38:23.767657 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ab56a6da-6187-4fa6-bd4e-93046de2d432-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ab56a6da-6187-4fa6-bd4e-93046de2d432\") " pod="openstack/ovn-northd-0" Feb 02 21:38:23 crc kubenswrapper[4789]: I0202 21:38:23.767749 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab56a6da-6187-4fa6-bd4e-93046de2d432-config\") pod \"ovn-northd-0\" (UID: \"ab56a6da-6187-4fa6-bd4e-93046de2d432\") " pod="openstack/ovn-northd-0" Feb 02 21:38:23 crc kubenswrapper[4789]: I0202 21:38:23.767788 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab56a6da-6187-4fa6-bd4e-93046de2d432-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"ab56a6da-6187-4fa6-bd4e-93046de2d432\") " pod="openstack/ovn-northd-0" Feb 02 21:38:23 crc kubenswrapper[4789]: I0202 21:38:23.768695 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ab56a6da-6187-4fa6-bd4e-93046de2d432-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ab56a6da-6187-4fa6-bd4e-93046de2d432\") " pod="openstack/ovn-northd-0" Feb 02 21:38:23 crc kubenswrapper[4789]: I0202 21:38:23.769081 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab56a6da-6187-4fa6-bd4e-93046de2d432-config\") pod \"ovn-northd-0\" (UID: \"ab56a6da-6187-4fa6-bd4e-93046de2d432\") " pod="openstack/ovn-northd-0" Feb 02 21:38:23 crc kubenswrapper[4789]: I0202 21:38:23.769346 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab56a6da-6187-4fa6-bd4e-93046de2d432-scripts\") pod 
\"ovn-northd-0\" (UID: \"ab56a6da-6187-4fa6-bd4e-93046de2d432\") " pod="openstack/ovn-northd-0" Feb 02 21:38:23 crc kubenswrapper[4789]: I0202 21:38:23.783629 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab56a6da-6187-4fa6-bd4e-93046de2d432-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"ab56a6da-6187-4fa6-bd4e-93046de2d432\") " pod="openstack/ovn-northd-0" Feb 02 21:38:23 crc kubenswrapper[4789]: I0202 21:38:23.788361 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tjn59" event={"ID":"0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1","Type":"ContainerStarted","Data":"6c5300b0e812632a91b74ba2ab909199288368ea0a90d2cfd2fd7fbf3551f6b6"} Feb 02 21:38:23 crc kubenswrapper[4789]: I0202 21:38:23.791280 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab56a6da-6187-4fa6-bd4e-93046de2d432-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"ab56a6da-6187-4fa6-bd4e-93046de2d432\") " pod="openstack/ovn-northd-0" Feb 02 21:38:23 crc kubenswrapper[4789]: I0202 21:38:23.798494 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab56a6da-6187-4fa6-bd4e-93046de2d432-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ab56a6da-6187-4fa6-bd4e-93046de2d432\") " pod="openstack/ovn-northd-0" Feb 02 21:38:23 crc kubenswrapper[4789]: I0202 21:38:23.810232 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs2r5\" (UniqueName: \"kubernetes.io/projected/ab56a6da-6187-4fa6-bd4e-93046de2d432-kube-api-access-fs2r5\") pod \"ovn-northd-0\" (UID: \"ab56a6da-6187-4fa6-bd4e-93046de2d432\") " pod="openstack/ovn-northd-0" Feb 02 21:38:23 crc kubenswrapper[4789]: I0202 21:38:23.844882 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 02 21:38:24 crc kubenswrapper[4789]: I0202 21:38:24.415944 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 02 21:38:24 crc kubenswrapper[4789]: I0202 21:38:24.523732 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-5lz8j" podUID="1ffc8320-6bb1-4763-a98e-5c86314a4ec4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.100:5353: connect: connection refused" Feb 02 21:38:24 crc kubenswrapper[4789]: I0202 21:38:24.797092 4789 generic.go:334] "Generic (PLEG): container finished" podID="a77ac0de-f396-45e6-a92c-07fbddc4ec60" containerID="37070194a254abfa3aad802e5fbe6112834841dccb39ba3bd770d2c932dfbb36" exitCode=0 Feb 02 21:38:24 crc kubenswrapper[4789]: I0202 21:38:24.797213 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a77ac0de-f396-45e6-a92c-07fbddc4ec60","Type":"ContainerDied","Data":"37070194a254abfa3aad802e5fbe6112834841dccb39ba3bd770d2c932dfbb36"} Feb 02 21:38:24 crc kubenswrapper[4789]: I0202 21:38:24.800488 4789 generic.go:334] "Generic (PLEG): container finished" podID="1ffc8320-6bb1-4763-a98e-5c86314a4ec4" containerID="32439174093a0775ff5b8156a0fdfc41e749a642fd124582b2a567d78a33e5d9" exitCode=0 Feb 02 21:38:24 crc kubenswrapper[4789]: I0202 21:38:24.800574 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5lz8j" event={"ID":"1ffc8320-6bb1-4763-a98e-5c86314a4ec4","Type":"ContainerDied","Data":"32439174093a0775ff5b8156a0fdfc41e749a642fd124582b2a567d78a33e5d9"} Feb 02 21:38:24 crc kubenswrapper[4789]: I0202 21:38:24.801730 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ab56a6da-6187-4fa6-bd4e-93046de2d432","Type":"ContainerStarted","Data":"2baa97b2e3a2b45df71dd00ff3af1026ded3306e49237f3b0ac6550c673a9545"} Feb 02 21:38:24 crc kubenswrapper[4789]: I0202 21:38:24.802947 4789 generic.go:334] "Generic (PLEG): container finished" podID="96f4773a-9fa9-41c6-ab4b-54107e66a498" containerID="a08b45f3dfbed710991b90377397df02266bd543ec0be74a9a29feca9df69385" exitCode=0 Feb 02 21:38:24 crc kubenswrapper[4789]: I0202 21:38:24.803036 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"96f4773a-9fa9-41c6-ab4b-54107e66a498","Type":"ContainerDied","Data":"a08b45f3dfbed710991b90377397df02266bd543ec0be74a9a29feca9df69385"} Feb 02 21:38:26 crc kubenswrapper[4789]: I0202 21:38:26.568878 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-5lz8j" Feb 02 21:38:26 crc kubenswrapper[4789]: I0202 21:38:26.635655 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4ngc\" (UniqueName: \"kubernetes.io/projected/1ffc8320-6bb1-4763-a98e-5c86314a4ec4-kube-api-access-r4ngc\") pod \"1ffc8320-6bb1-4763-a98e-5c86314a4ec4\" (UID: \"1ffc8320-6bb1-4763-a98e-5c86314a4ec4\") " Feb 02 21:38:26 crc kubenswrapper[4789]: I0202 21:38:26.635802 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ffc8320-6bb1-4763-a98e-5c86314a4ec4-config\") pod \"1ffc8320-6bb1-4763-a98e-5c86314a4ec4\" (UID: \"1ffc8320-6bb1-4763-a98e-5c86314a4ec4\") " Feb 02 21:38:26 crc kubenswrapper[4789]: I0202 21:38:26.635923 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ffc8320-6bb1-4763-a98e-5c86314a4ec4-dns-svc\") pod \"1ffc8320-6bb1-4763-a98e-5c86314a4ec4\" (UID: \"1ffc8320-6bb1-4763-a98e-5c86314a4ec4\") " Feb 02 21:38:26 crc kubenswrapper[4789]: I0202 21:38:26.641716 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ffc8320-6bb1-4763-a98e-5c86314a4ec4-kube-api-access-r4ngc" (OuterVolumeSpecName: "kube-api-access-r4ngc") pod "1ffc8320-6bb1-4763-a98e-5c86314a4ec4" (UID: "1ffc8320-6bb1-4763-a98e-5c86314a4ec4"). InnerVolumeSpecName "kube-api-access-r4ngc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:38:26 crc kubenswrapper[4789]: I0202 21:38:26.673224 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ffc8320-6bb1-4763-a98e-5c86314a4ec4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1ffc8320-6bb1-4763-a98e-5c86314a4ec4" (UID: "1ffc8320-6bb1-4763-a98e-5c86314a4ec4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:38:26 crc kubenswrapper[4789]: I0202 21:38:26.706482 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ffc8320-6bb1-4763-a98e-5c86314a4ec4-config" (OuterVolumeSpecName: "config") pod "1ffc8320-6bb1-4763-a98e-5c86314a4ec4" (UID: "1ffc8320-6bb1-4763-a98e-5c86314a4ec4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:38:26 crc kubenswrapper[4789]: I0202 21:38:26.738210 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ffc8320-6bb1-4763-a98e-5c86314a4ec4-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:26 crc kubenswrapper[4789]: I0202 21:38:26.738239 4789 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ffc8320-6bb1-4763-a98e-5c86314a4ec4-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:26 crc kubenswrapper[4789]: I0202 21:38:26.738249 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4ngc\" (UniqueName: \"kubernetes.io/projected/1ffc8320-6bb1-4763-a98e-5c86314a4ec4-kube-api-access-r4ngc\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:26 crc kubenswrapper[4789]: I0202 21:38:26.818196 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5lz8j" event={"ID":"1ffc8320-6bb1-4763-a98e-5c86314a4ec4","Type":"ContainerDied","Data":"566d4f69f0cc5ce3f967814aed8c359f88bb748b25f66513bdbf8dd79ef2cf5a"} Feb 02 21:38:26 crc kubenswrapper[4789]: I0202 21:38:26.818219 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-5lz8j" Feb 02 21:38:26 crc kubenswrapper[4789]: I0202 21:38:26.818251 4789 scope.go:117] "RemoveContainer" containerID="32439174093a0775ff5b8156a0fdfc41e749a642fd124582b2a567d78a33e5d9" Feb 02 21:38:26 crc kubenswrapper[4789]: I0202 21:38:26.820126 4789 generic.go:334] "Generic (PLEG): container finished" podID="0b7a7068-81a8-45bf-be5f-4a25a5a102e2" containerID="019b82a2b12782ef8d4faf29b4c9e3630d94df82a7bd6e6f08efd28c8499221f" exitCode=0 Feb 02 21:38:26 crc kubenswrapper[4789]: I0202 21:38:26.820205 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-2h2vh" event={"ID":"0b7a7068-81a8-45bf-be5f-4a25a5a102e2","Type":"ContainerDied","Data":"019b82a2b12782ef8d4faf29b4c9e3630d94df82a7bd6e6f08efd28c8499221f"} Feb 02 21:38:26 crc kubenswrapper[4789]: I0202 21:38:26.822015 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"96f4773a-9fa9-41c6-ab4b-54107e66a498","Type":"ContainerStarted","Data":"82ad90219c4a326128d9fb471414b4e0b74ee26ef1c8f08b2e8b89d720565f03"} Feb 02 21:38:26 crc kubenswrapper[4789]: I0202 21:38:26.824349 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a77ac0de-f396-45e6-a92c-07fbddc4ec60","Type":"ContainerStarted","Data":"56c1fc152ae9c83eb013d9170e2ee84fae3556ed6cca1265e4e91d1f2bb54861"} Feb 02 21:38:26 crc kubenswrapper[4789]: I0202 21:38:26.824468 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-tjn59" Feb 02 21:38:26 crc kubenswrapper[4789]: I0202 21:38:26.825206 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-tjn59" Feb 02 21:38:26 crc kubenswrapper[4789]: I0202 21:38:26.857170 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5lz8j"] Feb 02 21:38:26 crc kubenswrapper[4789]: I0202 21:38:26.862345 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5lz8j"] Feb 02 21:38:26 crc kubenswrapper[4789]: I0202 21:38:26.873827 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovn-controller-ovs-tjn59" podStartSLOduration=18.651948748 podStartE2EDuration="24.873809864s" podCreationTimestamp="2026-02-02 21:38:02 +0000 UTC" firstStartedPulling="2026-02-02 21:38:09.930976726 +0000 UTC m=+1110.226001775" lastFinishedPulling="2026-02-02 21:38:16.152837832 +0000 UTC m=+1116.447862891" observedRunningTime="2026-02-02 21:38:26.868960227 +0000 UTC m=+1127.163985246" watchObservedRunningTime="2026-02-02 21:38:26.873809864 +0000 UTC m=+1127.168834883" Feb 02 21:38:26 crc kubenswrapper[4789]: I0202 21:38:26.899441 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=22.86747671 podStartE2EDuration="31.899421817s" podCreationTimestamp="2026-02-02 21:37:55 +0000 UTC" firstStartedPulling="2026-02-02 21:38:07.030063241 +0000 UTC m=+1107.325088260" lastFinishedPulling="2026-02-02 21:38:16.062008338 +0000 UTC m=+1116.357033367" observedRunningTime="2026-02-02 21:38:26.890337311 +0000 UTC m=+1127.185362330" watchObservedRunningTime="2026-02-02 21:38:26.899421817 +0000 UTC m=+1127.194446856" Feb 02 21:38:26 crc kubenswrapper[4789]: I0202 21:38:26.916119 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=20.825052833 podStartE2EDuration="29.916095198s" podCreationTimestamp="2026-02-02 21:37:57 +0000 UTC" firstStartedPulling="2026-02-02 21:38:07.052863185 +0000 UTC m=+1107.347888204" lastFinishedPulling="2026-02-02 21:38:16.14390555 +0000 UTC m=+1116.438930569" observedRunningTime="2026-02-02 21:38:26.909953554 +0000 UTC m=+1127.204978583" watchObservedRunningTime="2026-02-02 21:38:26.916095198 +0000 UTC m=+1127.211120237" Feb 02 21:38:26 crc kubenswrapper[4789]: I0202 21:38:26.947615 4789 scope.go:117] "RemoveContainer" containerID="2751a6c6a42cae5b3e5b9efa0d416bad094639bf82efb03b8fd2e3d395ffb1d3" Feb 02 21:38:27 crc kubenswrapper[4789]: I0202 21:38:27.047250 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 02 21:38:27 crc kubenswrapper[4789]: I0202 21:38:27.047921 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 02 21:38:27 crc kubenswrapper[4789]: I0202 21:38:27.835691 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-2h2vh" event={"ID":"0b7a7068-81a8-45bf-be5f-4a25a5a102e2","Type":"ContainerStarted","Data":"ccc252969b587208204ba72dccf573a70c7f0476b34b58dead3d78de1378f227"} Feb 02 21:38:27 crc kubenswrapper[4789]: I0202 21:38:27.837132 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-2h2vh" Feb 02 21:38:27 crc kubenswrapper[4789]: I0202 21:38:27.840033 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ab56a6da-6187-4fa6-bd4e-93046de2d432","Type":"ContainerStarted","Data":"f37965943ec7625f3192bcaac3c01b17a18ceddae04469351da0a2114b7fe47f"} Feb 02 21:38:27 crc kubenswrapper[4789]: I0202 21:38:27.840063 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ab56a6da-6187-4fa6-bd4e-93046de2d432","Type":"ContainerStarted","Data":"9404edbdc9c7a81d7c48cab8b8c60b1fc5de57f009d5e80c304dd34c2eae41c2"} Feb 02 21:38:27 crc kubenswrapper[4789]: I0202 21:38:27.840264 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 02 21:38:27 crc kubenswrapper[4789]: I0202 
21:38:27.859859 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-2h2vh" podStartSLOduration=7.859839857 podStartE2EDuration="7.859839857s" podCreationTimestamp="2026-02-02 21:38:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:38:27.857750338 +0000 UTC m=+1128.152775377" watchObservedRunningTime="2026-02-02 21:38:27.859839857 +0000 UTC m=+1128.154864866" Feb 02 21:38:27 crc kubenswrapper[4789]: I0202 21:38:27.884161 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.306977097 podStartE2EDuration="4.884141013s" podCreationTimestamp="2026-02-02 21:38:23 +0000 UTC" firstStartedPulling="2026-02-02 21:38:24.426299928 +0000 UTC m=+1124.721324967" lastFinishedPulling="2026-02-02 21:38:27.003463864 +0000 UTC m=+1127.298488883" observedRunningTime="2026-02-02 21:38:27.879634846 +0000 UTC m=+1128.174659875" watchObservedRunningTime="2026-02-02 21:38:27.884141013 +0000 UTC m=+1128.179166032" Feb 02 21:38:28 crc kubenswrapper[4789]: I0202 21:38:28.438468 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ffc8320-6bb1-4763-a98e-5c86314a4ec4" path="/var/lib/kubelet/pods/1ffc8320-6bb1-4763-a98e-5c86314a4ec4/volumes" Feb 02 21:38:28 crc kubenswrapper[4789]: I0202 21:38:28.455177 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 02 21:38:28 crc kubenswrapper[4789]: I0202 21:38:28.455238 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 02 21:38:28 crc kubenswrapper[4789]: I0202 21:38:28.812753 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-l6jmk" Feb 02 21:38:29 crc kubenswrapper[4789]: E0202 21:38:29.152258 4789 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.189:60196->38.102.83.189:36729: write tcp 38.102.83.189:60196->38.102.83.189:36729: write: connection reset by peer Feb 02 21:38:30 crc kubenswrapper[4789]: I0202 21:38:30.305722 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 02 21:38:30 crc kubenswrapper[4789]: I0202 21:38:30.323512 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-2h2vh"] Feb 02 21:38:30 crc kubenswrapper[4789]: I0202 21:38:30.370729 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-d9x7g"] Feb 02 21:38:30 crc kubenswrapper[4789]: E0202 21:38:30.371253 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ffc8320-6bb1-4763-a98e-5c86314a4ec4" containerName="init" Feb 02 21:38:30 crc kubenswrapper[4789]: I0202 21:38:30.371272 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ffc8320-6bb1-4763-a98e-5c86314a4ec4" containerName="init" Feb 02 21:38:30 crc kubenswrapper[4789]: E0202 21:38:30.371309 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ffc8320-6bb1-4763-a98e-5c86314a4ec4" containerName="dnsmasq-dns" Feb 02 21:38:30 crc kubenswrapper[4789]: I0202 21:38:30.371315 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ffc8320-6bb1-4763-a98e-5c86314a4ec4" containerName="dnsmasq-dns" Feb 02 21:38:30 crc kubenswrapper[4789]: I0202 21:38:30.371478 4789 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="1ffc8320-6bb1-4763-a98e-5c86314a4ec4" containerName="dnsmasq-dns" Feb 02 21:38:30 crc kubenswrapper[4789]: I0202 21:38:30.375810 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-d9x7g" Feb 02 21:38:30 crc kubenswrapper[4789]: I0202 21:38:30.391108 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-d9x7g"] Feb 02 21:38:30 crc kubenswrapper[4789]: I0202 21:38:30.416826 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjhcj\" (UniqueName: \"kubernetes.io/projected/397ad5b0-86b4-40b7-b8c3-bceb1d8aa470-kube-api-access-cjhcj\") pod \"dnsmasq-dns-698758b865-d9x7g\" (UID: \"397ad5b0-86b4-40b7-b8c3-bceb1d8aa470\") " pod="openstack/dnsmasq-dns-698758b865-d9x7g" Feb 02 21:38:30 crc kubenswrapper[4789]: I0202 21:38:30.416891 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/397ad5b0-86b4-40b7-b8c3-bceb1d8aa470-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-d9x7g\" (UID: \"397ad5b0-86b4-40b7-b8c3-bceb1d8aa470\") " pod="openstack/dnsmasq-dns-698758b865-d9x7g" Feb 02 21:38:30 crc kubenswrapper[4789]: I0202 21:38:30.416911 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/397ad5b0-86b4-40b7-b8c3-bceb1d8aa470-config\") pod \"dnsmasq-dns-698758b865-d9x7g\" (UID: \"397ad5b0-86b4-40b7-b8c3-bceb1d8aa470\") " pod="openstack/dnsmasq-dns-698758b865-d9x7g" Feb 02 21:38:30 crc kubenswrapper[4789]: I0202 21:38:30.417001 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/397ad5b0-86b4-40b7-b8c3-bceb1d8aa470-dns-svc\") pod \"dnsmasq-dns-698758b865-d9x7g\" (UID: \"397ad5b0-86b4-40b7-b8c3-bceb1d8aa470\") " pod="openstack/dnsmasq-dns-698758b865-d9x7g" Feb 02 21:38:30 crc kubenswrapper[4789]: I0202 21:38:30.417037 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/397ad5b0-86b4-40b7-b8c3-bceb1d8aa470-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-d9x7g\" (UID: \"397ad5b0-86b4-40b7-b8c3-bceb1d8aa470\") " pod="openstack/dnsmasq-dns-698758b865-d9x7g" Feb 02 21:38:30 crc kubenswrapper[4789]: I0202 21:38:30.518072 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/397ad5b0-86b4-40b7-b8c3-bceb1d8aa470-dns-svc\") pod \"dnsmasq-dns-698758b865-d9x7g\" (UID: \"397ad5b0-86b4-40b7-b8c3-bceb1d8aa470\") " pod="openstack/dnsmasq-dns-698758b865-d9x7g" Feb 02 21:38:30 crc kubenswrapper[4789]: I0202 21:38:30.518145 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/397ad5b0-86b4-40b7-b8c3-bceb1d8aa470-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-d9x7g\" (UID: \"397ad5b0-86b4-40b7-b8c3-bceb1d8aa470\") " pod="openstack/dnsmasq-dns-698758b865-d9x7g" Feb 02 21:38:30 crc kubenswrapper[4789]: I0202 21:38:30.518183 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjhcj\" (UniqueName: \"kubernetes.io/projected/397ad5b0-86b4-40b7-b8c3-bceb1d8aa470-kube-api-access-cjhcj\") pod \"dnsmasq-dns-698758b865-d9x7g\" 
(UID: \"397ad5b0-86b4-40b7-b8c3-bceb1d8aa470\") " pod="openstack/dnsmasq-dns-698758b865-d9x7g" Feb 02 21:38:30 crc kubenswrapper[4789]: I0202 21:38:30.518233 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/397ad5b0-86b4-40b7-b8c3-bceb1d8aa470-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-d9x7g\" (UID: \"397ad5b0-86b4-40b7-b8c3-bceb1d8aa470\") " pod="openstack/dnsmasq-dns-698758b865-d9x7g" Feb 02 21:38:30 crc kubenswrapper[4789]: I0202 21:38:30.518258 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/397ad5b0-86b4-40b7-b8c3-bceb1d8aa470-config\") pod \"dnsmasq-dns-698758b865-d9x7g\" (UID: \"397ad5b0-86b4-40b7-b8c3-bceb1d8aa470\") " pod="openstack/dnsmasq-dns-698758b865-d9x7g" Feb 02 21:38:30 crc kubenswrapper[4789]: I0202 21:38:30.519015 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/397ad5b0-86b4-40b7-b8c3-bceb1d8aa470-dns-svc\") pod \"dnsmasq-dns-698758b865-d9x7g\" (UID: \"397ad5b0-86b4-40b7-b8c3-bceb1d8aa470\") " pod="openstack/dnsmasq-dns-698758b865-d9x7g" Feb 02 21:38:30 crc kubenswrapper[4789]: I0202 21:38:30.519424 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/397ad5b0-86b4-40b7-b8c3-bceb1d8aa470-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-d9x7g\" (UID: \"397ad5b0-86b4-40b7-b8c3-bceb1d8aa470\") " pod="openstack/dnsmasq-dns-698758b865-d9x7g" Feb 02 21:38:30 crc kubenswrapper[4789]: I0202 21:38:30.519706 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/397ad5b0-86b4-40b7-b8c3-bceb1d8aa470-config\") pod \"dnsmasq-dns-698758b865-d9x7g\" (UID: \"397ad5b0-86b4-40b7-b8c3-bceb1d8aa470\") " pod="openstack/dnsmasq-dns-698758b865-d9x7g" Feb 02 21:38:30 crc kubenswrapper[4789]: I0202 21:38:30.520041 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/397ad5b0-86b4-40b7-b8c3-bceb1d8aa470-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-d9x7g\" (UID: \"397ad5b0-86b4-40b7-b8c3-bceb1d8aa470\") " pod="openstack/dnsmasq-dns-698758b865-d9x7g" Feb 02 21:38:30 crc kubenswrapper[4789]: I0202 21:38:30.540314 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjhcj\" (UniqueName: \"kubernetes.io/projected/397ad5b0-86b4-40b7-b8c3-bceb1d8aa470-kube-api-access-cjhcj\") pod \"dnsmasq-dns-698758b865-d9x7g\" (UID: \"397ad5b0-86b4-40b7-b8c3-bceb1d8aa470\") " pod="openstack/dnsmasq-dns-698758b865-d9x7g" Feb 02 21:38:30 crc kubenswrapper[4789]: I0202 21:38:30.706608 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-d9x7g" Feb 02 21:38:30 crc kubenswrapper[4789]: I0202 21:38:30.864926 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-2h2vh" podUID="0b7a7068-81a8-45bf-be5f-4a25a5a102e2" containerName="dnsmasq-dns" containerID="cri-o://ccc252969b587208204ba72dccf573a70c7f0476b34b58dead3d78de1378f227" gracePeriod=10 Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.054163 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.147661 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.227653 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-d9x7g"] Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.270350 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-2h2vh" Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.332238 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b7a7068-81a8-45bf-be5f-4a25a5a102e2-dns-svc\") pod \"0b7a7068-81a8-45bf-be5f-4a25a5a102e2\" (UID: \"0b7a7068-81a8-45bf-be5f-4a25a5a102e2\") " Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.332289 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b7a7068-81a8-45bf-be5f-4a25a5a102e2-config\") pod \"0b7a7068-81a8-45bf-be5f-4a25a5a102e2\" (UID: \"0b7a7068-81a8-45bf-be5f-4a25a5a102e2\") " Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.332355 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0b7a7068-81a8-45bf-be5f-4a25a5a102e2-ovsdbserver-nb\") pod \"0b7a7068-81a8-45bf-be5f-4a25a5a102e2\" (UID: \"0b7a7068-81a8-45bf-be5f-4a25a5a102e2\") " Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.332475 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cbwp\" (UniqueName: \"kubernetes.io/projected/0b7a7068-81a8-45bf-be5f-4a25a5a102e2-kube-api-access-5cbwp\") pod \"0b7a7068-81a8-45bf-be5f-4a25a5a102e2\" (UID: \"0b7a7068-81a8-45bf-be5f-4a25a5a102e2\") " Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.332523 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0b7a7068-81a8-45bf-be5f-4a25a5a102e2-ovsdbserver-sb\") pod \"0b7a7068-81a8-45bf-be5f-4a25a5a102e2\" (UID: \"0b7a7068-81a8-45bf-be5f-4a25a5a102e2\") " Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.338816 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b7a7068-81a8-45bf-be5f-4a25a5a102e2-kube-api-access-5cbwp" (OuterVolumeSpecName: "kube-api-access-5cbwp") pod "0b7a7068-81a8-45bf-be5f-4a25a5a102e2" (UID: "0b7a7068-81a8-45bf-be5f-4a25a5a102e2"). InnerVolumeSpecName "kube-api-access-5cbwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.377433 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b7a7068-81a8-45bf-be5f-4a25a5a102e2-config" (OuterVolumeSpecName: "config") pod "0b7a7068-81a8-45bf-be5f-4a25a5a102e2" (UID: "0b7a7068-81a8-45bf-be5f-4a25a5a102e2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.396005 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b7a7068-81a8-45bf-be5f-4a25a5a102e2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0b7a7068-81a8-45bf-be5f-4a25a5a102e2" (UID: "0b7a7068-81a8-45bf-be5f-4a25a5a102e2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.396098 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b7a7068-81a8-45bf-be5f-4a25a5a102e2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0b7a7068-81a8-45bf-be5f-4a25a5a102e2" (UID: "0b7a7068-81a8-45bf-be5f-4a25a5a102e2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.398773 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 02 21:38:31 crc kubenswrapper[4789]: E0202 21:38:31.399203 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b7a7068-81a8-45bf-be5f-4a25a5a102e2" containerName="dnsmasq-dns" Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.399228 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b7a7068-81a8-45bf-be5f-4a25a5a102e2" containerName="dnsmasq-dns" Feb 02 21:38:31 crc kubenswrapper[4789]: E0202 21:38:31.399261 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b7a7068-81a8-45bf-be5f-4a25a5a102e2" containerName="init" Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.399271 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b7a7068-81a8-45bf-be5f-4a25a5a102e2" containerName="init" Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.399499 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b7a7068-81a8-45bf-be5f-4a25a5a102e2" containerName="dnsmasq-dns" Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.399808 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b7a7068-81a8-45bf-be5f-4a25a5a102e2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0b7a7068-81a8-45bf-be5f-4a25a5a102e2" (UID: "0b7a7068-81a8-45bf-be5f-4a25a5a102e2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.405736 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.409491 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.409867 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.410589 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.414724 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-vlhmp" Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.414968 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.434701 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/87f6bccb-d5fc-4868-aca2-734d16898805-etc-swift\") pod \"swift-storage-0\" (UID: \"87f6bccb-d5fc-4868-aca2-734d16898805\") " pod="openstack/swift-storage-0" Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.434758 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"87f6bccb-d5fc-4868-aca2-734d16898805\") " pod="openstack/swift-storage-0" Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.434886 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gm8x\" (UniqueName: \"kubernetes.io/projected/87f6bccb-d5fc-4868-aca2-734d16898805-kube-api-access-2gm8x\") pod \"swift-storage-0\" (UID: \"87f6bccb-d5fc-4868-aca2-734d16898805\") " pod="openstack/swift-storage-0" Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.434916 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/87f6bccb-d5fc-4868-aca2-734d16898805-lock\") pod \"swift-storage-0\" (UID: \"87f6bccb-d5fc-4868-aca2-734d16898805\") " pod="openstack/swift-storage-0" Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.434963 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87f6bccb-d5fc-4868-aca2-734d16898805-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"87f6bccb-d5fc-4868-aca2-734d16898805\") " pod="openstack/swift-storage-0" Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.434988 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/87f6bccb-d5fc-4868-aca2-734d16898805-cache\") pod \"swift-storage-0\" (UID: \"87f6bccb-d5fc-4868-aca2-734d16898805\") " pod="openstack/swift-storage-0" Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.435051 4789 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b7a7068-81a8-45bf-be5f-4a25a5a102e2-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.435063 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0b7a7068-81a8-45bf-be5f-4a25a5a102e2-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.435071 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0b7a7068-81a8-45bf-be5f-4a25a5a102e2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.435081 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cbwp\" (UniqueName: \"kubernetes.io/projected/0b7a7068-81a8-45bf-be5f-4a25a5a102e2-kube-api-access-5cbwp\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.435090 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0b7a7068-81a8-45bf-be5f-4a25a5a102e2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.536887 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gm8x\" (UniqueName: \"kubernetes.io/projected/87f6bccb-d5fc-4868-aca2-734d16898805-kube-api-access-2gm8x\") pod \"swift-storage-0\" (UID: \"87f6bccb-d5fc-4868-aca2-734d16898805\") " pod="openstack/swift-storage-0" Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.536974 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/87f6bccb-d5fc-4868-aca2-734d16898805-lock\") pod \"swift-storage-0\" (UID: \"87f6bccb-d5fc-4868-aca2-734d16898805\") " pod="openstack/swift-storage-0" Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.537026 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87f6bccb-d5fc-4868-aca2-734d16898805-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"87f6bccb-d5fc-4868-aca2-734d16898805\") " pod="openstack/swift-storage-0" Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.537049 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/87f6bccb-d5fc-4868-aca2-734d16898805-cache\") pod \"swift-storage-0\" (UID: \"87f6bccb-d5fc-4868-aca2-734d16898805\") " pod="openstack/swift-storage-0" Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.537104 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/87f6bccb-d5fc-4868-aca2-734d16898805-etc-swift\") pod \"swift-storage-0\" (UID: \"87f6bccb-d5fc-4868-aca2-734d16898805\") " pod="openstack/swift-storage-0" Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.537139 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"87f6bccb-d5fc-4868-aca2-734d16898805\") " pod="openstack/swift-storage-0" Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.537505 4789 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"87f6bccb-d5fc-4868-aca2-734d16898805\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/swift-storage-0" Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.537628 4789 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/87f6bccb-d5fc-4868-aca2-734d16898805-lock\") pod \"swift-storage-0\" (UID: \"87f6bccb-d5fc-4868-aca2-734d16898805\") " pod="openstack/swift-storage-0" Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.537637 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/87f6bccb-d5fc-4868-aca2-734d16898805-cache\") pod \"swift-storage-0\" (UID: \"87f6bccb-d5fc-4868-aca2-734d16898805\") " pod="openstack/swift-storage-0" Feb 02 21:38:31 crc kubenswrapper[4789]: E0202 21:38:31.537814 4789 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 02 21:38:31 crc kubenswrapper[4789]: E0202 21:38:31.537843 4789 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 02 21:38:31 crc kubenswrapper[4789]: E0202 21:38:31.537896 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87f6bccb-d5fc-4868-aca2-734d16898805-etc-swift podName:87f6bccb-d5fc-4868-aca2-734d16898805 nodeName:}" failed. No retries permitted until 2026-02-02 21:38:32.037875469 +0000 UTC m=+1132.332900508 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/87f6bccb-d5fc-4868-aca2-734d16898805-etc-swift") pod "swift-storage-0" (UID: "87f6bccb-d5fc-4868-aca2-734d16898805") : configmap "swift-ring-files" not found Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.541112 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87f6bccb-d5fc-4868-aca2-734d16898805-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"87f6bccb-d5fc-4868-aca2-734d16898805\") " pod="openstack/swift-storage-0" Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.560342 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gm8x\" (UniqueName: \"kubernetes.io/projected/87f6bccb-d5fc-4868-aca2-734d16898805-kube-api-access-2gm8x\") pod \"swift-storage-0\" (UID: \"87f6bccb-d5fc-4868-aca2-734d16898805\") " pod="openstack/swift-storage-0" Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.574366 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"87f6bccb-d5fc-4868-aca2-734d16898805\") " pod="openstack/swift-storage-0" Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.879461 4789 generic.go:334] "Generic (PLEG): container finished" podID="397ad5b0-86b4-40b7-b8c3-bceb1d8aa470" containerID="498c1f124125391c7f14850b4a6958307fb18d504f5822b34e731e12b4708eac" exitCode=0 Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.879547 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-d9x7g" event={"ID":"397ad5b0-86b4-40b7-b8c3-bceb1d8aa470","Type":"ContainerDied","Data":"498c1f124125391c7f14850b4a6958307fb18d504f5822b34e731e12b4708eac"} Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.879670 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-d9x7g" event={"ID":"397ad5b0-86b4-40b7-b8c3-bceb1d8aa470","Type":"ContainerStarted","Data":"22d107c3308406522b2875c46788b9371935f5cc3db46838cf8df44077d054ad"} Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 
21:38:31.883701 4789 generic.go:334] "Generic (PLEG): container finished" podID="0b7a7068-81a8-45bf-be5f-4a25a5a102e2" containerID="ccc252969b587208204ba72dccf573a70c7f0476b34b58dead3d78de1378f227" exitCode=0 Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.883808 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-2h2vh" event={"ID":"0b7a7068-81a8-45bf-be5f-4a25a5a102e2","Type":"ContainerDied","Data":"ccc252969b587208204ba72dccf573a70c7f0476b34b58dead3d78de1378f227"} Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.883876 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-2h2vh" event={"ID":"0b7a7068-81a8-45bf-be5f-4a25a5a102e2","Type":"ContainerDied","Data":"4d5795a4bac6aa32a72aed831d5abe669dd5f190117b4ffb0aaee2afe65be733"} Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.883907 4789 scope.go:117] "RemoveContainer" containerID="ccc252969b587208204ba72dccf573a70c7f0476b34b58dead3d78de1378f227" Feb 02 21:38:31 crc kubenswrapper[4789]: I0202 21:38:31.883826 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-2h2vh" Feb 02 21:38:32 crc kubenswrapper[4789]: I0202 21:38:32.000806 4789 scope.go:117] "RemoveContainer" containerID="019b82a2b12782ef8d4faf29b4c9e3630d94df82a7bd6e6f08efd28c8499221f" Feb 02 21:38:32 crc kubenswrapper[4789]: I0202 21:38:32.000996 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-2h2vh"] Feb 02 21:38:32 crc kubenswrapper[4789]: I0202 21:38:32.009339 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-2h2vh"] Feb 02 21:38:32 crc kubenswrapper[4789]: I0202 21:38:32.022512 4789 scope.go:117] "RemoveContainer" containerID="ccc252969b587208204ba72dccf573a70c7f0476b34b58dead3d78de1378f227" Feb 02 21:38:32 crc kubenswrapper[4789]: E0202 21:38:32.022917 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccc252969b587208204ba72dccf573a70c7f0476b34b58dead3d78de1378f227\": container with ID starting with ccc252969b587208204ba72dccf573a70c7f0476b34b58dead3d78de1378f227 not found: ID does not exist" containerID="ccc252969b587208204ba72dccf573a70c7f0476b34b58dead3d78de1378f227" Feb 02 21:38:32 crc kubenswrapper[4789]: I0202 21:38:32.022957 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccc252969b587208204ba72dccf573a70c7f0476b34b58dead3d78de1378f227"} err="failed to get container status \"ccc252969b587208204ba72dccf573a70c7f0476b34b58dead3d78de1378f227\": rpc error: code = NotFound desc = could not find container \"ccc252969b587208204ba72dccf573a70c7f0476b34b58dead3d78de1378f227\": container with ID starting with ccc252969b587208204ba72dccf573a70c7f0476b34b58dead3d78de1378f227 not found: ID does not exist" Feb 02 21:38:32 crc kubenswrapper[4789]: I0202 21:38:32.023011 4789 scope.go:117] "RemoveContainer" containerID="019b82a2b12782ef8d4faf29b4c9e3630d94df82a7bd6e6f08efd28c8499221f" Feb 02 21:38:32 crc kubenswrapper[4789]: E0202 21:38:32.023516 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"019b82a2b12782ef8d4faf29b4c9e3630d94df82a7bd6e6f08efd28c8499221f\": container with ID starting with 019b82a2b12782ef8d4faf29b4c9e3630d94df82a7bd6e6f08efd28c8499221f not found: ID does not exist" 
containerID="019b82a2b12782ef8d4faf29b4c9e3630d94df82a7bd6e6f08efd28c8499221f" Feb 02 21:38:32 crc kubenswrapper[4789]: I0202 21:38:32.023544 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"019b82a2b12782ef8d4faf29b4c9e3630d94df82a7bd6e6f08efd28c8499221f"} err="failed to get container status \"019b82a2b12782ef8d4faf29b4c9e3630d94df82a7bd6e6f08efd28c8499221f\": rpc error: code = NotFound desc = could not find container \"019b82a2b12782ef8d4faf29b4c9e3630d94df82a7bd6e6f08efd28c8499221f\": container with ID starting with 019b82a2b12782ef8d4faf29b4c9e3630d94df82a7bd6e6f08efd28c8499221f not found: ID does not exist" Feb 02 21:38:32 crc kubenswrapper[4789]: I0202 21:38:32.049542 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/87f6bccb-d5fc-4868-aca2-734d16898805-etc-swift\") pod \"swift-storage-0\" (UID: \"87f6bccb-d5fc-4868-aca2-734d16898805\") " pod="openstack/swift-storage-0" Feb 02 21:38:32 crc kubenswrapper[4789]: E0202 21:38:32.050112 4789 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 02 21:38:32 crc kubenswrapper[4789]: E0202 21:38:32.050133 4789 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 02 21:38:32 crc kubenswrapper[4789]: E0202 21:38:32.050167 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87f6bccb-d5fc-4868-aca2-734d16898805-etc-swift podName:87f6bccb-d5fc-4868-aca2-734d16898805 nodeName:}" failed. No retries permitted until 2026-02-02 21:38:33.050153139 +0000 UTC m=+1133.345178158 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/87f6bccb-d5fc-4868-aca2-734d16898805-etc-swift") pod "swift-storage-0" (UID: "87f6bccb-d5fc-4868-aca2-734d16898805") : configmap "swift-ring-files" not found Feb 02 21:38:32 crc kubenswrapper[4789]: I0202 21:38:32.427823 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b7a7068-81a8-45bf-be5f-4a25a5a102e2" path="/var/lib/kubelet/pods/0b7a7068-81a8-45bf-be5f-4a25a5a102e2/volumes" Feb 02 21:38:32 crc kubenswrapper[4789]: I0202 21:38:32.890557 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-d9x7g" event={"ID":"397ad5b0-86b4-40b7-b8c3-bceb1d8aa470","Type":"ContainerStarted","Data":"8bd4bcf7161f891ce9bf00e247effacfa5f70a433ffc16ebe79845ab753e6f9e"} Feb 02 21:38:32 crc kubenswrapper[4789]: I0202 21:38:32.890852 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-d9x7g" Feb 02 21:38:32 crc kubenswrapper[4789]: I0202 21:38:32.911376 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-d9x7g" podStartSLOduration=2.911357678 podStartE2EDuration="2.911357678s" podCreationTimestamp="2026-02-02 21:38:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:38:32.906949843 +0000 UTC m=+1133.201974862" watchObservedRunningTime="2026-02-02 21:38:32.911357678 +0000 UTC m=+1133.206382697" Feb 02 21:38:33 crc kubenswrapper[4789]: I0202 21:38:33.065133 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/87f6bccb-d5fc-4868-aca2-734d16898805-etc-swift\") pod \"swift-storage-0\" (UID: \"87f6bccb-d5fc-4868-aca2-734d16898805\") " pod="openstack/swift-storage-0" Feb 02 21:38:33 crc kubenswrapper[4789]: E0202 21:38:33.065296 4789 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 02 21:38:33 crc kubenswrapper[4789]: E0202 21:38:33.065313 4789 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 02 21:38:33 crc kubenswrapper[4789]: E0202 21:38:33.065353 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87f6bccb-d5fc-4868-aca2-734d16898805-etc-swift podName:87f6bccb-d5fc-4868-aca2-734d16898805 nodeName:}" failed. No retries permitted until 2026-02-02 21:38:35.065339844 +0000 UTC m=+1135.360364863 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/87f6bccb-d5fc-4868-aca2-734d16898805-etc-swift") pod "swift-storage-0" (UID: "87f6bccb-d5fc-4868-aca2-734d16898805") : configmap "swift-ring-files" not found Feb 02 21:38:33 crc kubenswrapper[4789]: I0202 21:38:33.289790 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 02 21:38:33 crc kubenswrapper[4789]: I0202 21:38:33.374005 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 02 21:38:34 crc kubenswrapper[4789]: I0202 21:38:34.032101 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-bf56-account-create-update-dp5x9"] Feb 02 21:38:34 crc kubenswrapper[4789]: I0202 21:38:34.033463 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bf56-account-create-update-dp5x9" Feb 02 21:38:34 crc kubenswrapper[4789]: I0202 21:38:34.041305 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 02 21:38:34 crc kubenswrapper[4789]: I0202 21:38:34.052740 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-bf56-account-create-update-dp5x9"] Feb 02 21:38:34 crc kubenswrapper[4789]: I0202 21:38:34.070387 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-25sxm"] Feb 02 21:38:34 crc kubenswrapper[4789]: I0202 21:38:34.071564 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-25sxm" Feb 02 21:38:34 crc kubenswrapper[4789]: I0202 21:38:34.077970 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-25sxm"] Feb 02 21:38:34 crc kubenswrapper[4789]: I0202 21:38:34.183311 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnnmk\" (UniqueName: \"kubernetes.io/projected/ce78c9ad-cbd4-4761-8485-af675e18d85a-kube-api-access-tnnmk\") pod \"glance-db-create-25sxm\" (UID: \"ce78c9ad-cbd4-4761-8485-af675e18d85a\") " pod="openstack/glance-db-create-25sxm" Feb 02 21:38:34 crc kubenswrapper[4789]: I0202 21:38:34.183568 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e31a5d32-604d-4e80-a9f2-0f7f8f3bd48a-operator-scripts\") pod \"glance-bf56-account-create-update-dp5x9\" (UID: \"e31a5d32-604d-4e80-a9f2-0f7f8f3bd48a\") " pod="openstack/glance-bf56-account-create-update-dp5x9" Feb 02 21:38:34 crc kubenswrapper[4789]: I0202 21:38:34.183662 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce78c9ad-cbd4-4761-8485-af675e18d85a-operator-scripts\") pod \"glance-db-create-25sxm\" (UID: \"ce78c9ad-cbd4-4761-8485-af675e18d85a\") " pod="openstack/glance-db-create-25sxm" Feb 02 21:38:34 crc kubenswrapper[4789]: I0202 21:38:34.183769 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57v6h\" (UniqueName: \"kubernetes.io/projected/e31a5d32-604d-4e80-a9f2-0f7f8f3bd48a-kube-api-access-57v6h\") pod \"glance-bf56-account-create-update-dp5x9\" (UID: \"e31a5d32-604d-4e80-a9f2-0f7f8f3bd48a\") " pod="openstack/glance-bf56-account-create-update-dp5x9" Feb 02 21:38:34 crc kubenswrapper[4789]: I0202 21:38:34.285795 4789 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e31a5d32-604d-4e80-a9f2-0f7f8f3bd48a-operator-scripts\") pod \"glance-bf56-account-create-update-dp5x9\" (UID: \"e31a5d32-604d-4e80-a9f2-0f7f8f3bd48a\") " pod="openstack/glance-bf56-account-create-update-dp5x9" Feb 02 21:38:34 crc kubenswrapper[4789]: I0202 21:38:34.286179 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce78c9ad-cbd4-4761-8485-af675e18d85a-operator-scripts\") pod \"glance-db-create-25sxm\" (UID: \"ce78c9ad-cbd4-4761-8485-af675e18d85a\") " pod="openstack/glance-db-create-25sxm" Feb 02 21:38:34 crc kubenswrapper[4789]: I0202 21:38:34.286231 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57v6h\" (UniqueName: \"kubernetes.io/projected/e31a5d32-604d-4e80-a9f2-0f7f8f3bd48a-kube-api-access-57v6h\") pod \"glance-bf56-account-create-update-dp5x9\" (UID: \"e31a5d32-604d-4e80-a9f2-0f7f8f3bd48a\") " pod="openstack/glance-bf56-account-create-update-dp5x9" Feb 02 21:38:34 crc kubenswrapper[4789]: I0202 21:38:34.286313 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnnmk\" (UniqueName: \"kubernetes.io/projected/ce78c9ad-cbd4-4761-8485-af675e18d85a-kube-api-access-tnnmk\") pod \"glance-db-create-25sxm\" (UID: \"ce78c9ad-cbd4-4761-8485-af675e18d85a\") " pod="openstack/glance-db-create-25sxm" Feb 02 21:38:34 crc kubenswrapper[4789]: I0202 21:38:34.286920 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce78c9ad-cbd4-4761-8485-af675e18d85a-operator-scripts\") pod \"glance-db-create-25sxm\" (UID: \"ce78c9ad-cbd4-4761-8485-af675e18d85a\") " pod="openstack/glance-db-create-25sxm" Feb 02 21:38:34 crc kubenswrapper[4789]: I0202 21:38:34.286947 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e31a5d32-604d-4e80-a9f2-0f7f8f3bd48a-operator-scripts\") pod \"glance-bf56-account-create-update-dp5x9\" (UID: \"e31a5d32-604d-4e80-a9f2-0f7f8f3bd48a\") " pod="openstack/glance-bf56-account-create-update-dp5x9" Feb 02 21:38:34 crc kubenswrapper[4789]: I0202 21:38:34.311356 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnnmk\" (UniqueName: \"kubernetes.io/projected/ce78c9ad-cbd4-4761-8485-af675e18d85a-kube-api-access-tnnmk\") pod \"glance-db-create-25sxm\" (UID: \"ce78c9ad-cbd4-4761-8485-af675e18d85a\") " pod="openstack/glance-db-create-25sxm" Feb 02 21:38:34 crc kubenswrapper[4789]: I0202 21:38:34.319466 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57v6h\" (UniqueName: \"kubernetes.io/projected/e31a5d32-604d-4e80-a9f2-0f7f8f3bd48a-kube-api-access-57v6h\") pod \"glance-bf56-account-create-update-dp5x9\" (UID: \"e31a5d32-604d-4e80-a9f2-0f7f8f3bd48a\") " pod="openstack/glance-bf56-account-create-update-dp5x9" Feb 02 21:38:34 crc kubenswrapper[4789]: I0202 21:38:34.360520 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bf56-account-create-update-dp5x9" Feb 02 21:38:34 crc kubenswrapper[4789]: I0202 21:38:34.399186 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-25sxm" Feb 02 21:38:34 crc kubenswrapper[4789]: I0202 21:38:34.858403 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-25sxm"] Feb 02 21:38:34 crc kubenswrapper[4789]: I0202 21:38:34.908028 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-25sxm" event={"ID":"ce78c9ad-cbd4-4761-8485-af675e18d85a","Type":"ContainerStarted","Data":"da2e29a65a08bbcdd44b8b821d026ff485e9a39b024bdaec315f463209cab90f"} Feb 02 21:38:34 crc kubenswrapper[4789]: I0202 21:38:34.934866 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-bf56-account-create-update-dp5x9"] Feb 02 21:38:34 crc kubenswrapper[4789]: W0202 21:38:34.940294 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode31a5d32_604d_4e80_a9f2_0f7f8f3bd48a.slice/crio-80b15dbe2543d4e1b04d5197647f7c5ac7d509e27e8f86eba0f20190619dde0e WatchSource:0}: Error finding container 80b15dbe2543d4e1b04d5197647f7c5ac7d509e27e8f86eba0f20190619dde0e: Status 404 returned error can't find the container with id 80b15dbe2543d4e1b04d5197647f7c5ac7d509e27e8f86eba0f20190619dde0e Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.104097 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/87f6bccb-d5fc-4868-aca2-734d16898805-etc-swift\") pod \"swift-storage-0\" (UID: \"87f6bccb-d5fc-4868-aca2-734d16898805\") " pod="openstack/swift-storage-0" Feb 02 21:38:35 crc kubenswrapper[4789]: E0202 21:38:35.104351 4789 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 02 21:38:35 crc kubenswrapper[4789]: E0202 21:38:35.104388 4789 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 02 21:38:35 crc kubenswrapper[4789]: E0202 21:38:35.104454 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87f6bccb-d5fc-4868-aca2-734d16898805-etc-swift podName:87f6bccb-d5fc-4868-aca2-734d16898805 nodeName:}" failed. No retries permitted until 2026-02-02 21:38:39.104428593 +0000 UTC m=+1139.399453642 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/87f6bccb-d5fc-4868-aca2-734d16898805-etc-swift") pod "swift-storage-0" (UID: "87f6bccb-d5fc-4868-aca2-734d16898805") : configmap "swift-ring-files" not found Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.387154 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-q8pr6"] Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.388749 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-q8pr6" Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.390763 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.390916 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.391289 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.401312 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-q8pr6"] Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.516862 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a55a234d-1af7-4e73-8f93-b614162be0c3-combined-ca-bundle\") pod \"swift-ring-rebalance-q8pr6\" (UID: \"a55a234d-1af7-4e73-8f93-b614162be0c3\") " pod="openstack/swift-ring-rebalance-q8pr6" Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.516915 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a55a234d-1af7-4e73-8f93-b614162be0c3-swiftconf\") pod \"swift-ring-rebalance-q8pr6\" (UID: \"a55a234d-1af7-4e73-8f93-b614162be0c3\") " pod="openstack/swift-ring-rebalance-q8pr6" Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.516991 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqmd2\" (UniqueName: \"kubernetes.io/projected/a55a234d-1af7-4e73-8f93-b614162be0c3-kube-api-access-bqmd2\") pod \"swift-ring-rebalance-q8pr6\" (UID: \"a55a234d-1af7-4e73-8f93-b614162be0c3\") " pod="openstack/swift-ring-rebalance-q8pr6" Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.517017 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a55a234d-1af7-4e73-8f93-b614162be0c3-ring-data-devices\") pod \"swift-ring-rebalance-q8pr6\" (UID: \"a55a234d-1af7-4e73-8f93-b614162be0c3\") " pod="openstack/swift-ring-rebalance-q8pr6" Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.517046 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a55a234d-1af7-4e73-8f93-b614162be0c3-etc-swift\") pod \"swift-ring-rebalance-q8pr6\" (UID: \"a55a234d-1af7-4e73-8f93-b614162be0c3\") " pod="openstack/swift-ring-rebalance-q8pr6" Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.517068 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a55a234d-1af7-4e73-8f93-b614162be0c3-dispersionconf\") pod \"swift-ring-rebalance-q8pr6\" (UID: \"a55a234d-1af7-4e73-8f93-b614162be0c3\") " pod="openstack/swift-ring-rebalance-q8pr6" Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.517211 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a55a234d-1af7-4e73-8f93-b614162be0c3-scripts\") pod \"swift-ring-rebalance-q8pr6\" (UID: \"a55a234d-1af7-4e73-8f93-b614162be0c3\") " pod="openstack/swift-ring-rebalance-q8pr6" Feb 02 
21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.619651 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqmd2\" (UniqueName: \"kubernetes.io/projected/a55a234d-1af7-4e73-8f93-b614162be0c3-kube-api-access-bqmd2\") pod \"swift-ring-rebalance-q8pr6\" (UID: \"a55a234d-1af7-4e73-8f93-b614162be0c3\") " pod="openstack/swift-ring-rebalance-q8pr6" Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.619720 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a55a234d-1af7-4e73-8f93-b614162be0c3-ring-data-devices\") pod \"swift-ring-rebalance-q8pr6\" (UID: \"a55a234d-1af7-4e73-8f93-b614162be0c3\") " pod="openstack/swift-ring-rebalance-q8pr6" Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.619777 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a55a234d-1af7-4e73-8f93-b614162be0c3-etc-swift\") pod \"swift-ring-rebalance-q8pr6\" (UID: \"a55a234d-1af7-4e73-8f93-b614162be0c3\") " pod="openstack/swift-ring-rebalance-q8pr6" Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.619821 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a55a234d-1af7-4e73-8f93-b614162be0c3-dispersionconf\") pod \"swift-ring-rebalance-q8pr6\" (UID: \"a55a234d-1af7-4e73-8f93-b614162be0c3\") " pod="openstack/swift-ring-rebalance-q8pr6" Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.620029 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a55a234d-1af7-4e73-8f93-b614162be0c3-scripts\") pod \"swift-ring-rebalance-q8pr6\" (UID: \"a55a234d-1af7-4e73-8f93-b614162be0c3\") " pod="openstack/swift-ring-rebalance-q8pr6" Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.620064 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a55a234d-1af7-4e73-8f93-b614162be0c3-combined-ca-bundle\") pod \"swift-ring-rebalance-q8pr6\" (UID: \"a55a234d-1af7-4e73-8f93-b614162be0c3\") " pod="openstack/swift-ring-rebalance-q8pr6" Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.620154 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a55a234d-1af7-4e73-8f93-b614162be0c3-swiftconf\") pod \"swift-ring-rebalance-q8pr6\" (UID: \"a55a234d-1af7-4e73-8f93-b614162be0c3\") " pod="openstack/swift-ring-rebalance-q8pr6" Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.620789 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a55a234d-1af7-4e73-8f93-b614162be0c3-ring-data-devices\") pod \"swift-ring-rebalance-q8pr6\" (UID: \"a55a234d-1af7-4e73-8f93-b614162be0c3\") " pod="openstack/swift-ring-rebalance-q8pr6" Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.621497 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a55a234d-1af7-4e73-8f93-b614162be0c3-scripts\") pod \"swift-ring-rebalance-q8pr6\" (UID: \"a55a234d-1af7-4e73-8f93-b614162be0c3\") " pod="openstack/swift-ring-rebalance-q8pr6" Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.621919 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a55a234d-1af7-4e73-8f93-b614162be0c3-etc-swift\") pod \"swift-ring-rebalance-q8pr6\" (UID: \"a55a234d-1af7-4e73-8f93-b614162be0c3\") " pod="openstack/swift-ring-rebalance-q8pr6" Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.627114 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a55a234d-1af7-4e73-8f93-b614162be0c3-combined-ca-bundle\") pod \"swift-ring-rebalance-q8pr6\" (UID: \"a55a234d-1af7-4e73-8f93-b614162be0c3\") " pod="openstack/swift-ring-rebalance-q8pr6" Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.634019 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a55a234d-1af7-4e73-8f93-b614162be0c3-dispersionconf\") pod \"swift-ring-rebalance-q8pr6\" (UID: \"a55a234d-1af7-4e73-8f93-b614162be0c3\") " pod="openstack/swift-ring-rebalance-q8pr6" Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.634921 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a55a234d-1af7-4e73-8f93-b614162be0c3-swiftconf\") pod \"swift-ring-rebalance-q8pr6\" (UID: \"a55a234d-1af7-4e73-8f93-b614162be0c3\") " pod="openstack/swift-ring-rebalance-q8pr6" Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.642371 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqmd2\" (UniqueName: \"kubernetes.io/projected/a55a234d-1af7-4e73-8f93-b614162be0c3-kube-api-access-bqmd2\") pod \"swift-ring-rebalance-q8pr6\" (UID: \"a55a234d-1af7-4e73-8f93-b614162be0c3\") " pod="openstack/swift-ring-rebalance-q8pr6" Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.669967 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-6xcwz"] Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.671231 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6xcwz" Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.673871 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.678375 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6xcwz"] Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.743632 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-q8pr6" Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.822942 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6n4d\" (UniqueName: \"kubernetes.io/projected/96c8644c-3d61-4da2-91ef-d668da7e01b9-kube-api-access-z6n4d\") pod \"root-account-create-update-6xcwz\" (UID: \"96c8644c-3d61-4da2-91ef-d668da7e01b9\") " pod="openstack/root-account-create-update-6xcwz" Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.823202 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96c8644c-3d61-4da2-91ef-d668da7e01b9-operator-scripts\") pod \"root-account-create-update-6xcwz\" (UID: \"96c8644c-3d61-4da2-91ef-d668da7e01b9\") " pod="openstack/root-account-create-update-6xcwz" Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.929224 4789 generic.go:334] "Generic (PLEG): container finished" podID="e31a5d32-604d-4e80-a9f2-0f7f8f3bd48a" containerID="4117429bd46f62e85af19b47cebd37c852b76015feb8cc2c979245ca7a597def" exitCode=0 Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.929336 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bf56-account-create-update-dp5x9" event={"ID":"e31a5d32-604d-4e80-a9f2-0f7f8f3bd48a","Type":"ContainerDied","Data":"4117429bd46f62e85af19b47cebd37c852b76015feb8cc2c979245ca7a597def"} Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.929688 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bf56-account-create-update-dp5x9" event={"ID":"e31a5d32-604d-4e80-a9f2-0f7f8f3bd48a","Type":"ContainerStarted","Data":"80b15dbe2543d4e1b04d5197647f7c5ac7d509e27e8f86eba0f20190619dde0e"} Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.929457 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96c8644c-3d61-4da2-91ef-d668da7e01b9-operator-scripts\") pod \"root-account-create-update-6xcwz\" (UID: \"96c8644c-3d61-4da2-91ef-d668da7e01b9\") " pod="openstack/root-account-create-update-6xcwz" Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.930119 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6n4d\" (UniqueName: \"kubernetes.io/projected/96c8644c-3d61-4da2-91ef-d668da7e01b9-kube-api-access-z6n4d\") pod \"root-account-create-update-6xcwz\" (UID: \"96c8644c-3d61-4da2-91ef-d668da7e01b9\") " pod="openstack/root-account-create-update-6xcwz" Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.930444 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96c8644c-3d61-4da2-91ef-d668da7e01b9-operator-scripts\") pod \"root-account-create-update-6xcwz\" (UID: \"96c8644c-3d61-4da2-91ef-d668da7e01b9\") " pod="openstack/root-account-create-update-6xcwz" Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.940463 4789 generic.go:334] "Generic (PLEG): container finished" podID="ce78c9ad-cbd4-4761-8485-af675e18d85a" containerID="aa30436da3f9aef978f1a1d46087f72ad354d7c56731802c59d834777a580e9a" exitCode=0 Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.940517 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-25sxm" 
event={"ID":"ce78c9ad-cbd4-4761-8485-af675e18d85a","Type":"ContainerDied","Data":"aa30436da3f9aef978f1a1d46087f72ad354d7c56731802c59d834777a580e9a"} Feb 02 21:38:35 crc kubenswrapper[4789]: I0202 21:38:35.955252 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6n4d\" (UniqueName: \"kubernetes.io/projected/96c8644c-3d61-4da2-91ef-d668da7e01b9-kube-api-access-z6n4d\") pod \"root-account-create-update-6xcwz\" (UID: \"96c8644c-3d61-4da2-91ef-d668da7e01b9\") " pod="openstack/root-account-create-update-6xcwz" Feb 02 21:38:36 crc kubenswrapper[4789]: I0202 21:38:36.027533 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6xcwz" Feb 02 21:38:36 crc kubenswrapper[4789]: I0202 21:38:36.214017 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-q8pr6"] Feb 02 21:38:36 crc kubenswrapper[4789]: W0202 21:38:36.223462 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda55a234d_1af7_4e73_8f93_b614162be0c3.slice/crio-e025359b1fbebbafbaff4cad6f83eeeae6325f1acc25745ecbd27a56dd61d625 WatchSource:0}: Error finding container e025359b1fbebbafbaff4cad6f83eeeae6325f1acc25745ecbd27a56dd61d625: Status 404 returned error can't find the container with id e025359b1fbebbafbaff4cad6f83eeeae6325f1acc25745ecbd27a56dd61d625 Feb 02 21:38:36 crc kubenswrapper[4789]: I0202 21:38:36.516908 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6xcwz"] Feb 02 21:38:36 crc kubenswrapper[4789]: I0202 21:38:36.951667 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-q8pr6" event={"ID":"a55a234d-1af7-4e73-8f93-b614162be0c3","Type":"ContainerStarted","Data":"e025359b1fbebbafbaff4cad6f83eeeae6325f1acc25745ecbd27a56dd61d625"} Feb 02 21:38:36 crc kubenswrapper[4789]: I0202 21:38:36.954065 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6xcwz" event={"ID":"96c8644c-3d61-4da2-91ef-d668da7e01b9","Type":"ContainerStarted","Data":"47bf2be39b5dad7a475d59f7c4179d5a96aa340d7f1de5e0bc249d3d76a4661b"} Feb 02 21:38:36 crc kubenswrapper[4789]: I0202 21:38:36.954110 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6xcwz" event={"ID":"96c8644c-3d61-4da2-91ef-d668da7e01b9","Type":"ContainerStarted","Data":"38fe1307b47aa9c07dc63428e24a7342c484150858057a0a775f330f9ae24130"} Feb 02 21:38:37 crc kubenswrapper[4789]: I0202 21:38:37.965754 4789 generic.go:334] "Generic (PLEG): container finished" podID="96c8644c-3d61-4da2-91ef-d668da7e01b9" containerID="47bf2be39b5dad7a475d59f7c4179d5a96aa340d7f1de5e0bc249d3d76a4661b" exitCode=0 Feb 02 21:38:37 crc kubenswrapper[4789]: I0202 21:38:37.965879 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6xcwz" event={"ID":"96c8644c-3d61-4da2-91ef-d668da7e01b9","Type":"ContainerDied","Data":"47bf2be39b5dad7a475d59f7c4179d5a96aa340d7f1de5e0bc249d3d76a4661b"} Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.146501 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-bf56-account-create-update-dp5x9" Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.208495 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57v6h\" (UniqueName: \"kubernetes.io/projected/e31a5d32-604d-4e80-a9f2-0f7f8f3bd48a-kube-api-access-57v6h\") pod \"e31a5d32-604d-4e80-a9f2-0f7f8f3bd48a\" (UID: \"e31a5d32-604d-4e80-a9f2-0f7f8f3bd48a\") " Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.208650 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e31a5d32-604d-4e80-a9f2-0f7f8f3bd48a-operator-scripts\") pod \"e31a5d32-604d-4e80-a9f2-0f7f8f3bd48a\" (UID: \"e31a5d32-604d-4e80-a9f2-0f7f8f3bd48a\") " Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.209468 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e31a5d32-604d-4e80-a9f2-0f7f8f3bd48a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e31a5d32-604d-4e80-a9f2-0f7f8f3bd48a" (UID: "e31a5d32-604d-4e80-a9f2-0f7f8f3bd48a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.215640 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e31a5d32-604d-4e80-a9f2-0f7f8f3bd48a-kube-api-access-57v6h" (OuterVolumeSpecName: "kube-api-access-57v6h") pod "e31a5d32-604d-4e80-a9f2-0f7f8f3bd48a" (UID: "e31a5d32-604d-4e80-a9f2-0f7f8f3bd48a"). InnerVolumeSpecName "kube-api-access-57v6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.310854 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-wrn6h"] Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.310920 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e31a5d32-604d-4e80-a9f2-0f7f8f3bd48a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.310946 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57v6h\" (UniqueName: \"kubernetes.io/projected/e31a5d32-604d-4e80-a9f2-0f7f8f3bd48a-kube-api-access-57v6h\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:38 crc kubenswrapper[4789]: E0202 21:38:38.311233 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e31a5d32-604d-4e80-a9f2-0f7f8f3bd48a" containerName="mariadb-account-create-update" Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.311254 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="e31a5d32-604d-4e80-a9f2-0f7f8f3bd48a" containerName="mariadb-account-create-update" Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.311445 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="e31a5d32-604d-4e80-a9f2-0f7f8f3bd48a" containerName="mariadb-account-create-update" Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.312081 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-wrn6h" Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.321005 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-wrn6h"] Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.412711 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/743bffd7-f479-4b98-8cd6-9714dfcfeab1-operator-scripts\") pod \"keystone-db-create-wrn6h\" (UID: \"743bffd7-f479-4b98-8cd6-9714dfcfeab1\") " pod="openstack/keystone-db-create-wrn6h" Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.412787 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ggcv\" (UniqueName: \"kubernetes.io/projected/743bffd7-f479-4b98-8cd6-9714dfcfeab1-kube-api-access-6ggcv\") pod \"keystone-db-create-wrn6h\" (UID: \"743bffd7-f479-4b98-8cd6-9714dfcfeab1\") " pod="openstack/keystone-db-create-wrn6h" Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.434785 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-4dc6-account-create-update-bf2l2"] Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.435783 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-4dc6-account-create-update-bf2l2"] Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.435889 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4dc6-account-create-update-bf2l2" Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.438231 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.514757 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27d741b5-10a4-4acb-b4b8-cf06f35a66f2-operator-scripts\") pod \"keystone-4dc6-account-create-update-bf2l2\" (UID: \"27d741b5-10a4-4acb-b4b8-cf06f35a66f2\") " pod="openstack/keystone-4dc6-account-create-update-bf2l2" Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.515019 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-626jt\" (UniqueName: \"kubernetes.io/projected/27d741b5-10a4-4acb-b4b8-cf06f35a66f2-kube-api-access-626jt\") pod \"keystone-4dc6-account-create-update-bf2l2\" (UID: \"27d741b5-10a4-4acb-b4b8-cf06f35a66f2\") " pod="openstack/keystone-4dc6-account-create-update-bf2l2" Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.515140 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/743bffd7-f479-4b98-8cd6-9714dfcfeab1-operator-scripts\") pod \"keystone-db-create-wrn6h\" (UID: \"743bffd7-f479-4b98-8cd6-9714dfcfeab1\") " pod="openstack/keystone-db-create-wrn6h" Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.515212 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ggcv\" (UniqueName: \"kubernetes.io/projected/743bffd7-f479-4b98-8cd6-9714dfcfeab1-kube-api-access-6ggcv\") pod \"keystone-db-create-wrn6h\" (UID: \"743bffd7-f479-4b98-8cd6-9714dfcfeab1\") " pod="openstack/keystone-db-create-wrn6h" Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.515748 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/743bffd7-f479-4b98-8cd6-9714dfcfeab1-operator-scripts\") pod \"keystone-db-create-wrn6h\" (UID: \"743bffd7-f479-4b98-8cd6-9714dfcfeab1\") " pod="openstack/keystone-db-create-wrn6h" Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.530016 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ggcv\" (UniqueName: \"kubernetes.io/projected/743bffd7-f479-4b98-8cd6-9714dfcfeab1-kube-api-access-6ggcv\") pod \"keystone-db-create-wrn6h\" (UID: \"743bffd7-f479-4b98-8cd6-9714dfcfeab1\") " pod="openstack/keystone-db-create-wrn6h" Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.616513 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27d741b5-10a4-4acb-b4b8-cf06f35a66f2-operator-scripts\") pod \"keystone-4dc6-account-create-update-bf2l2\" (UID: \"27d741b5-10a4-4acb-b4b8-cf06f35a66f2\") " pod="openstack/keystone-4dc6-account-create-update-bf2l2" Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.616664 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-626jt\" (UniqueName: \"kubernetes.io/projected/27d741b5-10a4-4acb-b4b8-cf06f35a66f2-kube-api-access-626jt\") pod \"keystone-4dc6-account-create-update-bf2l2\" (UID: \"27d741b5-10a4-4acb-b4b8-cf06f35a66f2\") " pod="openstack/keystone-4dc6-account-create-update-bf2l2" Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.617429 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27d741b5-10a4-4acb-b4b8-cf06f35a66f2-operator-scripts\") pod \"keystone-4dc6-account-create-update-bf2l2\" (UID: \"27d741b5-10a4-4acb-b4b8-cf06f35a66f2\") " pod="openstack/keystone-4dc6-account-create-update-bf2l2" Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.631942 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-qvp7v"] Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.633006 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-qvp7v" Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.641717 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-qvp7v"] Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.647836 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-626jt\" (UniqueName: \"kubernetes.io/projected/27d741b5-10a4-4acb-b4b8-cf06f35a66f2-kube-api-access-626jt\") pod \"keystone-4dc6-account-create-update-bf2l2\" (UID: \"27d741b5-10a4-4acb-b4b8-cf06f35a66f2\") " pod="openstack/keystone-4dc6-account-create-update-bf2l2" Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.678016 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-wrn6h" Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.742156 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-32d6-account-create-update-5tgfk"] Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.743186 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-32d6-account-create-update-5tgfk" Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.745052 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.748657 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-32d6-account-create-update-5tgfk"] Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.761671 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4dc6-account-create-update-bf2l2" Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.820492 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d80bd6c-a1d7-4ee6-bc8c-8e534d7bfa1a-operator-scripts\") pod \"placement-db-create-qvp7v\" (UID: \"7d80bd6c-a1d7-4ee6-bc8c-8e534d7bfa1a\") " pod="openstack/placement-db-create-qvp7v" Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.820625 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-872j2\" (UniqueName: \"kubernetes.io/projected/7d80bd6c-a1d7-4ee6-bc8c-8e534d7bfa1a-kube-api-access-872j2\") pod \"placement-db-create-qvp7v\" (UID: \"7d80bd6c-a1d7-4ee6-bc8c-8e534d7bfa1a\") " pod="openstack/placement-db-create-qvp7v" Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.922613 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d80bd6c-a1d7-4ee6-bc8c-8e534d7bfa1a-operator-scripts\") pod \"placement-db-create-qvp7v\" (UID: \"7d80bd6c-a1d7-4ee6-bc8c-8e534d7bfa1a\") " pod="openstack/placement-db-create-qvp7v" Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.922737 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6c2t\" (UniqueName: \"kubernetes.io/projected/2c632452-0823-4b9b-9eaf-b8e9da3084c9-kube-api-access-h6c2t\") pod \"placement-32d6-account-create-update-5tgfk\" (UID: \"2c632452-0823-4b9b-9eaf-b8e9da3084c9\") " pod="openstack/placement-32d6-account-create-update-5tgfk" Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.922860 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-872j2\" (UniqueName: \"kubernetes.io/projected/7d80bd6c-a1d7-4ee6-bc8c-8e534d7bfa1a-kube-api-access-872j2\") pod \"placement-db-create-qvp7v\" (UID: \"7d80bd6c-a1d7-4ee6-bc8c-8e534d7bfa1a\") " pod="openstack/placement-db-create-qvp7v" Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.922959 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c632452-0823-4b9b-9eaf-b8e9da3084c9-operator-scripts\") pod \"placement-32d6-account-create-update-5tgfk\" (UID: \"2c632452-0823-4b9b-9eaf-b8e9da3084c9\") " pod="openstack/placement-32d6-account-create-update-5tgfk" Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.923694 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d80bd6c-a1d7-4ee6-bc8c-8e534d7bfa1a-operator-scripts\") pod \"placement-db-create-qvp7v\" (UID: \"7d80bd6c-a1d7-4ee6-bc8c-8e534d7bfa1a\") " pod="openstack/placement-db-create-qvp7v" Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 
21:38:38.939055 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-872j2\" (UniqueName: \"kubernetes.io/projected/7d80bd6c-a1d7-4ee6-bc8c-8e534d7bfa1a-kube-api-access-872j2\") pod \"placement-db-create-qvp7v\" (UID: \"7d80bd6c-a1d7-4ee6-bc8c-8e534d7bfa1a\") " pod="openstack/placement-db-create-qvp7v" Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.975948 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bf56-account-create-update-dp5x9" event={"ID":"e31a5d32-604d-4e80-a9f2-0f7f8f3bd48a","Type":"ContainerDied","Data":"80b15dbe2543d4e1b04d5197647f7c5ac7d509e27e8f86eba0f20190619dde0e"} Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.977065 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80b15dbe2543d4e1b04d5197647f7c5ac7d509e27e8f86eba0f20190619dde0e" Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.977211 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bf56-account-create-update-dp5x9" Feb 02 21:38:38 crc kubenswrapper[4789]: I0202 21:38:38.988764 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-qvp7v" Feb 02 21:38:39 crc kubenswrapper[4789]: I0202 21:38:39.024992 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c632452-0823-4b9b-9eaf-b8e9da3084c9-operator-scripts\") pod \"placement-32d6-account-create-update-5tgfk\" (UID: \"2c632452-0823-4b9b-9eaf-b8e9da3084c9\") " pod="openstack/placement-32d6-account-create-update-5tgfk" Feb 02 21:38:39 crc kubenswrapper[4789]: I0202 21:38:39.025175 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6c2t\" (UniqueName: \"kubernetes.io/projected/2c632452-0823-4b9b-9eaf-b8e9da3084c9-kube-api-access-h6c2t\") pod \"placement-32d6-account-create-update-5tgfk\" (UID: \"2c632452-0823-4b9b-9eaf-b8e9da3084c9\") " pod="openstack/placement-32d6-account-create-update-5tgfk" Feb 02 21:38:39 crc kubenswrapper[4789]: I0202 21:38:39.026362 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c632452-0823-4b9b-9eaf-b8e9da3084c9-operator-scripts\") pod \"placement-32d6-account-create-update-5tgfk\" (UID: \"2c632452-0823-4b9b-9eaf-b8e9da3084c9\") " pod="openstack/placement-32d6-account-create-update-5tgfk" Feb 02 21:38:39 crc kubenswrapper[4789]: I0202 21:38:39.041376 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6c2t\" (UniqueName: \"kubernetes.io/projected/2c632452-0823-4b9b-9eaf-b8e9da3084c9-kube-api-access-h6c2t\") pod \"placement-32d6-account-create-update-5tgfk\" (UID: \"2c632452-0823-4b9b-9eaf-b8e9da3084c9\") " pod="openstack/placement-32d6-account-create-update-5tgfk" Feb 02 21:38:39 crc kubenswrapper[4789]: I0202 21:38:39.059951 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-32d6-account-create-update-5tgfk" Feb 02 21:38:39 crc kubenswrapper[4789]: I0202 21:38:39.126204 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/87f6bccb-d5fc-4868-aca2-734d16898805-etc-swift\") pod \"swift-storage-0\" (UID: \"87f6bccb-d5fc-4868-aca2-734d16898805\") " pod="openstack/swift-storage-0" Feb 02 21:38:39 crc kubenswrapper[4789]: E0202 21:38:39.126371 4789 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 02 21:38:39 crc kubenswrapper[4789]: E0202 21:38:39.126386 4789 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 02 21:38:39 crc kubenswrapper[4789]: E0202 21:38:39.126426 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87f6bccb-d5fc-4868-aca2-734d16898805-etc-swift podName:87f6bccb-d5fc-4868-aca2-734d16898805 nodeName:}" failed. No retries permitted until 2026-02-02 21:38:47.126413513 +0000 UTC m=+1147.421438532 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/87f6bccb-d5fc-4868-aca2-734d16898805-etc-swift") pod "swift-storage-0" (UID: "87f6bccb-d5fc-4868-aca2-734d16898805") : configmap "swift-ring-files" not found Feb 02 21:38:39 crc kubenswrapper[4789]: I0202 21:38:39.699763 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-25sxm" Feb 02 21:38:39 crc kubenswrapper[4789]: I0202 21:38:39.730477 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6xcwz" Feb 02 21:38:39 crc kubenswrapper[4789]: I0202 21:38:39.845325 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6n4d\" (UniqueName: \"kubernetes.io/projected/96c8644c-3d61-4da2-91ef-d668da7e01b9-kube-api-access-z6n4d\") pod \"96c8644c-3d61-4da2-91ef-d668da7e01b9\" (UID: \"96c8644c-3d61-4da2-91ef-d668da7e01b9\") " Feb 02 21:38:39 crc kubenswrapper[4789]: I0202 21:38:39.845415 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96c8644c-3d61-4da2-91ef-d668da7e01b9-operator-scripts\") pod \"96c8644c-3d61-4da2-91ef-d668da7e01b9\" (UID: \"96c8644c-3d61-4da2-91ef-d668da7e01b9\") " Feb 02 21:38:39 crc kubenswrapper[4789]: I0202 21:38:39.845533 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnnmk\" (UniqueName: \"kubernetes.io/projected/ce78c9ad-cbd4-4761-8485-af675e18d85a-kube-api-access-tnnmk\") pod \"ce78c9ad-cbd4-4761-8485-af675e18d85a\" (UID: \"ce78c9ad-cbd4-4761-8485-af675e18d85a\") " Feb 02 21:38:39 crc kubenswrapper[4789]: I0202 21:38:39.845603 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce78c9ad-cbd4-4761-8485-af675e18d85a-operator-scripts\") pod \"ce78c9ad-cbd4-4761-8485-af675e18d85a\" (UID: \"ce78c9ad-cbd4-4761-8485-af675e18d85a\") " Feb 02 21:38:39 crc kubenswrapper[4789]: I0202 21:38:39.846895 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce78c9ad-cbd4-4761-8485-af675e18d85a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"ce78c9ad-cbd4-4761-8485-af675e18d85a" (UID: "ce78c9ad-cbd4-4761-8485-af675e18d85a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:38:39 crc kubenswrapper[4789]: I0202 21:38:39.846916 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96c8644c-3d61-4da2-91ef-d668da7e01b9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "96c8644c-3d61-4da2-91ef-d668da7e01b9" (UID: "96c8644c-3d61-4da2-91ef-d668da7e01b9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:38:39 crc kubenswrapper[4789]: I0202 21:38:39.851452 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96c8644c-3d61-4da2-91ef-d668da7e01b9-kube-api-access-z6n4d" (OuterVolumeSpecName: "kube-api-access-z6n4d") pod "96c8644c-3d61-4da2-91ef-d668da7e01b9" (UID: "96c8644c-3d61-4da2-91ef-d668da7e01b9"). InnerVolumeSpecName "kube-api-access-z6n4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:38:39 crc kubenswrapper[4789]: I0202 21:38:39.851866 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce78c9ad-cbd4-4761-8485-af675e18d85a-kube-api-access-tnnmk" (OuterVolumeSpecName: "kube-api-access-tnnmk") pod "ce78c9ad-cbd4-4761-8485-af675e18d85a" (UID: "ce78c9ad-cbd4-4761-8485-af675e18d85a"). InnerVolumeSpecName "kube-api-access-tnnmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:38:39 crc kubenswrapper[4789]: I0202 21:38:39.947893 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnnmk\" (UniqueName: \"kubernetes.io/projected/ce78c9ad-cbd4-4761-8485-af675e18d85a-kube-api-access-tnnmk\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:39 crc kubenswrapper[4789]: I0202 21:38:39.948550 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce78c9ad-cbd4-4761-8485-af675e18d85a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:39 crc kubenswrapper[4789]: I0202 21:38:39.948611 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6n4d\" (UniqueName: \"kubernetes.io/projected/96c8644c-3d61-4da2-91ef-d668da7e01b9-kube-api-access-z6n4d\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:39 crc kubenswrapper[4789]: I0202 21:38:39.948628 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96c8644c-3d61-4da2-91ef-d668da7e01b9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:39 crc kubenswrapper[4789]: I0202 21:38:39.985995 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6xcwz" event={"ID":"96c8644c-3d61-4da2-91ef-d668da7e01b9","Type":"ContainerDied","Data":"38fe1307b47aa9c07dc63428e24a7342c484150858057a0a775f330f9ae24130"} Feb 02 21:38:39 crc kubenswrapper[4789]: I0202 21:38:39.986020 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-6xcwz" Feb 02 21:38:39 crc kubenswrapper[4789]: I0202 21:38:39.986039 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38fe1307b47aa9c07dc63428e24a7342c484150858057a0a775f330f9ae24130" Feb 02 21:38:39 crc kubenswrapper[4789]: I0202 21:38:39.988081 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-25sxm" event={"ID":"ce78c9ad-cbd4-4761-8485-af675e18d85a","Type":"ContainerDied","Data":"da2e29a65a08bbcdd44b8b821d026ff485e9a39b024bdaec315f463209cab90f"} Feb 02 21:38:39 crc kubenswrapper[4789]: I0202 21:38:39.988103 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da2e29a65a08bbcdd44b8b821d026ff485e9a39b024bdaec315f463209cab90f" Feb 02 21:38:39 crc kubenswrapper[4789]: I0202 21:38:39.988163 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-25sxm" Feb 02 21:38:40 crc kubenswrapper[4789]: I0202 21:38:40.077658 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-wrn6h"] Feb 02 21:38:40 crc kubenswrapper[4789]: I0202 21:38:40.086133 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-32d6-account-create-update-5tgfk"] Feb 02 21:38:40 crc kubenswrapper[4789]: W0202 21:38:40.087489 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d80bd6c_a1d7_4ee6_bc8c_8e534d7bfa1a.slice/crio-1383668cc60f70ba3bfa0f2d62f9ae95ccb8647e59fb2ec929730a79aa4c0bc2 WatchSource:0}: Error finding container 1383668cc60f70ba3bfa0f2d62f9ae95ccb8647e59fb2ec929730a79aa4c0bc2: Status 404 returned error can't find the container with id 1383668cc60f70ba3bfa0f2d62f9ae95ccb8647e59fb2ec929730a79aa4c0bc2 Feb 02 21:38:40 crc kubenswrapper[4789]: I0202 21:38:40.094676 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-qvp7v"] Feb 02 21:38:40 crc kubenswrapper[4789]: I0202 21:38:40.301054 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-4dc6-account-create-update-bf2l2"] Feb 02 21:38:40 crc kubenswrapper[4789]: I0202 21:38:40.708875 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-d9x7g" Feb 02 21:38:40 crc kubenswrapper[4789]: I0202 21:38:40.808819 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-l6jmk"] Feb 02 21:38:40 crc kubenswrapper[4789]: I0202 21:38:40.809107 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-l6jmk" podUID="cd6df5c8-3899-431d-b9cd-9a9f022160d7" containerName="dnsmasq-dns" containerID="cri-o://bf8bfb384c1266daf0508adff005c3a21ceb0dbea842b6f707f271a6f9ddf49f" gracePeriod=10 Feb 02 21:38:40 crc kubenswrapper[4789]: I0202 21:38:40.999376 4789 generic.go:334] "Generic (PLEG): container finished" podID="cd6df5c8-3899-431d-b9cd-9a9f022160d7" containerID="bf8bfb384c1266daf0508adff005c3a21ceb0dbea842b6f707f271a6f9ddf49f" exitCode=0 Feb 02 21:38:41 crc kubenswrapper[4789]: I0202 21:38:40.999472 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-l6jmk" event={"ID":"cd6df5c8-3899-431d-b9cd-9a9f022160d7","Type":"ContainerDied","Data":"bf8bfb384c1266daf0508adff005c3a21ceb0dbea842b6f707f271a6f9ddf49f"} Feb 02 21:38:41 crc kubenswrapper[4789]: I0202 21:38:41.001517 
4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-qvp7v" event={"ID":"7d80bd6c-a1d7-4ee6-bc8c-8e534d7bfa1a","Type":"ContainerStarted","Data":"6726f0ab9af33468e45fe77530ced8a0b271c97eb7762c9a2fb78e8d8c2d78d1"} Feb 02 21:38:41 crc kubenswrapper[4789]: I0202 21:38:41.001548 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-qvp7v" event={"ID":"7d80bd6c-a1d7-4ee6-bc8c-8e534d7bfa1a","Type":"ContainerStarted","Data":"1383668cc60f70ba3bfa0f2d62f9ae95ccb8647e59fb2ec929730a79aa4c0bc2"} Feb 02 21:38:41 crc kubenswrapper[4789]: I0202 21:38:41.004663 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-q8pr6" event={"ID":"a55a234d-1af7-4e73-8f93-b614162be0c3","Type":"ContainerStarted","Data":"2d756f003e9b75af5444a466f5fd1cebbbb53bbbc68e11562d8bf580bc3c9dae"} Feb 02 21:38:41 crc kubenswrapper[4789]: I0202 21:38:41.019980 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4dc6-account-create-update-bf2l2" event={"ID":"27d741b5-10a4-4acb-b4b8-cf06f35a66f2","Type":"ContainerStarted","Data":"99815c9ab4c4a392d428b682bb8183bd80d7ee78da0ca0fe649f88834caeb4a5"} Feb 02 21:38:41 crc kubenswrapper[4789]: I0202 21:38:41.020216 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4dc6-account-create-update-bf2l2" event={"ID":"27d741b5-10a4-4acb-b4b8-cf06f35a66f2","Type":"ContainerStarted","Data":"1d397be40b25691f4b16520eaa186a40c073250b7d4ede1cf10108f9056ffaca"} Feb 02 21:38:41 crc kubenswrapper[4789]: I0202 21:38:41.022051 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-qvp7v" podStartSLOduration=3.022034671 podStartE2EDuration="3.022034671s" podCreationTimestamp="2026-02-02 21:38:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:38:41.015569669 +0000 UTC m=+1141.310594688" watchObservedRunningTime="2026-02-02 21:38:41.022034671 +0000 UTC m=+1141.317059690" Feb 02 21:38:41 crc kubenswrapper[4789]: I0202 21:38:41.024117 4789 generic.go:334] "Generic (PLEG): container finished" podID="743bffd7-f479-4b98-8cd6-9714dfcfeab1" containerID="ac99a70faf619168f2f6dbb6cfc2aa89482ec9b6ff1eab45b5685ea95ef9ca8e" exitCode=0 Feb 02 21:38:41 crc kubenswrapper[4789]: I0202 21:38:41.024320 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wrn6h" event={"ID":"743bffd7-f479-4b98-8cd6-9714dfcfeab1","Type":"ContainerDied","Data":"ac99a70faf619168f2f6dbb6cfc2aa89482ec9b6ff1eab45b5685ea95ef9ca8e"} Feb 02 21:38:41 crc kubenswrapper[4789]: I0202 21:38:41.024553 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wrn6h" event={"ID":"743bffd7-f479-4b98-8cd6-9714dfcfeab1","Type":"ContainerStarted","Data":"16eddef71fed015c81ef78465afaf8cb734114f54b6e33c2c900bb5324302a37"} Feb 02 21:38:41 crc kubenswrapper[4789]: I0202 21:38:41.025432 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-32d6-account-create-update-5tgfk" event={"ID":"2c632452-0823-4b9b-9eaf-b8e9da3084c9","Type":"ContainerStarted","Data":"f560b261973fe579074cd34bbc1721935671051919e539fb9ad4d3adc8d8a597"} Feb 02 21:38:41 crc kubenswrapper[4789]: I0202 21:38:41.025523 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-32d6-account-create-update-5tgfk" 
event={"ID":"2c632452-0823-4b9b-9eaf-b8e9da3084c9","Type":"ContainerStarted","Data":"9ab1b3e46052c1852da033bfa9dcc2ce64dc03a6a9906ebe3347c6b7e9e5ba0d"} Feb 02 21:38:41 crc kubenswrapper[4789]: I0202 21:38:41.038066 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-q8pr6" podStartSLOduration=2.420742846 podStartE2EDuration="6.038050573s" podCreationTimestamp="2026-02-02 21:38:35 +0000 UTC" firstStartedPulling="2026-02-02 21:38:36.236751166 +0000 UTC m=+1136.531776185" lastFinishedPulling="2026-02-02 21:38:39.854058893 +0000 UTC m=+1140.149083912" observedRunningTime="2026-02-02 21:38:41.035128351 +0000 UTC m=+1141.330153380" watchObservedRunningTime="2026-02-02 21:38:41.038050573 +0000 UTC m=+1141.333075592" Feb 02 21:38:41 crc kubenswrapper[4789]: I0202 21:38:41.071189 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-4dc6-account-create-update-bf2l2" podStartSLOduration=3.071168398 podStartE2EDuration="3.071168398s" podCreationTimestamp="2026-02-02 21:38:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:38:41.066339752 +0000 UTC m=+1141.361364781" watchObservedRunningTime="2026-02-02 21:38:41.071168398 +0000 UTC m=+1141.366193417" Feb 02 21:38:41 crc kubenswrapper[4789]: I0202 21:38:41.445899 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-l6jmk" Feb 02 21:38:41 crc kubenswrapper[4789]: I0202 21:38:41.464788 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-32d6-account-create-update-5tgfk" podStartSLOduration=3.464768649 podStartE2EDuration="3.464768649s" podCreationTimestamp="2026-02-02 21:38:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:38:41.094832016 +0000 UTC m=+1141.389857055" watchObservedRunningTime="2026-02-02 21:38:41.464768649 +0000 UTC m=+1141.759793668" Feb 02 21:38:41 crc kubenswrapper[4789]: I0202 21:38:41.500283 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd6df5c8-3899-431d-b9cd-9a9f022160d7-dns-svc\") pod \"cd6df5c8-3899-431d-b9cd-9a9f022160d7\" (UID: \"cd6df5c8-3899-431d-b9cd-9a9f022160d7\") " Feb 02 21:38:41 crc kubenswrapper[4789]: I0202 21:38:41.501261 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhztm\" (UniqueName: \"kubernetes.io/projected/cd6df5c8-3899-431d-b9cd-9a9f022160d7-kube-api-access-dhztm\") pod \"cd6df5c8-3899-431d-b9cd-9a9f022160d7\" (UID: \"cd6df5c8-3899-431d-b9cd-9a9f022160d7\") " Feb 02 21:38:41 crc kubenswrapper[4789]: I0202 21:38:41.501387 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd6df5c8-3899-431d-b9cd-9a9f022160d7-config\") pod \"cd6df5c8-3899-431d-b9cd-9a9f022160d7\" (UID: \"cd6df5c8-3899-431d-b9cd-9a9f022160d7\") " Feb 02 21:38:41 crc kubenswrapper[4789]: I0202 21:38:41.501634 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd6df5c8-3899-431d-b9cd-9a9f022160d7-ovsdbserver-nb\") pod \"cd6df5c8-3899-431d-b9cd-9a9f022160d7\" (UID: \"cd6df5c8-3899-431d-b9cd-9a9f022160d7\") " Feb 02 21:38:41 crc 
kubenswrapper[4789]: I0202 21:38:41.522862 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd6df5c8-3899-431d-b9cd-9a9f022160d7-kube-api-access-dhztm" (OuterVolumeSpecName: "kube-api-access-dhztm") pod "cd6df5c8-3899-431d-b9cd-9a9f022160d7" (UID: "cd6df5c8-3899-431d-b9cd-9a9f022160d7"). InnerVolumeSpecName "kube-api-access-dhztm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:38:41 crc kubenswrapper[4789]: I0202 21:38:41.541799 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd6df5c8-3899-431d-b9cd-9a9f022160d7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cd6df5c8-3899-431d-b9cd-9a9f022160d7" (UID: "cd6df5c8-3899-431d-b9cd-9a9f022160d7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:38:41 crc kubenswrapper[4789]: I0202 21:38:41.551849 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd6df5c8-3899-431d-b9cd-9a9f022160d7-config" (OuterVolumeSpecName: "config") pod "cd6df5c8-3899-431d-b9cd-9a9f022160d7" (UID: "cd6df5c8-3899-431d-b9cd-9a9f022160d7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:38:41 crc kubenswrapper[4789]: I0202 21:38:41.571028 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd6df5c8-3899-431d-b9cd-9a9f022160d7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cd6df5c8-3899-431d-b9cd-9a9f022160d7" (UID: "cd6df5c8-3899-431d-b9cd-9a9f022160d7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:38:41 crc kubenswrapper[4789]: I0202 21:38:41.604526 4789 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd6df5c8-3899-431d-b9cd-9a9f022160d7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:41 crc kubenswrapper[4789]: I0202 21:38:41.604573 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhztm\" (UniqueName: \"kubernetes.io/projected/cd6df5c8-3899-431d-b9cd-9a9f022160d7-kube-api-access-dhztm\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:41 crc kubenswrapper[4789]: I0202 21:38:41.604600 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd6df5c8-3899-431d-b9cd-9a9f022160d7-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:41 crc kubenswrapper[4789]: I0202 21:38:41.604612 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd6df5c8-3899-431d-b9cd-9a9f022160d7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:42 crc kubenswrapper[4789]: I0202 21:38:42.033894 4789 generic.go:334] "Generic (PLEG): container finished" podID="7d80bd6c-a1d7-4ee6-bc8c-8e534d7bfa1a" containerID="6726f0ab9af33468e45fe77530ced8a0b271c97eb7762c9a2fb78e8d8c2d78d1" exitCode=0 Feb 02 21:38:42 crc kubenswrapper[4789]: I0202 21:38:42.033996 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-qvp7v" event={"ID":"7d80bd6c-a1d7-4ee6-bc8c-8e534d7bfa1a","Type":"ContainerDied","Data":"6726f0ab9af33468e45fe77530ced8a0b271c97eb7762c9a2fb78e8d8c2d78d1"} Feb 02 21:38:42 crc kubenswrapper[4789]: I0202 21:38:42.036889 4789 generic.go:334] "Generic (PLEG): container finished" podID="27d741b5-10a4-4acb-b4b8-cf06f35a66f2" 
containerID="99815c9ab4c4a392d428b682bb8183bd80d7ee78da0ca0fe649f88834caeb4a5" exitCode=0 Feb 02 21:38:42 crc kubenswrapper[4789]: I0202 21:38:42.036967 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4dc6-account-create-update-bf2l2" event={"ID":"27d741b5-10a4-4acb-b4b8-cf06f35a66f2","Type":"ContainerDied","Data":"99815c9ab4c4a392d428b682bb8183bd80d7ee78da0ca0fe649f88834caeb4a5"} Feb 02 21:38:42 crc kubenswrapper[4789]: I0202 21:38:42.039845 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-l6jmk" event={"ID":"cd6df5c8-3899-431d-b9cd-9a9f022160d7","Type":"ContainerDied","Data":"5b8a0a648dfe5ab0b3be7b0a5e314206ca54e9e33f9531f55c75534ba908378b"} Feb 02 21:38:42 crc kubenswrapper[4789]: I0202 21:38:42.039988 4789 scope.go:117] "RemoveContainer" containerID="bf8bfb384c1266daf0508adff005c3a21ceb0dbea842b6f707f271a6f9ddf49f" Feb 02 21:38:42 crc kubenswrapper[4789]: I0202 21:38:42.040360 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-l6jmk" Feb 02 21:38:42 crc kubenswrapper[4789]: I0202 21:38:42.051342 4789 generic.go:334] "Generic (PLEG): container finished" podID="2c632452-0823-4b9b-9eaf-b8e9da3084c9" containerID="f560b261973fe579074cd34bbc1721935671051919e539fb9ad4d3adc8d8a597" exitCode=0 Feb 02 21:38:42 crc kubenswrapper[4789]: I0202 21:38:42.053738 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-32d6-account-create-update-5tgfk" event={"ID":"2c632452-0823-4b9b-9eaf-b8e9da3084c9","Type":"ContainerDied","Data":"f560b261973fe579074cd34bbc1721935671051919e539fb9ad4d3adc8d8a597"} Feb 02 21:38:42 crc kubenswrapper[4789]: I0202 21:38:42.115508 4789 scope.go:117] "RemoveContainer" containerID="773a012e1fafa159bed9a71915769368717ee71dbe4a3b3579a0e01fdac72586" Feb 02 21:38:42 crc kubenswrapper[4789]: I0202 21:38:42.149112 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-6xcwz"] Feb 02 21:38:42 crc kubenswrapper[4789]: I0202 21:38:42.157533 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-l6jmk"] Feb 02 21:38:42 crc kubenswrapper[4789]: I0202 21:38:42.164819 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-l6jmk"] Feb 02 21:38:42 crc kubenswrapper[4789]: I0202 21:38:42.170443 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-6xcwz"] Feb 02 21:38:42 crc kubenswrapper[4789]: I0202 21:38:42.393672 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-wrn6h" Feb 02 21:38:42 crc kubenswrapper[4789]: I0202 21:38:42.419176 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ggcv\" (UniqueName: \"kubernetes.io/projected/743bffd7-f479-4b98-8cd6-9714dfcfeab1-kube-api-access-6ggcv\") pod \"743bffd7-f479-4b98-8cd6-9714dfcfeab1\" (UID: \"743bffd7-f479-4b98-8cd6-9714dfcfeab1\") " Feb 02 21:38:42 crc kubenswrapper[4789]: I0202 21:38:42.419264 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/743bffd7-f479-4b98-8cd6-9714dfcfeab1-operator-scripts\") pod \"743bffd7-f479-4b98-8cd6-9714dfcfeab1\" (UID: \"743bffd7-f479-4b98-8cd6-9714dfcfeab1\") " Feb 02 21:38:42 crc kubenswrapper[4789]: I0202 21:38:42.420176 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/743bffd7-f479-4b98-8cd6-9714dfcfeab1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "743bffd7-f479-4b98-8cd6-9714dfcfeab1" (UID: "743bffd7-f479-4b98-8cd6-9714dfcfeab1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:38:42 crc kubenswrapper[4789]: I0202 21:38:42.426233 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/743bffd7-f479-4b98-8cd6-9714dfcfeab1-kube-api-access-6ggcv" (OuterVolumeSpecName: "kube-api-access-6ggcv") pod "743bffd7-f479-4b98-8cd6-9714dfcfeab1" (UID: "743bffd7-f479-4b98-8cd6-9714dfcfeab1"). InnerVolumeSpecName "kube-api-access-6ggcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:38:42 crc kubenswrapper[4789]: I0202 21:38:42.430057 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96c8644c-3d61-4da2-91ef-d668da7e01b9" path="/var/lib/kubelet/pods/96c8644c-3d61-4da2-91ef-d668da7e01b9/volumes" Feb 02 21:38:42 crc kubenswrapper[4789]: I0202 21:38:42.430655 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd6df5c8-3899-431d-b9cd-9a9f022160d7" path="/var/lib/kubelet/pods/cd6df5c8-3899-431d-b9cd-9a9f022160d7/volumes" Feb 02 21:38:42 crc kubenswrapper[4789]: I0202 21:38:42.522570 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ggcv\" (UniqueName: \"kubernetes.io/projected/743bffd7-f479-4b98-8cd6-9714dfcfeab1-kube-api-access-6ggcv\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:42 crc kubenswrapper[4789]: I0202 21:38:42.522647 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/743bffd7-f479-4b98-8cd6-9714dfcfeab1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:43 crc kubenswrapper[4789]: I0202 21:38:43.078258 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-wrn6h" Feb 02 21:38:43 crc kubenswrapper[4789]: I0202 21:38:43.078292 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wrn6h" event={"ID":"743bffd7-f479-4b98-8cd6-9714dfcfeab1","Type":"ContainerDied","Data":"16eddef71fed015c81ef78465afaf8cb734114f54b6e33c2c900bb5324302a37"} Feb 02 21:38:43 crc kubenswrapper[4789]: I0202 21:38:43.078366 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16eddef71fed015c81ef78465afaf8cb734114f54b6e33c2c900bb5324302a37" Feb 02 21:38:43 crc kubenswrapper[4789]: I0202 21:38:43.485943 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4dc6-account-create-update-bf2l2" Feb 02 21:38:43 crc kubenswrapper[4789]: I0202 21:38:43.544668 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27d741b5-10a4-4acb-b4b8-cf06f35a66f2-operator-scripts\") pod \"27d741b5-10a4-4acb-b4b8-cf06f35a66f2\" (UID: \"27d741b5-10a4-4acb-b4b8-cf06f35a66f2\") " Feb 02 21:38:43 crc kubenswrapper[4789]: I0202 21:38:43.544712 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-626jt\" (UniqueName: \"kubernetes.io/projected/27d741b5-10a4-4acb-b4b8-cf06f35a66f2-kube-api-access-626jt\") pod \"27d741b5-10a4-4acb-b4b8-cf06f35a66f2\" (UID: \"27d741b5-10a4-4acb-b4b8-cf06f35a66f2\") " Feb 02 21:38:43 crc kubenswrapper[4789]: I0202 21:38:43.546360 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27d741b5-10a4-4acb-b4b8-cf06f35a66f2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "27d741b5-10a4-4acb-b4b8-cf06f35a66f2" (UID: "27d741b5-10a4-4acb-b4b8-cf06f35a66f2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:38:43 crc kubenswrapper[4789]: I0202 21:38:43.550934 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27d741b5-10a4-4acb-b4b8-cf06f35a66f2-kube-api-access-626jt" (OuterVolumeSpecName: "kube-api-access-626jt") pod "27d741b5-10a4-4acb-b4b8-cf06f35a66f2" (UID: "27d741b5-10a4-4acb-b4b8-cf06f35a66f2"). InnerVolumeSpecName "kube-api-access-626jt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:38:43 crc kubenswrapper[4789]: I0202 21:38:43.599394 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-qvp7v" Feb 02 21:38:43 crc kubenswrapper[4789]: I0202 21:38:43.605013 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-32d6-account-create-update-5tgfk" Feb 02 21:38:43 crc kubenswrapper[4789]: I0202 21:38:43.646536 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d80bd6c-a1d7-4ee6-bc8c-8e534d7bfa1a-operator-scripts\") pod \"7d80bd6c-a1d7-4ee6-bc8c-8e534d7bfa1a\" (UID: \"7d80bd6c-a1d7-4ee6-bc8c-8e534d7bfa1a\") " Feb 02 21:38:43 crc kubenswrapper[4789]: I0202 21:38:43.646634 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-872j2\" (UniqueName: \"kubernetes.io/projected/7d80bd6c-a1d7-4ee6-bc8c-8e534d7bfa1a-kube-api-access-872j2\") pod \"7d80bd6c-a1d7-4ee6-bc8c-8e534d7bfa1a\" (UID: \"7d80bd6c-a1d7-4ee6-bc8c-8e534d7bfa1a\") " Feb 02 21:38:43 crc kubenswrapper[4789]: I0202 21:38:43.646729 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c632452-0823-4b9b-9eaf-b8e9da3084c9-operator-scripts\") pod \"2c632452-0823-4b9b-9eaf-b8e9da3084c9\" (UID: \"2c632452-0823-4b9b-9eaf-b8e9da3084c9\") " Feb 02 21:38:43 crc kubenswrapper[4789]: I0202 21:38:43.646839 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6c2t\" (UniqueName: \"kubernetes.io/projected/2c632452-0823-4b9b-9eaf-b8e9da3084c9-kube-api-access-h6c2t\") pod \"2c632452-0823-4b9b-9eaf-b8e9da3084c9\" (UID: \"2c632452-0823-4b9b-9eaf-b8e9da3084c9\") " Feb 02 21:38:43 crc kubenswrapper[4789]: I0202 21:38:43.647282 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27d741b5-10a4-4acb-b4b8-cf06f35a66f2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:43 crc kubenswrapper[4789]: I0202 21:38:43.647312 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-626jt\" (UniqueName: \"kubernetes.io/projected/27d741b5-10a4-4acb-b4b8-cf06f35a66f2-kube-api-access-626jt\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:43 crc kubenswrapper[4789]: I0202 21:38:43.648167 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c632452-0823-4b9b-9eaf-b8e9da3084c9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2c632452-0823-4b9b-9eaf-b8e9da3084c9" (UID: "2c632452-0823-4b9b-9eaf-b8e9da3084c9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:38:43 crc kubenswrapper[4789]: I0202 21:38:43.648333 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d80bd6c-a1d7-4ee6-bc8c-8e534d7bfa1a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7d80bd6c-a1d7-4ee6-bc8c-8e534d7bfa1a" (UID: "7d80bd6c-a1d7-4ee6-bc8c-8e534d7bfa1a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:38:43 crc kubenswrapper[4789]: I0202 21:38:43.650557 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d80bd6c-a1d7-4ee6-bc8c-8e534d7bfa1a-kube-api-access-872j2" (OuterVolumeSpecName: "kube-api-access-872j2") pod "7d80bd6c-a1d7-4ee6-bc8c-8e534d7bfa1a" (UID: "7d80bd6c-a1d7-4ee6-bc8c-8e534d7bfa1a"). InnerVolumeSpecName "kube-api-access-872j2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:38:43 crc kubenswrapper[4789]: I0202 21:38:43.651982 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c632452-0823-4b9b-9eaf-b8e9da3084c9-kube-api-access-h6c2t" (OuterVolumeSpecName: "kube-api-access-h6c2t") pod "2c632452-0823-4b9b-9eaf-b8e9da3084c9" (UID: "2c632452-0823-4b9b-9eaf-b8e9da3084c9"). InnerVolumeSpecName "kube-api-access-h6c2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:38:43 crc kubenswrapper[4789]: I0202 21:38:43.748907 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d80bd6c-a1d7-4ee6-bc8c-8e534d7bfa1a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:43 crc kubenswrapper[4789]: I0202 21:38:43.748954 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-872j2\" (UniqueName: \"kubernetes.io/projected/7d80bd6c-a1d7-4ee6-bc8c-8e534d7bfa1a-kube-api-access-872j2\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:43 crc kubenswrapper[4789]: I0202 21:38:43.748976 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c632452-0823-4b9b-9eaf-b8e9da3084c9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:43 crc kubenswrapper[4789]: I0202 21:38:43.748995 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6c2t\" (UniqueName: \"kubernetes.io/projected/2c632452-0823-4b9b-9eaf-b8e9da3084c9-kube-api-access-h6c2t\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:43 crc kubenswrapper[4789]: I0202 21:38:43.948738 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.124140 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4dc6-account-create-update-bf2l2" Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.125249 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4dc6-account-create-update-bf2l2" event={"ID":"27d741b5-10a4-4acb-b4b8-cf06f35a66f2","Type":"ContainerDied","Data":"1d397be40b25691f4b16520eaa186a40c073250b7d4ede1cf10108f9056ffaca"} Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.125304 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d397be40b25691f4b16520eaa186a40c073250b7d4ede1cf10108f9056ffaca" Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.128641 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-32d6-account-create-update-5tgfk" event={"ID":"2c632452-0823-4b9b-9eaf-b8e9da3084c9","Type":"ContainerDied","Data":"9ab1b3e46052c1852da033bfa9dcc2ce64dc03a6a9906ebe3347c6b7e9e5ba0d"} Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.128701 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ab1b3e46052c1852da033bfa9dcc2ce64dc03a6a9906ebe3347c6b7e9e5ba0d" Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.128791 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-32d6-account-create-update-5tgfk" Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.139281 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-qvp7v" event={"ID":"7d80bd6c-a1d7-4ee6-bc8c-8e534d7bfa1a","Type":"ContainerDied","Data":"1383668cc60f70ba3bfa0f2d62f9ae95ccb8647e59fb2ec929730a79aa4c0bc2"} Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.139321 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1383668cc60f70ba3bfa0f2d62f9ae95ccb8647e59fb2ec929730a79aa4c0bc2" Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.139379 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-qvp7v" Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.369138 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-sxqwc"] Feb 02 21:38:44 crc kubenswrapper[4789]: E0202 21:38:44.369433 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd6df5c8-3899-431d-b9cd-9a9f022160d7" containerName="dnsmasq-dns" Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.369447 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd6df5c8-3899-431d-b9cd-9a9f022160d7" containerName="dnsmasq-dns" Feb 02 21:38:44 crc kubenswrapper[4789]: E0202 21:38:44.369463 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce78c9ad-cbd4-4761-8485-af675e18d85a" containerName="mariadb-database-create" Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.369470 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce78c9ad-cbd4-4761-8485-af675e18d85a" containerName="mariadb-database-create" Feb 02 21:38:44 crc kubenswrapper[4789]: E0202 21:38:44.369478 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c632452-0823-4b9b-9eaf-b8e9da3084c9" containerName="mariadb-account-create-update" Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.369483 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c632452-0823-4b9b-9eaf-b8e9da3084c9" containerName="mariadb-account-create-update" Feb 02 21:38:44 crc kubenswrapper[4789]: E0202 21:38:44.369493 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd6df5c8-3899-431d-b9cd-9a9f022160d7" containerName="init" Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.369499 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd6df5c8-3899-431d-b9cd-9a9f022160d7" containerName="init" Feb 02 21:38:44 crc kubenswrapper[4789]: E0202 21:38:44.369514 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96c8644c-3d61-4da2-91ef-d668da7e01b9" containerName="mariadb-account-create-update" Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.369520 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="96c8644c-3d61-4da2-91ef-d668da7e01b9" containerName="mariadb-account-create-update" Feb 02 21:38:44 crc kubenswrapper[4789]: E0202 21:38:44.369529 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="743bffd7-f479-4b98-8cd6-9714dfcfeab1" containerName="mariadb-database-create" Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.369534 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="743bffd7-f479-4b98-8cd6-9714dfcfeab1" containerName="mariadb-database-create" Feb 02 21:38:44 crc kubenswrapper[4789]: E0202 21:38:44.369540 4789 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="27d741b5-10a4-4acb-b4b8-cf06f35a66f2" containerName="mariadb-account-create-update" Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.369546 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="27d741b5-10a4-4acb-b4b8-cf06f35a66f2" containerName="mariadb-account-create-update" Feb 02 21:38:44 crc kubenswrapper[4789]: E0202 21:38:44.369556 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d80bd6c-a1d7-4ee6-bc8c-8e534d7bfa1a" containerName="mariadb-database-create" Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.369562 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d80bd6c-a1d7-4ee6-bc8c-8e534d7bfa1a" containerName="mariadb-database-create" Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.369740 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="96c8644c-3d61-4da2-91ef-d668da7e01b9" containerName="mariadb-account-create-update" Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.369756 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce78c9ad-cbd4-4761-8485-af675e18d85a" containerName="mariadb-database-create" Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.369768 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d80bd6c-a1d7-4ee6-bc8c-8e534d7bfa1a" containerName="mariadb-database-create" Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.369781 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c632452-0823-4b9b-9eaf-b8e9da3084c9" containerName="mariadb-account-create-update" Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.369794 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="743bffd7-f479-4b98-8cd6-9714dfcfeab1" containerName="mariadb-database-create" Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.369807 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="27d741b5-10a4-4acb-b4b8-cf06f35a66f2" containerName="mariadb-account-create-update" Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.369820 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd6df5c8-3899-431d-b9cd-9a9f022160d7" containerName="dnsmasq-dns" Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.370335 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-sxqwc" Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.374363 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.374798 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-z2f8j" Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.381257 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-sxqwc"] Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.562623 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5c4da6b-2b71-4018-90ce-d569b9f03cfd-config-data\") pod \"glance-db-sync-sxqwc\" (UID: \"c5c4da6b-2b71-4018-90ce-d569b9f03cfd\") " pod="openstack/glance-db-sync-sxqwc" Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.562762 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c5c4da6b-2b71-4018-90ce-d569b9f03cfd-db-sync-config-data\") pod \"glance-db-sync-sxqwc\" (UID: \"c5c4da6b-2b71-4018-90ce-d569b9f03cfd\") " pod="openstack/glance-db-sync-sxqwc" Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.563403 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdn5j\" (UniqueName: \"kubernetes.io/projected/c5c4da6b-2b71-4018-90ce-d569b9f03cfd-kube-api-access-hdn5j\") pod \"glance-db-sync-sxqwc\" (UID: \"c5c4da6b-2b71-4018-90ce-d569b9f03cfd\") " pod="openstack/glance-db-sync-sxqwc" Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.563496 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5c4da6b-2b71-4018-90ce-d569b9f03cfd-combined-ca-bundle\") pod \"glance-db-sync-sxqwc\" (UID: \"c5c4da6b-2b71-4018-90ce-d569b9f03cfd\") " pod="openstack/glance-db-sync-sxqwc" Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.666276 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5c4da6b-2b71-4018-90ce-d569b9f03cfd-combined-ca-bundle\") pod \"glance-db-sync-sxqwc\" (UID: \"c5c4da6b-2b71-4018-90ce-d569b9f03cfd\") " pod="openstack/glance-db-sync-sxqwc" Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.666615 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5c4da6b-2b71-4018-90ce-d569b9f03cfd-config-data\") pod \"glance-db-sync-sxqwc\" (UID: \"c5c4da6b-2b71-4018-90ce-d569b9f03cfd\") " pod="openstack/glance-db-sync-sxqwc" Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.666714 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c5c4da6b-2b71-4018-90ce-d569b9f03cfd-db-sync-config-data\") pod \"glance-db-sync-sxqwc\" (UID: \"c5c4da6b-2b71-4018-90ce-d569b9f03cfd\") " pod="openstack/glance-db-sync-sxqwc" Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.666825 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdn5j\" (UniqueName: \"kubernetes.io/projected/c5c4da6b-2b71-4018-90ce-d569b9f03cfd-kube-api-access-hdn5j\") pod 
\"glance-db-sync-sxqwc\" (UID: \"c5c4da6b-2b71-4018-90ce-d569b9f03cfd\") " pod="openstack/glance-db-sync-sxqwc" Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.670609 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c5c4da6b-2b71-4018-90ce-d569b9f03cfd-db-sync-config-data\") pod \"glance-db-sync-sxqwc\" (UID: \"c5c4da6b-2b71-4018-90ce-d569b9f03cfd\") " pod="openstack/glance-db-sync-sxqwc" Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.670933 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5c4da6b-2b71-4018-90ce-d569b9f03cfd-combined-ca-bundle\") pod \"glance-db-sync-sxqwc\" (UID: \"c5c4da6b-2b71-4018-90ce-d569b9f03cfd\") " pod="openstack/glance-db-sync-sxqwc" Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.680182 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5c4da6b-2b71-4018-90ce-d569b9f03cfd-config-data\") pod \"glance-db-sync-sxqwc\" (UID: \"c5c4da6b-2b71-4018-90ce-d569b9f03cfd\") " pod="openstack/glance-db-sync-sxqwc" Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.683006 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdn5j\" (UniqueName: \"kubernetes.io/projected/c5c4da6b-2b71-4018-90ce-d569b9f03cfd-kube-api-access-hdn5j\") pod \"glance-db-sync-sxqwc\" (UID: \"c5c4da6b-2b71-4018-90ce-d569b9f03cfd\") " pod="openstack/glance-db-sync-sxqwc" Feb 02 21:38:44 crc kubenswrapper[4789]: I0202 21:38:44.703487 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-sxqwc" Feb 02 21:38:45 crc kubenswrapper[4789]: I0202 21:38:45.237249 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-sxqwc"] Feb 02 21:38:45 crc kubenswrapper[4789]: W0202 21:38:45.241915 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5c4da6b_2b71_4018_90ce_d569b9f03cfd.slice/crio-f4574fbf874b92f32e4adfad2f3c4e1fde8ad17931192bd3d2ab6305d5f12abe WatchSource:0}: Error finding container f4574fbf874b92f32e4adfad2f3c4e1fde8ad17931192bd3d2ab6305d5f12abe: Status 404 returned error can't find the container with id f4574fbf874b92f32e4adfad2f3c4e1fde8ad17931192bd3d2ab6305d5f12abe Feb 02 21:38:45 crc kubenswrapper[4789]: I0202 21:38:45.691323 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-9c62m"] Feb 02 21:38:45 crc kubenswrapper[4789]: I0202 21:38:45.692949 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9c62m" Feb 02 21:38:45 crc kubenswrapper[4789]: I0202 21:38:45.695436 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 02 21:38:45 crc kubenswrapper[4789]: I0202 21:38:45.704211 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9c62m"] Feb 02 21:38:45 crc kubenswrapper[4789]: I0202 21:38:45.892945 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c998019-f43d-4699-ba3b-f7b1bfec35d0-operator-scripts\") pod \"root-account-create-update-9c62m\" (UID: \"4c998019-f43d-4699-ba3b-f7b1bfec35d0\") " pod="openstack/root-account-create-update-9c62m" Feb 02 21:38:45 crc kubenswrapper[4789]: I0202 21:38:45.893285 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wq54\" (UniqueName: \"kubernetes.io/projected/4c998019-f43d-4699-ba3b-f7b1bfec35d0-kube-api-access-8wq54\") pod \"root-account-create-update-9c62m\" (UID: \"4c998019-f43d-4699-ba3b-f7b1bfec35d0\") " pod="openstack/root-account-create-update-9c62m" Feb 02 21:38:45 crc kubenswrapper[4789]: I0202 21:38:45.994883 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c998019-f43d-4699-ba3b-f7b1bfec35d0-operator-scripts\") pod \"root-account-create-update-9c62m\" (UID: \"4c998019-f43d-4699-ba3b-f7b1bfec35d0\") " pod="openstack/root-account-create-update-9c62m" Feb 02 21:38:45 crc kubenswrapper[4789]: I0202 21:38:45.994953 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wq54\" (UniqueName: \"kubernetes.io/projected/4c998019-f43d-4699-ba3b-f7b1bfec35d0-kube-api-access-8wq54\") pod \"root-account-create-update-9c62m\" (UID: \"4c998019-f43d-4699-ba3b-f7b1bfec35d0\") " pod="openstack/root-account-create-update-9c62m" Feb 02 21:38:45 crc kubenswrapper[4789]: I0202 21:38:45.997160 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c998019-f43d-4699-ba3b-f7b1bfec35d0-operator-scripts\") pod \"root-account-create-update-9c62m\" (UID: \"4c998019-f43d-4699-ba3b-f7b1bfec35d0\") " pod="openstack/root-account-create-update-9c62m" Feb 02 21:38:46 crc kubenswrapper[4789]: I0202 21:38:46.028622 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wq54\" (UniqueName: \"kubernetes.io/projected/4c998019-f43d-4699-ba3b-f7b1bfec35d0-kube-api-access-8wq54\") pod \"root-account-create-update-9c62m\" (UID: \"4c998019-f43d-4699-ba3b-f7b1bfec35d0\") " pod="openstack/root-account-create-update-9c62m" Feb 02 21:38:46 crc kubenswrapper[4789]: I0202 21:38:46.034493 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9c62m" Feb 02 21:38:46 crc kubenswrapper[4789]: I0202 21:38:46.165456 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-sxqwc" event={"ID":"c5c4da6b-2b71-4018-90ce-d569b9f03cfd","Type":"ContainerStarted","Data":"f4574fbf874b92f32e4adfad2f3c4e1fde8ad17931192bd3d2ab6305d5f12abe"} Feb 02 21:38:46 crc kubenswrapper[4789]: I0202 21:38:46.498561 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9c62m"] Feb 02 21:38:47 crc kubenswrapper[4789]: I0202 21:38:47.180031 4789 generic.go:334] "Generic (PLEG): container finished" podID="4c998019-f43d-4699-ba3b-f7b1bfec35d0" containerID="9a5ca93c4582c6514e21c95b5e463e017afe67a0e00064023f85657aa0366a24" exitCode=0 Feb 02 21:38:47 crc kubenswrapper[4789]: I0202 21:38:47.180273 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9c62m" event={"ID":"4c998019-f43d-4699-ba3b-f7b1bfec35d0","Type":"ContainerDied","Data":"9a5ca93c4582c6514e21c95b5e463e017afe67a0e00064023f85657aa0366a24"} Feb 02 21:38:47 crc kubenswrapper[4789]: I0202 21:38:47.180344 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9c62m" event={"ID":"4c998019-f43d-4699-ba3b-f7b1bfec35d0","Type":"ContainerStarted","Data":"755b24cbffbd798bd207c2f1bf68c6bbf3fc6ba50123e43ed7ab7af483f207b0"} Feb 02 21:38:47 crc kubenswrapper[4789]: I0202 21:38:47.182293 4789 generic.go:334] "Generic (PLEG): container finished" podID="a55a234d-1af7-4e73-8f93-b614162be0c3" containerID="2d756f003e9b75af5444a466f5fd1cebbbb53bbbc68e11562d8bf580bc3c9dae" exitCode=0 Feb 02 21:38:47 crc kubenswrapper[4789]: I0202 21:38:47.182348 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-q8pr6" event={"ID":"a55a234d-1af7-4e73-8f93-b614162be0c3","Type":"ContainerDied","Data":"2d756f003e9b75af5444a466f5fd1cebbbb53bbbc68e11562d8bf580bc3c9dae"} Feb 02 21:38:47 crc kubenswrapper[4789]: I0202 21:38:47.211400 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/87f6bccb-d5fc-4868-aca2-734d16898805-etc-swift\") pod \"swift-storage-0\" (UID: \"87f6bccb-d5fc-4868-aca2-734d16898805\") " pod="openstack/swift-storage-0" Feb 02 21:38:47 crc kubenswrapper[4789]: I0202 21:38:47.219521 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/87f6bccb-d5fc-4868-aca2-734d16898805-etc-swift\") pod \"swift-storage-0\" (UID: \"87f6bccb-d5fc-4868-aca2-734d16898805\") " pod="openstack/swift-storage-0" Feb 02 21:38:47 crc kubenswrapper[4789]: I0202 21:38:47.320794 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 02 21:38:47 crc kubenswrapper[4789]: I0202 21:38:47.857747 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 02 21:38:47 crc kubenswrapper[4789]: W0202 21:38:47.858342 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87f6bccb_d5fc_4868_aca2_734d16898805.slice/crio-fa3ee7e4fc1174542a4372cd96b69826c253a37869acc4458144491c01712e4a WatchSource:0}: Error finding container fa3ee7e4fc1174542a4372cd96b69826c253a37869acc4458144491c01712e4a: Status 404 returned error can't find the container with id fa3ee7e4fc1174542a4372cd96b69826c253a37869acc4458144491c01712e4a Feb 02 21:38:48 crc kubenswrapper[4789]: I0202 21:38:48.196621 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"87f6bccb-d5fc-4868-aca2-734d16898805","Type":"ContainerStarted","Data":"fa3ee7e4fc1174542a4372cd96b69826c253a37869acc4458144491c01712e4a"} Feb 02 21:38:48 crc kubenswrapper[4789]: I0202 21:38:48.234746 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-gjls4" podUID="c571c3a8-8470-4076-adde-89416f071937" containerName="ovn-controller" probeResult="failure" output=< Feb 02 21:38:48 crc kubenswrapper[4789]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 02 21:38:48 crc kubenswrapper[4789]: > Feb 02 21:38:48 crc kubenswrapper[4789]: I0202 21:38:48.697719 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-q8pr6" Feb 02 21:38:48 crc kubenswrapper[4789]: I0202 21:38:48.708255 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9c62m" Feb 02 21:38:48 crc kubenswrapper[4789]: I0202 21:38:48.875650 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c998019-f43d-4699-ba3b-f7b1bfec35d0-operator-scripts\") pod \"4c998019-f43d-4699-ba3b-f7b1bfec35d0\" (UID: \"4c998019-f43d-4699-ba3b-f7b1bfec35d0\") " Feb 02 21:38:48 crc kubenswrapper[4789]: I0202 21:38:48.875746 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a55a234d-1af7-4e73-8f93-b614162be0c3-etc-swift\") pod \"a55a234d-1af7-4e73-8f93-b614162be0c3\" (UID: \"a55a234d-1af7-4e73-8f93-b614162be0c3\") " Feb 02 21:38:48 crc kubenswrapper[4789]: I0202 21:38:48.875820 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a55a234d-1af7-4e73-8f93-b614162be0c3-combined-ca-bundle\") pod \"a55a234d-1af7-4e73-8f93-b614162be0c3\" (UID: \"a55a234d-1af7-4e73-8f93-b614162be0c3\") " Feb 02 21:38:48 crc kubenswrapper[4789]: I0202 21:38:48.875853 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a55a234d-1af7-4e73-8f93-b614162be0c3-swiftconf\") pod \"a55a234d-1af7-4e73-8f93-b614162be0c3\" (UID: \"a55a234d-1af7-4e73-8f93-b614162be0c3\") " Feb 02 21:38:48 crc kubenswrapper[4789]: I0202 21:38:48.875895 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a55a234d-1af7-4e73-8f93-b614162be0c3-scripts\") pod \"a55a234d-1af7-4e73-8f93-b614162be0c3\" (UID: \"a55a234d-1af7-4e73-8f93-b614162be0c3\") " Feb 02 21:38:48 crc kubenswrapper[4789]: I0202 21:38:48.875911 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wq54\" (UniqueName: \"kubernetes.io/projected/4c998019-f43d-4699-ba3b-f7b1bfec35d0-kube-api-access-8wq54\") pod \"4c998019-f43d-4699-ba3b-f7b1bfec35d0\" (UID: \"4c998019-f43d-4699-ba3b-f7b1bfec35d0\") " Feb 02 21:38:48 crc kubenswrapper[4789]: I0202 21:38:48.875966 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a55a234d-1af7-4e73-8f93-b614162be0c3-ring-data-devices\") pod \"a55a234d-1af7-4e73-8f93-b614162be0c3\" (UID: \"a55a234d-1af7-4e73-8f93-b614162be0c3\") " Feb 02 21:38:48 crc kubenswrapper[4789]: I0202 21:38:48.876009 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqmd2\" (UniqueName: \"kubernetes.io/projected/a55a234d-1af7-4e73-8f93-b614162be0c3-kube-api-access-bqmd2\") pod \"a55a234d-1af7-4e73-8f93-b614162be0c3\" (UID: \"a55a234d-1af7-4e73-8f93-b614162be0c3\") " Feb 02 21:38:48 crc kubenswrapper[4789]: I0202 21:38:48.876039 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a55a234d-1af7-4e73-8f93-b614162be0c3-dispersionconf\") pod \"a55a234d-1af7-4e73-8f93-b614162be0c3\" (UID: \"a55a234d-1af7-4e73-8f93-b614162be0c3\") " Feb 02 21:38:48 crc kubenswrapper[4789]: I0202 21:38:48.876497 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c998019-f43d-4699-ba3b-f7b1bfec35d0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"4c998019-f43d-4699-ba3b-f7b1bfec35d0" (UID: "4c998019-f43d-4699-ba3b-f7b1bfec35d0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:38:48 crc kubenswrapper[4789]: I0202 21:38:48.877004 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a55a234d-1af7-4e73-8f93-b614162be0c3-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "a55a234d-1af7-4e73-8f93-b614162be0c3" (UID: "a55a234d-1af7-4e73-8f93-b614162be0c3"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:38:48 crc kubenswrapper[4789]: I0202 21:38:48.878163 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a55a234d-1af7-4e73-8f93-b614162be0c3-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "a55a234d-1af7-4e73-8f93-b614162be0c3" (UID: "a55a234d-1af7-4e73-8f93-b614162be0c3"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:38:48 crc kubenswrapper[4789]: I0202 21:38:48.882625 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c998019-f43d-4699-ba3b-f7b1bfec35d0-kube-api-access-8wq54" (OuterVolumeSpecName: "kube-api-access-8wq54") pod "4c998019-f43d-4699-ba3b-f7b1bfec35d0" (UID: "4c998019-f43d-4699-ba3b-f7b1bfec35d0"). InnerVolumeSpecName "kube-api-access-8wq54". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:38:48 crc kubenswrapper[4789]: I0202 21:38:48.886116 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a55a234d-1af7-4e73-8f93-b614162be0c3-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "a55a234d-1af7-4e73-8f93-b614162be0c3" (UID: "a55a234d-1af7-4e73-8f93-b614162be0c3"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:38:48 crc kubenswrapper[4789]: I0202 21:38:48.889675 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a55a234d-1af7-4e73-8f93-b614162be0c3-kube-api-access-bqmd2" (OuterVolumeSpecName: "kube-api-access-bqmd2") pod "a55a234d-1af7-4e73-8f93-b614162be0c3" (UID: "a55a234d-1af7-4e73-8f93-b614162be0c3"). InnerVolumeSpecName "kube-api-access-bqmd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:38:48 crc kubenswrapper[4789]: I0202 21:38:48.902636 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a55a234d-1af7-4e73-8f93-b614162be0c3-scripts" (OuterVolumeSpecName: "scripts") pod "a55a234d-1af7-4e73-8f93-b614162be0c3" (UID: "a55a234d-1af7-4e73-8f93-b614162be0c3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:38:48 crc kubenswrapper[4789]: I0202 21:38:48.903115 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a55a234d-1af7-4e73-8f93-b614162be0c3-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "a55a234d-1af7-4e73-8f93-b614162be0c3" (UID: "a55a234d-1af7-4e73-8f93-b614162be0c3"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:38:48 crc kubenswrapper[4789]: I0202 21:38:48.905317 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a55a234d-1af7-4e73-8f93-b614162be0c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a55a234d-1af7-4e73-8f93-b614162be0c3" (UID: "a55a234d-1af7-4e73-8f93-b614162be0c3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:38:48 crc kubenswrapper[4789]: I0202 21:38:48.977846 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a55a234d-1af7-4e73-8f93-b614162be0c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:48 crc kubenswrapper[4789]: I0202 21:38:48.977875 4789 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a55a234d-1af7-4e73-8f93-b614162be0c3-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:48 crc kubenswrapper[4789]: I0202 21:38:48.977884 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a55a234d-1af7-4e73-8f93-b614162be0c3-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:48 crc kubenswrapper[4789]: I0202 21:38:48.977892 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wq54\" (UniqueName: \"kubernetes.io/projected/4c998019-f43d-4699-ba3b-f7b1bfec35d0-kube-api-access-8wq54\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:48 crc kubenswrapper[4789]: I0202 21:38:48.977902 4789 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a55a234d-1af7-4e73-8f93-b614162be0c3-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:48 crc kubenswrapper[4789]: I0202 21:38:48.977910 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqmd2\" (UniqueName: \"kubernetes.io/projected/a55a234d-1af7-4e73-8f93-b614162be0c3-kube-api-access-bqmd2\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:48 crc kubenswrapper[4789]: I0202 21:38:48.977918 4789 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a55a234d-1af7-4e73-8f93-b614162be0c3-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:48 crc kubenswrapper[4789]: I0202 21:38:48.977925 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c998019-f43d-4699-ba3b-f7b1bfec35d0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:48 crc kubenswrapper[4789]: I0202 21:38:48.977933 4789 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a55a234d-1af7-4e73-8f93-b614162be0c3-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 02 21:38:49 crc kubenswrapper[4789]: I0202 21:38:49.208800 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-q8pr6" event={"ID":"a55a234d-1af7-4e73-8f93-b614162be0c3","Type":"ContainerDied","Data":"e025359b1fbebbafbaff4cad6f83eeeae6325f1acc25745ecbd27a56dd61d625"} Feb 02 21:38:49 crc kubenswrapper[4789]: I0202 21:38:49.208837 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e025359b1fbebbafbaff4cad6f83eeeae6325f1acc25745ecbd27a56dd61d625" Feb 02 21:38:49 crc kubenswrapper[4789]: I0202 21:38:49.208880 4789 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/swift-ring-rebalance-q8pr6" Feb 02 21:38:49 crc kubenswrapper[4789]: I0202 21:38:49.211872 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"87f6bccb-d5fc-4868-aca2-734d16898805","Type":"ContainerStarted","Data":"db66ce76b54133027343e52fa4a37bee9603c2a78eccea429cb9107f7f66533b"} Feb 02 21:38:49 crc kubenswrapper[4789]: I0202 21:38:49.214814 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9c62m" event={"ID":"4c998019-f43d-4699-ba3b-f7b1bfec35d0","Type":"ContainerDied","Data":"755b24cbffbd798bd207c2f1bf68c6bbf3fc6ba50123e43ed7ab7af483f207b0"} Feb 02 21:38:49 crc kubenswrapper[4789]: I0202 21:38:49.214839 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="755b24cbffbd798bd207c2f1bf68c6bbf3fc6ba50123e43ed7ab7af483f207b0" Feb 02 21:38:49 crc kubenswrapper[4789]: I0202 21:38:49.214887 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9c62m" Feb 02 21:38:50 crc kubenswrapper[4789]: I0202 21:38:50.231133 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"87f6bccb-d5fc-4868-aca2-734d16898805","Type":"ContainerStarted","Data":"b2a613095dfded30ccf9e469a7904687f82e0e1076df8bb3c12d61ae91f09cbb"} Feb 02 21:38:50 crc kubenswrapper[4789]: I0202 21:38:50.231438 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"87f6bccb-d5fc-4868-aca2-734d16898805","Type":"ContainerStarted","Data":"dc1d8d39fd0b72fbfd8a3196945369271e6997b06ed178e120be5a8c661363c0"} Feb 02 21:38:51 crc kubenswrapper[4789]: I0202 21:38:51.239957 4789 generic.go:334] "Generic (PLEG): container finished" podID="b8917d54-451e-4a56-9e8a-142bb5db17e1" containerID="1e002b2adadc7aa45b24e8a9b6b844784752243592f8b12913aa7c87780c5192" exitCode=0 Feb 02 21:38:51 crc kubenswrapper[4789]: I0202 21:38:51.240023 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b8917d54-451e-4a56-9e8a-142bb5db17e1","Type":"ContainerDied","Data":"1e002b2adadc7aa45b24e8a9b6b844784752243592f8b12913aa7c87780c5192"} Feb 02 21:38:51 crc kubenswrapper[4789]: I0202 21:38:51.243615 4789 generic.go:334] "Generic (PLEG): container finished" podID="b4db4b23-dae0-42a5-ad47-3336073d0b6a" containerID="b73f21ef1c3cee1aa5a9891e737707de6085ae08c57a737e8bf2cb9c0bd4154c" exitCode=0 Feb 02 21:38:51 crc kubenswrapper[4789]: I0202 21:38:51.243668 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b4db4b23-dae0-42a5-ad47-3336073d0b6a","Type":"ContainerDied","Data":"b73f21ef1c3cee1aa5a9891e737707de6085ae08c57a737e8bf2cb9c0bd4154c"} Feb 02 21:38:52 crc kubenswrapper[4789]: I0202 21:38:52.138231 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-9c62m"] Feb 02 21:38:52 crc kubenswrapper[4789]: I0202 21:38:52.153313 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-9c62m"] Feb 02 21:38:52 crc kubenswrapper[4789]: I0202 21:38:52.438111 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c998019-f43d-4699-ba3b-f7b1bfec35d0" path="/var/lib/kubelet/pods/4c998019-f43d-4699-ba3b-f7b1bfec35d0/volumes" Feb 02 21:38:53 crc kubenswrapper[4789]: I0202 21:38:53.222940 4789 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/ovn-controller-gjls4" podUID="c571c3a8-8470-4076-adde-89416f071937" containerName="ovn-controller" probeResult="failure" output=< Feb 02 21:38:53 crc kubenswrapper[4789]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 02 21:38:53 crc kubenswrapper[4789]: > Feb 02 21:38:53 crc kubenswrapper[4789]: I0202 21:38:53.250784 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-tjn59" Feb 02 21:38:53 crc kubenswrapper[4789]: I0202 21:38:53.268184 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b8917d54-451e-4a56-9e8a-142bb5db17e1","Type":"ContainerStarted","Data":"c1c71c5e760475551c02af4a87ac69c6090f0b50c9bec80607975d728d2b02e2"} Feb 02 21:38:53 crc kubenswrapper[4789]: I0202 21:38:53.269410 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 02 21:38:53 crc kubenswrapper[4789]: I0202 21:38:53.271857 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"87f6bccb-d5fc-4868-aca2-734d16898805","Type":"ContainerStarted","Data":"b07c3c791de729e8c85f1895c49db2a43d74603b713f577900b8371d9d871050"} Feb 02 21:38:53 crc kubenswrapper[4789]: I0202 21:38:53.276664 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b4db4b23-dae0-42a5-ad47-3336073d0b6a","Type":"ContainerStarted","Data":"669108a572e6de86b6fe38547a253f5eabaaaa84647d8dcb02f45a63322c1bd9"} Feb 02 21:38:53 crc kubenswrapper[4789]: I0202 21:38:53.277235 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 02 21:38:53 crc kubenswrapper[4789]: I0202 21:38:53.284350 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-tjn59" Feb 02 21:38:53 crc kubenswrapper[4789]: I0202 21:38:53.303347 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=50.336701695 podStartE2EDuration="59.303329058s" podCreationTimestamp="2026-02-02 21:37:54 +0000 UTC" firstStartedPulling="2026-02-02 21:38:07.002175434 +0000 UTC m=+1107.297200453" lastFinishedPulling="2026-02-02 21:38:15.968802797 +0000 UTC m=+1116.263827816" observedRunningTime="2026-02-02 21:38:53.2973658 +0000 UTC m=+1153.592390839" watchObservedRunningTime="2026-02-02 21:38:53.303329058 +0000 UTC m=+1153.598354077" Feb 02 21:38:53 crc kubenswrapper[4789]: I0202 21:38:53.328723 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=49.333904139 podStartE2EDuration="59.328708035s" podCreationTimestamp="2026-02-02 21:37:54 +0000 UTC" firstStartedPulling="2026-02-02 21:38:06.067186541 +0000 UTC m=+1106.362211600" lastFinishedPulling="2026-02-02 21:38:16.061990437 +0000 UTC m=+1116.357015496" observedRunningTime="2026-02-02 21:38:53.321399689 +0000 UTC m=+1153.616424708" watchObservedRunningTime="2026-02-02 21:38:53.328708035 +0000 UTC m=+1153.623733054" Feb 02 21:38:53 crc kubenswrapper[4789]: I0202 21:38:53.540190 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-gjls4-config-xgfd5"] Feb 02 21:38:53 crc kubenswrapper[4789]: E0202 21:38:53.540637 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c998019-f43d-4699-ba3b-f7b1bfec35d0" containerName="mariadb-account-create-update" 
Feb 02 21:38:53 crc kubenswrapper[4789]: I0202 21:38:53.540657 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c998019-f43d-4699-ba3b-f7b1bfec35d0" containerName="mariadb-account-create-update" Feb 02 21:38:53 crc kubenswrapper[4789]: E0202 21:38:53.540669 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a55a234d-1af7-4e73-8f93-b614162be0c3" containerName="swift-ring-rebalance" Feb 02 21:38:53 crc kubenswrapper[4789]: I0202 21:38:53.540677 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="a55a234d-1af7-4e73-8f93-b614162be0c3" containerName="swift-ring-rebalance" Feb 02 21:38:53 crc kubenswrapper[4789]: I0202 21:38:53.540875 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="a55a234d-1af7-4e73-8f93-b614162be0c3" containerName="swift-ring-rebalance" Feb 02 21:38:53 crc kubenswrapper[4789]: I0202 21:38:53.540896 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c998019-f43d-4699-ba3b-f7b1bfec35d0" containerName="mariadb-account-create-update" Feb 02 21:38:53 crc kubenswrapper[4789]: I0202 21:38:53.541550 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gjls4-config-xgfd5" Feb 02 21:38:53 crc kubenswrapper[4789]: I0202 21:38:53.557431 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 02 21:38:53 crc kubenswrapper[4789]: I0202 21:38:53.558991 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gjls4-config-xgfd5"] Feb 02 21:38:53 crc kubenswrapper[4789]: I0202 21:38:53.661717 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0c45a38b-1868-48cc-b0b9-3f70adeeebff-var-run\") pod \"ovn-controller-gjls4-config-xgfd5\" (UID: \"0c45a38b-1868-48cc-b0b9-3f70adeeebff\") " pod="openstack/ovn-controller-gjls4-config-xgfd5" Feb 02 21:38:53 crc kubenswrapper[4789]: I0202 21:38:53.661764 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c45a38b-1868-48cc-b0b9-3f70adeeebff-scripts\") pod \"ovn-controller-gjls4-config-xgfd5\" (UID: \"0c45a38b-1868-48cc-b0b9-3f70adeeebff\") " pod="openstack/ovn-controller-gjls4-config-xgfd5" Feb 02 21:38:53 crc kubenswrapper[4789]: I0202 21:38:53.661794 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0c45a38b-1868-48cc-b0b9-3f70adeeebff-var-log-ovn\") pod \"ovn-controller-gjls4-config-xgfd5\" (UID: \"0c45a38b-1868-48cc-b0b9-3f70adeeebff\") " pod="openstack/ovn-controller-gjls4-config-xgfd5" Feb 02 21:38:53 crc kubenswrapper[4789]: I0202 21:38:53.661871 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0c45a38b-1868-48cc-b0b9-3f70adeeebff-additional-scripts\") pod \"ovn-controller-gjls4-config-xgfd5\" (UID: \"0c45a38b-1868-48cc-b0b9-3f70adeeebff\") " pod="openstack/ovn-controller-gjls4-config-xgfd5" Feb 02 21:38:53 crc kubenswrapper[4789]: I0202 21:38:53.661913 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwcvq\" (UniqueName: \"kubernetes.io/projected/0c45a38b-1868-48cc-b0b9-3f70adeeebff-kube-api-access-kwcvq\") pod \"ovn-controller-gjls4-config-xgfd5\" (UID: 
\"0c45a38b-1868-48cc-b0b9-3f70adeeebff\") " pod="openstack/ovn-controller-gjls4-config-xgfd5" Feb 02 21:38:53 crc kubenswrapper[4789]: I0202 21:38:53.661945 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0c45a38b-1868-48cc-b0b9-3f70adeeebff-var-run-ovn\") pod \"ovn-controller-gjls4-config-xgfd5\" (UID: \"0c45a38b-1868-48cc-b0b9-3f70adeeebff\") " pod="openstack/ovn-controller-gjls4-config-xgfd5" Feb 02 21:38:53 crc kubenswrapper[4789]: I0202 21:38:53.763462 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0c45a38b-1868-48cc-b0b9-3f70adeeebff-var-run-ovn\") pod \"ovn-controller-gjls4-config-xgfd5\" (UID: \"0c45a38b-1868-48cc-b0b9-3f70adeeebff\") " pod="openstack/ovn-controller-gjls4-config-xgfd5" Feb 02 21:38:53 crc kubenswrapper[4789]: I0202 21:38:53.763569 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0c45a38b-1868-48cc-b0b9-3f70adeeebff-var-run\") pod \"ovn-controller-gjls4-config-xgfd5\" (UID: \"0c45a38b-1868-48cc-b0b9-3f70adeeebff\") " pod="openstack/ovn-controller-gjls4-config-xgfd5" Feb 02 21:38:53 crc kubenswrapper[4789]: I0202 21:38:53.763611 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c45a38b-1868-48cc-b0b9-3f70adeeebff-scripts\") pod \"ovn-controller-gjls4-config-xgfd5\" (UID: \"0c45a38b-1868-48cc-b0b9-3f70adeeebff\") " pod="openstack/ovn-controller-gjls4-config-xgfd5" Feb 02 21:38:53 crc kubenswrapper[4789]: I0202 21:38:53.763639 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0c45a38b-1868-48cc-b0b9-3f70adeeebff-var-log-ovn\") pod \"ovn-controller-gjls4-config-xgfd5\" (UID: \"0c45a38b-1868-48cc-b0b9-3f70adeeebff\") " pod="openstack/ovn-controller-gjls4-config-xgfd5" Feb 02 21:38:53 crc kubenswrapper[4789]: I0202 21:38:53.763714 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0c45a38b-1868-48cc-b0b9-3f70adeeebff-additional-scripts\") pod \"ovn-controller-gjls4-config-xgfd5\" (UID: \"0c45a38b-1868-48cc-b0b9-3f70adeeebff\") " pod="openstack/ovn-controller-gjls4-config-xgfd5" Feb 02 21:38:53 crc kubenswrapper[4789]: I0202 21:38:53.763753 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwcvq\" (UniqueName: \"kubernetes.io/projected/0c45a38b-1868-48cc-b0b9-3f70adeeebff-kube-api-access-kwcvq\") pod \"ovn-controller-gjls4-config-xgfd5\" (UID: \"0c45a38b-1868-48cc-b0b9-3f70adeeebff\") " pod="openstack/ovn-controller-gjls4-config-xgfd5" Feb 02 21:38:53 crc kubenswrapper[4789]: I0202 21:38:53.764506 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0c45a38b-1868-48cc-b0b9-3f70adeeebff-var-run-ovn\") pod \"ovn-controller-gjls4-config-xgfd5\" (UID: \"0c45a38b-1868-48cc-b0b9-3f70adeeebff\") " pod="openstack/ovn-controller-gjls4-config-xgfd5" Feb 02 21:38:53 crc kubenswrapper[4789]: I0202 21:38:53.764558 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0c45a38b-1868-48cc-b0b9-3f70adeeebff-var-run\") pod \"ovn-controller-gjls4-config-xgfd5\" (UID: 
\"0c45a38b-1868-48cc-b0b9-3f70adeeebff\") " pod="openstack/ovn-controller-gjls4-config-xgfd5" Feb 02 21:38:53 crc kubenswrapper[4789]: I0202 21:38:53.766293 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c45a38b-1868-48cc-b0b9-3f70adeeebff-scripts\") pod \"ovn-controller-gjls4-config-xgfd5\" (UID: \"0c45a38b-1868-48cc-b0b9-3f70adeeebff\") " pod="openstack/ovn-controller-gjls4-config-xgfd5" Feb 02 21:38:53 crc kubenswrapper[4789]: I0202 21:38:53.766350 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0c45a38b-1868-48cc-b0b9-3f70adeeebff-var-log-ovn\") pod \"ovn-controller-gjls4-config-xgfd5\" (UID: \"0c45a38b-1868-48cc-b0b9-3f70adeeebff\") " pod="openstack/ovn-controller-gjls4-config-xgfd5" Feb 02 21:38:53 crc kubenswrapper[4789]: I0202 21:38:53.766766 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0c45a38b-1868-48cc-b0b9-3f70adeeebff-additional-scripts\") pod \"ovn-controller-gjls4-config-xgfd5\" (UID: \"0c45a38b-1868-48cc-b0b9-3f70adeeebff\") " pod="openstack/ovn-controller-gjls4-config-xgfd5" Feb 02 21:38:53 crc kubenswrapper[4789]: I0202 21:38:53.793912 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwcvq\" (UniqueName: \"kubernetes.io/projected/0c45a38b-1868-48cc-b0b9-3f70adeeebff-kube-api-access-kwcvq\") pod \"ovn-controller-gjls4-config-xgfd5\" (UID: \"0c45a38b-1868-48cc-b0b9-3f70adeeebff\") " pod="openstack/ovn-controller-gjls4-config-xgfd5" Feb 02 21:38:53 crc kubenswrapper[4789]: I0202 21:38:53.870567 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gjls4-config-xgfd5" Feb 02 21:38:57 crc kubenswrapper[4789]: I0202 21:38:57.146451 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-xzh8f"] Feb 02 21:38:57 crc kubenswrapper[4789]: I0202 21:38:57.149881 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xzh8f" Feb 02 21:38:57 crc kubenswrapper[4789]: I0202 21:38:57.151999 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 02 21:38:57 crc kubenswrapper[4789]: I0202 21:38:57.168562 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xzh8f"] Feb 02 21:38:57 crc kubenswrapper[4789]: I0202 21:38:57.239700 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a32183f3-d42d-459f-8fd6-268d398cbb82-operator-scripts\") pod \"root-account-create-update-xzh8f\" (UID: \"a32183f3-d42d-459f-8fd6-268d398cbb82\") " pod="openstack/root-account-create-update-xzh8f" Feb 02 21:38:57 crc kubenswrapper[4789]: I0202 21:38:57.239768 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgzsw\" (UniqueName: \"kubernetes.io/projected/a32183f3-d42d-459f-8fd6-268d398cbb82-kube-api-access-hgzsw\") pod \"root-account-create-update-xzh8f\" (UID: \"a32183f3-d42d-459f-8fd6-268d398cbb82\") " pod="openstack/root-account-create-update-xzh8f" Feb 02 21:38:57 crc kubenswrapper[4789]: I0202 21:38:57.340769 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a32183f3-d42d-459f-8fd6-268d398cbb82-operator-scripts\") pod \"root-account-create-update-xzh8f\" (UID: \"a32183f3-d42d-459f-8fd6-268d398cbb82\") " pod="openstack/root-account-create-update-xzh8f" Feb 02 21:38:57 crc kubenswrapper[4789]: I0202 21:38:57.340839 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgzsw\" (UniqueName: \"kubernetes.io/projected/a32183f3-d42d-459f-8fd6-268d398cbb82-kube-api-access-hgzsw\") pod \"root-account-create-update-xzh8f\" (UID: \"a32183f3-d42d-459f-8fd6-268d398cbb82\") " pod="openstack/root-account-create-update-xzh8f" Feb 02 21:38:57 crc kubenswrapper[4789]: I0202 21:38:57.341852 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a32183f3-d42d-459f-8fd6-268d398cbb82-operator-scripts\") pod \"root-account-create-update-xzh8f\" (UID: \"a32183f3-d42d-459f-8fd6-268d398cbb82\") " pod="openstack/root-account-create-update-xzh8f" Feb 02 21:38:57 crc kubenswrapper[4789]: I0202 21:38:57.373979 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgzsw\" (UniqueName: \"kubernetes.io/projected/a32183f3-d42d-459f-8fd6-268d398cbb82-kube-api-access-hgzsw\") pod \"root-account-create-update-xzh8f\" (UID: \"a32183f3-d42d-459f-8fd6-268d398cbb82\") " pod="openstack/root-account-create-update-xzh8f" Feb 02 21:38:57 crc kubenswrapper[4789]: I0202 21:38:57.477010 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xzh8f" Feb 02 21:38:58 crc kubenswrapper[4789]: I0202 21:38:58.213009 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-gjls4" podUID="c571c3a8-8470-4076-adde-89416f071937" containerName="ovn-controller" probeResult="failure" output=< Feb 02 21:38:58 crc kubenswrapper[4789]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 02 21:38:58 crc kubenswrapper[4789]: > Feb 02 21:39:00 crc kubenswrapper[4789]: I0202 21:39:00.109169 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xzh8f"] Feb 02 21:39:00 crc kubenswrapper[4789]: I0202 21:39:00.212884 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gjls4-config-xgfd5"] Feb 02 21:39:00 crc kubenswrapper[4789]: W0202 21:39:00.315366 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda32183f3_d42d_459f_8fd6_268d398cbb82.slice/crio-d6b558f55eed21ba35e39235864762dc371ccd80943f1ab0e2261589ca37349b WatchSource:0}: Error finding container d6b558f55eed21ba35e39235864762dc371ccd80943f1ab0e2261589ca37349b: Status 404 returned error can't find the container with id d6b558f55eed21ba35e39235864762dc371ccd80943f1ab0e2261589ca37349b Feb 02 21:39:00 crc kubenswrapper[4789]: W0202 21:39:00.316668 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c45a38b_1868_48cc_b0b9_3f70adeeebff.slice/crio-a143f757064d2a5c7e93f098e58d5dca57fe5f295988436667cc54ab79c46f9c WatchSource:0}: Error finding container a143f757064d2a5c7e93f098e58d5dca57fe5f295988436667cc54ab79c46f9c: Status 404 returned error can't find the container with id a143f757064d2a5c7e93f098e58d5dca57fe5f295988436667cc54ab79c46f9c Feb 02 21:39:00 crc kubenswrapper[4789]: I0202 21:39:00.337096 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xzh8f" event={"ID":"a32183f3-d42d-459f-8fd6-268d398cbb82","Type":"ContainerStarted","Data":"d6b558f55eed21ba35e39235864762dc371ccd80943f1ab0e2261589ca37349b"} Feb 02 21:39:00 crc kubenswrapper[4789]: I0202 21:39:00.338197 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gjls4-config-xgfd5" event={"ID":"0c45a38b-1868-48cc-b0b9-3f70adeeebff","Type":"ContainerStarted","Data":"a143f757064d2a5c7e93f098e58d5dca57fe5f295988436667cc54ab79c46f9c"} Feb 02 21:39:01 crc kubenswrapper[4789]: I0202 21:39:01.355015 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"87f6bccb-d5fc-4868-aca2-734d16898805","Type":"ContainerStarted","Data":"aab045fa01e8633951d3b23cb6099a13479fc7bde9e851b10aeb53ad724f1a5a"} Feb 02 21:39:01 crc kubenswrapper[4789]: I0202 21:39:01.355620 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"87f6bccb-d5fc-4868-aca2-734d16898805","Type":"ContainerStarted","Data":"d8b8973838965c20503722920a92fa3f55adad61b2b29d0ad5b46e04847ba642"} Feb 02 21:39:01 crc kubenswrapper[4789]: I0202 21:39:01.355631 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"87f6bccb-d5fc-4868-aca2-734d16898805","Type":"ContainerStarted","Data":"f8710e800cb558add663bfff070701d51801997c411687aea039144baf3f407d"} Feb 02 21:39:01 crc kubenswrapper[4789]: I0202 21:39:01.358130 4789 
generic.go:334] "Generic (PLEG): container finished" podID="a32183f3-d42d-459f-8fd6-268d398cbb82" containerID="96a08e8fb516847374b2ab373ad97fdcb73d2efa2aed29cc4309574c6e8ffd3b" exitCode=0 Feb 02 21:39:01 crc kubenswrapper[4789]: I0202 21:39:01.358499 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xzh8f" event={"ID":"a32183f3-d42d-459f-8fd6-268d398cbb82","Type":"ContainerDied","Data":"96a08e8fb516847374b2ab373ad97fdcb73d2efa2aed29cc4309574c6e8ffd3b"} Feb 02 21:39:01 crc kubenswrapper[4789]: I0202 21:39:01.363844 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-sxqwc" event={"ID":"c5c4da6b-2b71-4018-90ce-d569b9f03cfd","Type":"ContainerStarted","Data":"6a85531001078e7b0623762a1ac1ae36a06d2b03afa2a75de0c40ceaef0cf5ef"} Feb 02 21:39:01 crc kubenswrapper[4789]: I0202 21:39:01.371377 4789 generic.go:334] "Generic (PLEG): container finished" podID="0c45a38b-1868-48cc-b0b9-3f70adeeebff" containerID="f47ff43ff041b8e71635de9af25216b371b7e740200d70f77ff890795c3a4085" exitCode=0 Feb 02 21:39:01 crc kubenswrapper[4789]: I0202 21:39:01.371462 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gjls4-config-xgfd5" event={"ID":"0c45a38b-1868-48cc-b0b9-3f70adeeebff","Type":"ContainerDied","Data":"f47ff43ff041b8e71635de9af25216b371b7e740200d70f77ff890795c3a4085"} Feb 02 21:39:01 crc kubenswrapper[4789]: I0202 21:39:01.454488 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-sxqwc" podStartSLOduration=2.874240844 podStartE2EDuration="17.454471234s" podCreationTimestamp="2026-02-02 21:38:44 +0000 UTC" firstStartedPulling="2026-02-02 21:38:45.24379399 +0000 UTC m=+1145.538819009" lastFinishedPulling="2026-02-02 21:38:59.82402439 +0000 UTC m=+1160.119049399" observedRunningTime="2026-02-02 21:39:01.452901729 +0000 UTC m=+1161.747926758" watchObservedRunningTime="2026-02-02 21:39:01.454471234 +0000 UTC m=+1161.749496253" Feb 02 21:39:02 crc kubenswrapper[4789]: I0202 21:39:02.384213 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"87f6bccb-d5fc-4868-aca2-734d16898805","Type":"ContainerStarted","Data":"7a20dacf9652208f7b99bf2a1079fa1a4eb150591b3740a517f85585c21a53d1"} Feb 02 21:39:02 crc kubenswrapper[4789]: I0202 21:39:02.786496 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gjls4-config-xgfd5" Feb 02 21:39:02 crc kubenswrapper[4789]: I0202 21:39:02.794087 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xzh8f" Feb 02 21:39:02 crc kubenswrapper[4789]: I0202 21:39:02.938066 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0c45a38b-1868-48cc-b0b9-3f70adeeebff-var-run\") pod \"0c45a38b-1868-48cc-b0b9-3f70adeeebff\" (UID: \"0c45a38b-1868-48cc-b0b9-3f70adeeebff\") " Feb 02 21:39:02 crc kubenswrapper[4789]: I0202 21:39:02.938358 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0c45a38b-1868-48cc-b0b9-3f70adeeebff-var-run-ovn\") pod \"0c45a38b-1868-48cc-b0b9-3f70adeeebff\" (UID: \"0c45a38b-1868-48cc-b0b9-3f70adeeebff\") " Feb 02 21:39:02 crc kubenswrapper[4789]: I0202 21:39:02.938568 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0c45a38b-1868-48cc-b0b9-3f70adeeebff-additional-scripts\") pod \"0c45a38b-1868-48cc-b0b9-3f70adeeebff\" (UID: \"0c45a38b-1868-48cc-b0b9-3f70adeeebff\") " Feb 02 21:39:02 crc kubenswrapper[4789]: I0202 21:39:02.938779 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwcvq\" (UniqueName: \"kubernetes.io/projected/0c45a38b-1868-48cc-b0b9-3f70adeeebff-kube-api-access-kwcvq\") pod \"0c45a38b-1868-48cc-b0b9-3f70adeeebff\" (UID: \"0c45a38b-1868-48cc-b0b9-3f70adeeebff\") " Feb 02 21:39:02 crc kubenswrapper[4789]: I0202 21:39:02.938951 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0c45a38b-1868-48cc-b0b9-3f70adeeebff-var-log-ovn\") pod \"0c45a38b-1868-48cc-b0b9-3f70adeeebff\" (UID: \"0c45a38b-1868-48cc-b0b9-3f70adeeebff\") " Feb 02 21:39:02 crc kubenswrapper[4789]: I0202 21:39:02.938179 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c45a38b-1868-48cc-b0b9-3f70adeeebff-var-run" (OuterVolumeSpecName: "var-run") pod "0c45a38b-1868-48cc-b0b9-3f70adeeebff" (UID: "0c45a38b-1868-48cc-b0b9-3f70adeeebff"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:39:02 crc kubenswrapper[4789]: I0202 21:39:02.938450 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c45a38b-1868-48cc-b0b9-3f70adeeebff-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "0c45a38b-1868-48cc-b0b9-3f70adeeebff" (UID: "0c45a38b-1868-48cc-b0b9-3f70adeeebff"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:39:02 crc kubenswrapper[4789]: I0202 21:39:02.939045 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c45a38b-1868-48cc-b0b9-3f70adeeebff-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "0c45a38b-1868-48cc-b0b9-3f70adeeebff" (UID: "0c45a38b-1868-48cc-b0b9-3f70adeeebff"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:39:02 crc kubenswrapper[4789]: I0202 21:39:02.939428 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c45a38b-1868-48cc-b0b9-3f70adeeebff-scripts\") pod \"0c45a38b-1868-48cc-b0b9-3f70adeeebff\" (UID: \"0c45a38b-1868-48cc-b0b9-3f70adeeebff\") " Feb 02 21:39:02 crc kubenswrapper[4789]: I0202 21:39:02.939720 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgzsw\" (UniqueName: \"kubernetes.io/projected/a32183f3-d42d-459f-8fd6-268d398cbb82-kube-api-access-hgzsw\") pod \"a32183f3-d42d-459f-8fd6-268d398cbb82\" (UID: \"a32183f3-d42d-459f-8fd6-268d398cbb82\") " Feb 02 21:39:02 crc kubenswrapper[4789]: I0202 21:39:02.939918 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a32183f3-d42d-459f-8fd6-268d398cbb82-operator-scripts\") pod \"a32183f3-d42d-459f-8fd6-268d398cbb82\" (UID: \"a32183f3-d42d-459f-8fd6-268d398cbb82\") " Feb 02 21:39:02 crc kubenswrapper[4789]: I0202 21:39:02.940250 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c45a38b-1868-48cc-b0b9-3f70adeeebff-scripts" (OuterVolumeSpecName: "scripts") pod "0c45a38b-1868-48cc-b0b9-3f70adeeebff" (UID: "0c45a38b-1868-48cc-b0b9-3f70adeeebff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:39:02 crc kubenswrapper[4789]: I0202 21:39:02.940429 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c45a38b-1868-48cc-b0b9-3f70adeeebff-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "0c45a38b-1868-48cc-b0b9-3f70adeeebff" (UID: "0c45a38b-1868-48cc-b0b9-3f70adeeebff"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:39:02 crc kubenswrapper[4789]: I0202 21:39:02.940801 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a32183f3-d42d-459f-8fd6-268d398cbb82-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a32183f3-d42d-459f-8fd6-268d398cbb82" (UID: "a32183f3-d42d-459f-8fd6-268d398cbb82"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:39:02 crc kubenswrapper[4789]: I0202 21:39:02.940980 4789 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0c45a38b-1868-48cc-b0b9-3f70adeeebff-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:02 crc kubenswrapper[4789]: I0202 21:39:02.941105 4789 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0c45a38b-1868-48cc-b0b9-3f70adeeebff-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:02 crc kubenswrapper[4789]: I0202 21:39:02.941221 4789 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0c45a38b-1868-48cc-b0b9-3f70adeeebff-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:02 crc kubenswrapper[4789]: I0202 21:39:02.941353 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c45a38b-1868-48cc-b0b9-3f70adeeebff-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:02 crc kubenswrapper[4789]: I0202 21:39:02.942924 4789 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0c45a38b-1868-48cc-b0b9-3f70adeeebff-var-run\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:02 crc kubenswrapper[4789]: I0202 21:39:02.944415 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a32183f3-d42d-459f-8fd6-268d398cbb82-kube-api-access-hgzsw" (OuterVolumeSpecName: "kube-api-access-hgzsw") pod "a32183f3-d42d-459f-8fd6-268d398cbb82" (UID: "a32183f3-d42d-459f-8fd6-268d398cbb82"). InnerVolumeSpecName "kube-api-access-hgzsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:39:02 crc kubenswrapper[4789]: I0202 21:39:02.944545 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c45a38b-1868-48cc-b0b9-3f70adeeebff-kube-api-access-kwcvq" (OuterVolumeSpecName: "kube-api-access-kwcvq") pod "0c45a38b-1868-48cc-b0b9-3f70adeeebff" (UID: "0c45a38b-1868-48cc-b0b9-3f70adeeebff"). InnerVolumeSpecName "kube-api-access-kwcvq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:39:03 crc kubenswrapper[4789]: I0202 21:39:03.044482 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgzsw\" (UniqueName: \"kubernetes.io/projected/a32183f3-d42d-459f-8fd6-268d398cbb82-kube-api-access-hgzsw\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:03 crc kubenswrapper[4789]: I0202 21:39:03.044724 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a32183f3-d42d-459f-8fd6-268d398cbb82-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:03 crc kubenswrapper[4789]: I0202 21:39:03.044742 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwcvq\" (UniqueName: \"kubernetes.io/projected/0c45a38b-1868-48cc-b0b9-3f70adeeebff-kube-api-access-kwcvq\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:03 crc kubenswrapper[4789]: I0202 21:39:03.232907 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-gjls4" Feb 02 21:39:03 crc kubenswrapper[4789]: I0202 21:39:03.395458 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gjls4-config-xgfd5" event={"ID":"0c45a38b-1868-48cc-b0b9-3f70adeeebff","Type":"ContainerDied","Data":"a143f757064d2a5c7e93f098e58d5dca57fe5f295988436667cc54ab79c46f9c"} Feb 02 21:39:03 crc kubenswrapper[4789]: I0202 21:39:03.396741 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a143f757064d2a5c7e93f098e58d5dca57fe5f295988436667cc54ab79c46f9c" Feb 02 21:39:03 crc kubenswrapper[4789]: I0202 21:39:03.395507 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gjls4-config-xgfd5" Feb 02 21:39:03 crc kubenswrapper[4789]: I0202 21:39:03.410103 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"87f6bccb-d5fc-4868-aca2-734d16898805","Type":"ContainerStarted","Data":"81a1db9e6f95967f7398c2d9e33aef20a4ebd27dac4bde8ca54c1d2cb9e32588"} Feb 02 21:39:03 crc kubenswrapper[4789]: I0202 21:39:03.410155 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"87f6bccb-d5fc-4868-aca2-734d16898805","Type":"ContainerStarted","Data":"292bcc186a04274a666bd4bca60221734a4bf42019919ba532cfde2503636ddb"} Feb 02 21:39:03 crc kubenswrapper[4789]: I0202 21:39:03.410166 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"87f6bccb-d5fc-4868-aca2-734d16898805","Type":"ContainerStarted","Data":"1e6fc4897376cc9d269976f61acf3f0cc76fb66b261f7e18fb05f5f9f439d27d"} Feb 02 21:39:03 crc kubenswrapper[4789]: I0202 21:39:03.410175 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"87f6bccb-d5fc-4868-aca2-734d16898805","Type":"ContainerStarted","Data":"41f66ea30afde5a33d387e2cc7b5c5ed11aef0e66a8afd458c8af299945c2460"} Feb 02 21:39:03 crc kubenswrapper[4789]: I0202 21:39:03.412213 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xzh8f" event={"ID":"a32183f3-d42d-459f-8fd6-268d398cbb82","Type":"ContainerDied","Data":"d6b558f55eed21ba35e39235864762dc371ccd80943f1ab0e2261589ca37349b"} Feb 02 21:39:03 crc kubenswrapper[4789]: I0202 21:39:03.412540 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6b558f55eed21ba35e39235864762dc371ccd80943f1ab0e2261589ca37349b" Feb 02 21:39:03 
crc kubenswrapper[4789]: I0202 21:39:03.412627 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xzh8f" Feb 02 21:39:03 crc kubenswrapper[4789]: I0202 21:39:03.903095 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-gjls4-config-xgfd5"] Feb 02 21:39:03 crc kubenswrapper[4789]: I0202 21:39:03.907999 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-gjls4-config-xgfd5"] Feb 02 21:39:04 crc kubenswrapper[4789]: I0202 21:39:04.433295 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c45a38b-1868-48cc-b0b9-3f70adeeebff" path="/var/lib/kubelet/pods/0c45a38b-1868-48cc-b0b9-3f70adeeebff/volumes" Feb 02 21:39:04 crc kubenswrapper[4789]: I0202 21:39:04.434391 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"87f6bccb-d5fc-4868-aca2-734d16898805","Type":"ContainerStarted","Data":"19152882f397a8eaf801b2e8d8fd5858677ede37b6cfd35d02fe8847efc8de27"} Feb 02 21:39:04 crc kubenswrapper[4789]: I0202 21:39:04.434443 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"87f6bccb-d5fc-4868-aca2-734d16898805","Type":"ContainerStarted","Data":"758668fe2c5ee9470a7c3aa0b9a80c8ff6b3ee015da4b7aab90845bdc8131fbe"} Feb 02 21:39:04 crc kubenswrapper[4789]: I0202 21:39:04.434465 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"87f6bccb-d5fc-4868-aca2-734d16898805","Type":"ContainerStarted","Data":"772b32b4a568764e9d52dc458b0ac79908b73b42aa7c0ab429a6e69ef36ff4ee"} Feb 02 21:39:04 crc kubenswrapper[4789]: I0202 21:39:04.492756 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=19.803991753 podStartE2EDuration="34.492737946s" podCreationTimestamp="2026-02-02 21:38:30 +0000 UTC" firstStartedPulling="2026-02-02 21:38:47.860683348 +0000 UTC m=+1148.155708367" lastFinishedPulling="2026-02-02 21:39:02.549429531 +0000 UTC m=+1162.844454560" observedRunningTime="2026-02-02 21:39:04.490383049 +0000 UTC m=+1164.785408068" watchObservedRunningTime="2026-02-02 21:39:04.492737946 +0000 UTC m=+1164.787762965" Feb 02 21:39:04 crc kubenswrapper[4789]: I0202 21:39:04.824370 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-zsggc"] Feb 02 21:39:04 crc kubenswrapper[4789]: E0202 21:39:04.824886 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c45a38b-1868-48cc-b0b9-3f70adeeebff" containerName="ovn-config" Feb 02 21:39:04 crc kubenswrapper[4789]: I0202 21:39:04.824913 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c45a38b-1868-48cc-b0b9-3f70adeeebff" containerName="ovn-config" Feb 02 21:39:04 crc kubenswrapper[4789]: E0202 21:39:04.824968 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a32183f3-d42d-459f-8fd6-268d398cbb82" containerName="mariadb-account-create-update" Feb 02 21:39:04 crc kubenswrapper[4789]: I0202 21:39:04.824982 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="a32183f3-d42d-459f-8fd6-268d398cbb82" containerName="mariadb-account-create-update" Feb 02 21:39:04 crc kubenswrapper[4789]: I0202 21:39:04.825255 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c45a38b-1868-48cc-b0b9-3f70adeeebff" containerName="ovn-config" Feb 02 21:39:04 crc kubenswrapper[4789]: I0202 21:39:04.825307 4789 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="a32183f3-d42d-459f-8fd6-268d398cbb82" containerName="mariadb-account-create-update" Feb 02 21:39:04 crc kubenswrapper[4789]: I0202 21:39:04.826920 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-zsggc" Feb 02 21:39:04 crc kubenswrapper[4789]: I0202 21:39:04.841160 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 02 21:39:04 crc kubenswrapper[4789]: I0202 21:39:04.841697 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-zsggc"] Feb 02 21:39:04 crc kubenswrapper[4789]: I0202 21:39:04.976337 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6139e5bb-4e3a-45d2-a284-eebacdb72422-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-zsggc\" (UID: \"6139e5bb-4e3a-45d2-a284-eebacdb72422\") " pod="openstack/dnsmasq-dns-764c5664d7-zsggc" Feb 02 21:39:04 crc kubenswrapper[4789]: I0202 21:39:04.976386 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6139e5bb-4e3a-45d2-a284-eebacdb72422-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-zsggc\" (UID: \"6139e5bb-4e3a-45d2-a284-eebacdb72422\") " pod="openstack/dnsmasq-dns-764c5664d7-zsggc" Feb 02 21:39:04 crc kubenswrapper[4789]: I0202 21:39:04.976419 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6139e5bb-4e3a-45d2-a284-eebacdb72422-dns-svc\") pod \"dnsmasq-dns-764c5664d7-zsggc\" (UID: \"6139e5bb-4e3a-45d2-a284-eebacdb72422\") " pod="openstack/dnsmasq-dns-764c5664d7-zsggc" Feb 02 21:39:04 crc kubenswrapper[4789]: I0202 21:39:04.976463 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6139e5bb-4e3a-45d2-a284-eebacdb72422-config\") pod \"dnsmasq-dns-764c5664d7-zsggc\" (UID: \"6139e5bb-4e3a-45d2-a284-eebacdb72422\") " pod="openstack/dnsmasq-dns-764c5664d7-zsggc" Feb 02 21:39:04 crc kubenswrapper[4789]: I0202 21:39:04.976484 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6139e5bb-4e3a-45d2-a284-eebacdb72422-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-zsggc\" (UID: \"6139e5bb-4e3a-45d2-a284-eebacdb72422\") " pod="openstack/dnsmasq-dns-764c5664d7-zsggc" Feb 02 21:39:04 crc kubenswrapper[4789]: I0202 21:39:04.976660 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8cqk\" (UniqueName: \"kubernetes.io/projected/6139e5bb-4e3a-45d2-a284-eebacdb72422-kube-api-access-r8cqk\") pod \"dnsmasq-dns-764c5664d7-zsggc\" (UID: \"6139e5bb-4e3a-45d2-a284-eebacdb72422\") " pod="openstack/dnsmasq-dns-764c5664d7-zsggc" Feb 02 21:39:05 crc kubenswrapper[4789]: I0202 21:39:05.079001 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6139e5bb-4e3a-45d2-a284-eebacdb72422-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-zsggc\" (UID: \"6139e5bb-4e3a-45d2-a284-eebacdb72422\") " pod="openstack/dnsmasq-dns-764c5664d7-zsggc" Feb 02 21:39:05 crc kubenswrapper[4789]: I0202 21:39:05.079087 4789 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6139e5bb-4e3a-45d2-a284-eebacdb72422-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-zsggc\" (UID: \"6139e5bb-4e3a-45d2-a284-eebacdb72422\") " pod="openstack/dnsmasq-dns-764c5664d7-zsggc" Feb 02 21:39:05 crc kubenswrapper[4789]: I0202 21:39:05.079166 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6139e5bb-4e3a-45d2-a284-eebacdb72422-dns-svc\") pod \"dnsmasq-dns-764c5664d7-zsggc\" (UID: \"6139e5bb-4e3a-45d2-a284-eebacdb72422\") " pod="openstack/dnsmasq-dns-764c5664d7-zsggc" Feb 02 21:39:05 crc kubenswrapper[4789]: I0202 21:39:05.079259 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6139e5bb-4e3a-45d2-a284-eebacdb72422-config\") pod \"dnsmasq-dns-764c5664d7-zsggc\" (UID: \"6139e5bb-4e3a-45d2-a284-eebacdb72422\") " pod="openstack/dnsmasq-dns-764c5664d7-zsggc" Feb 02 21:39:05 crc kubenswrapper[4789]: I0202 21:39:05.079303 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6139e5bb-4e3a-45d2-a284-eebacdb72422-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-zsggc\" (UID: \"6139e5bb-4e3a-45d2-a284-eebacdb72422\") " pod="openstack/dnsmasq-dns-764c5664d7-zsggc" Feb 02 21:39:05 crc kubenswrapper[4789]: I0202 21:39:05.080622 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6139e5bb-4e3a-45d2-a284-eebacdb72422-config\") pod \"dnsmasq-dns-764c5664d7-zsggc\" (UID: \"6139e5bb-4e3a-45d2-a284-eebacdb72422\") " pod="openstack/dnsmasq-dns-764c5664d7-zsggc" Feb 02 21:39:05 crc kubenswrapper[4789]: I0202 21:39:05.080800 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6139e5bb-4e3a-45d2-a284-eebacdb72422-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-zsggc\" (UID: \"6139e5bb-4e3a-45d2-a284-eebacdb72422\") " pod="openstack/dnsmasq-dns-764c5664d7-zsggc" Feb 02 21:39:05 crc kubenswrapper[4789]: I0202 21:39:05.080995 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6139e5bb-4e3a-45d2-a284-eebacdb72422-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-zsggc\" (UID: \"6139e5bb-4e3a-45d2-a284-eebacdb72422\") " pod="openstack/dnsmasq-dns-764c5664d7-zsggc" Feb 02 21:39:05 crc kubenswrapper[4789]: I0202 21:39:05.081037 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6139e5bb-4e3a-45d2-a284-eebacdb72422-dns-svc\") pod \"dnsmasq-dns-764c5664d7-zsggc\" (UID: \"6139e5bb-4e3a-45d2-a284-eebacdb72422\") " pod="openstack/dnsmasq-dns-764c5664d7-zsggc" Feb 02 21:39:05 crc kubenswrapper[4789]: I0202 21:39:05.081161 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6139e5bb-4e3a-45d2-a284-eebacdb72422-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-zsggc\" (UID: \"6139e5bb-4e3a-45d2-a284-eebacdb72422\") " pod="openstack/dnsmasq-dns-764c5664d7-zsggc" Feb 02 21:39:05 crc kubenswrapper[4789]: I0202 21:39:05.081442 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8cqk\" (UniqueName: 
\"kubernetes.io/projected/6139e5bb-4e3a-45d2-a284-eebacdb72422-kube-api-access-r8cqk\") pod \"dnsmasq-dns-764c5664d7-zsggc\" (UID: \"6139e5bb-4e3a-45d2-a284-eebacdb72422\") " pod="openstack/dnsmasq-dns-764c5664d7-zsggc" Feb 02 21:39:05 crc kubenswrapper[4789]: I0202 21:39:05.111856 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8cqk\" (UniqueName: \"kubernetes.io/projected/6139e5bb-4e3a-45d2-a284-eebacdb72422-kube-api-access-r8cqk\") pod \"dnsmasq-dns-764c5664d7-zsggc\" (UID: \"6139e5bb-4e3a-45d2-a284-eebacdb72422\") " pod="openstack/dnsmasq-dns-764c5664d7-zsggc" Feb 02 21:39:05 crc kubenswrapper[4789]: I0202 21:39:05.158911 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-zsggc" Feb 02 21:39:05 crc kubenswrapper[4789]: I0202 21:39:05.379920 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 02 21:39:05 crc kubenswrapper[4789]: I0202 21:39:05.655096 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-828hm"] Feb 02 21:39:05 crc kubenswrapper[4789]: I0202 21:39:05.656368 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-828hm" Feb 02 21:39:05 crc kubenswrapper[4789]: I0202 21:39:05.671634 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-828hm"] Feb 02 21:39:05 crc kubenswrapper[4789]: I0202 21:39:05.734762 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 02 21:39:05 crc kubenswrapper[4789]: I0202 21:39:05.764399 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-zsggc"] Feb 02 21:39:05 crc kubenswrapper[4789]: I0202 21:39:05.776545 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-e597-account-create-update-rz7c6"] Feb 02 21:39:05 crc kubenswrapper[4789]: I0202 21:39:05.780085 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-e597-account-create-update-rz7c6" Feb 02 21:39:05 crc kubenswrapper[4789]: I0202 21:39:05.788725 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 02 21:39:05 crc kubenswrapper[4789]: I0202 21:39:05.794393 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee0798dd-aec6-4d0f-b4e9-efde747097cd-operator-scripts\") pod \"cinder-db-create-828hm\" (UID: \"ee0798dd-aec6-4d0f-b4e9-efde747097cd\") " pod="openstack/cinder-db-create-828hm" Feb 02 21:39:05 crc kubenswrapper[4789]: I0202 21:39:05.794475 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vntl2\" (UniqueName: \"kubernetes.io/projected/ee0798dd-aec6-4d0f-b4e9-efde747097cd-kube-api-access-vntl2\") pod \"cinder-db-create-828hm\" (UID: \"ee0798dd-aec6-4d0f-b4e9-efde747097cd\") " pod="openstack/cinder-db-create-828hm" Feb 02 21:39:05 crc kubenswrapper[4789]: I0202 21:39:05.822199 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e597-account-create-update-rz7c6"] Feb 02 21:39:05 crc kubenswrapper[4789]: I0202 21:39:05.896660 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gmzx\" (UniqueName: \"kubernetes.io/projected/a21aa3a8-7aa8-4eda-bc74-1809a4cc774b-kube-api-access-8gmzx\") pod \"cinder-e597-account-create-update-rz7c6\" (UID: \"a21aa3a8-7aa8-4eda-bc74-1809a4cc774b\") " pod="openstack/cinder-e597-account-create-update-rz7c6" Feb 02 21:39:05 crc kubenswrapper[4789]: I0202 21:39:05.896736 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee0798dd-aec6-4d0f-b4e9-efde747097cd-operator-scripts\") pod \"cinder-db-create-828hm\" (UID: \"ee0798dd-aec6-4d0f-b4e9-efde747097cd\") " pod="openstack/cinder-db-create-828hm" Feb 02 21:39:05 crc kubenswrapper[4789]: I0202 21:39:05.896785 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vntl2\" (UniqueName: \"kubernetes.io/projected/ee0798dd-aec6-4d0f-b4e9-efde747097cd-kube-api-access-vntl2\") pod \"cinder-db-create-828hm\" (UID: \"ee0798dd-aec6-4d0f-b4e9-efde747097cd\") " pod="openstack/cinder-db-create-828hm" Feb 02 21:39:05 crc kubenswrapper[4789]: I0202 21:39:05.896812 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a21aa3a8-7aa8-4eda-bc74-1809a4cc774b-operator-scripts\") pod \"cinder-e597-account-create-update-rz7c6\" (UID: \"a21aa3a8-7aa8-4eda-bc74-1809a4cc774b\") " pod="openstack/cinder-e597-account-create-update-rz7c6" Feb 02 21:39:05 crc kubenswrapper[4789]: I0202 21:39:05.897759 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee0798dd-aec6-4d0f-b4e9-efde747097cd-operator-scripts\") pod \"cinder-db-create-828hm\" (UID: \"ee0798dd-aec6-4d0f-b4e9-efde747097cd\") " pod="openstack/cinder-db-create-828hm" Feb 02 21:39:05 crc kubenswrapper[4789]: I0202 21:39:05.925466 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-ghqst"] Feb 02 21:39:05 crc kubenswrapper[4789]: I0202 21:39:05.926372 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-ghqst" Feb 02 21:39:05 crc kubenswrapper[4789]: I0202 21:39:05.960228 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vntl2\" (UniqueName: \"kubernetes.io/projected/ee0798dd-aec6-4d0f-b4e9-efde747097cd-kube-api-access-vntl2\") pod \"cinder-db-create-828hm\" (UID: \"ee0798dd-aec6-4d0f-b4e9-efde747097cd\") " pod="openstack/cinder-db-create-828hm" Feb 02 21:39:05 crc kubenswrapper[4789]: I0202 21:39:05.966269 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-ghqst"] Feb 02 21:39:05 crc kubenswrapper[4789]: I0202 21:39:05.998875 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a21aa3a8-7aa8-4eda-bc74-1809a4cc774b-operator-scripts\") pod \"cinder-e597-account-create-update-rz7c6\" (UID: \"a21aa3a8-7aa8-4eda-bc74-1809a4cc774b\") " pod="openstack/cinder-e597-account-create-update-rz7c6" Feb 02 21:39:05 crc kubenswrapper[4789]: I0202 21:39:05.998959 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntt4h\" (UniqueName: \"kubernetes.io/projected/3addc62a-b5f6-4e9e-9d32-6f4f9d5550b7-kube-api-access-ntt4h\") pod \"barbican-db-create-ghqst\" (UID: \"3addc62a-b5f6-4e9e-9d32-6f4f9d5550b7\") " pod="openstack/barbican-db-create-ghqst" Feb 02 21:39:05 crc kubenswrapper[4789]: I0202 21:39:05.999030 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3addc62a-b5f6-4e9e-9d32-6f4f9d5550b7-operator-scripts\") pod \"barbican-db-create-ghqst\" (UID: \"3addc62a-b5f6-4e9e-9d32-6f4f9d5550b7\") " pod="openstack/barbican-db-create-ghqst" Feb 02 21:39:05 crc kubenswrapper[4789]: I0202 21:39:05.999053 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gmzx\" (UniqueName: \"kubernetes.io/projected/a21aa3a8-7aa8-4eda-bc74-1809a4cc774b-kube-api-access-8gmzx\") pod \"cinder-e597-account-create-update-rz7c6\" (UID: \"a21aa3a8-7aa8-4eda-bc74-1809a4cc774b\") " pod="openstack/cinder-e597-account-create-update-rz7c6" Feb 02 21:39:05 crc kubenswrapper[4789]: I0202 21:39:05.999556 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a21aa3a8-7aa8-4eda-bc74-1809a4cc774b-operator-scripts\") pod \"cinder-e597-account-create-update-rz7c6\" (UID: \"a21aa3a8-7aa8-4eda-bc74-1809a4cc774b\") " pod="openstack/cinder-e597-account-create-update-rz7c6" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.022318 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gmzx\" (UniqueName: \"kubernetes.io/projected/a21aa3a8-7aa8-4eda-bc74-1809a4cc774b-kube-api-access-8gmzx\") pod \"cinder-e597-account-create-update-rz7c6\" (UID: \"a21aa3a8-7aa8-4eda-bc74-1809a4cc774b\") " pod="openstack/cinder-e597-account-create-update-rz7c6" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.035408 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-c430-account-create-update-gbnml"] Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.036503 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-c430-account-create-update-gbnml" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.037028 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-828hm" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.041025 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.042544 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-tw98w"] Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.043695 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-tw98w" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.061147 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-c430-account-create-update-gbnml"] Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.082139 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-tw98w"] Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.100207 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3addc62a-b5f6-4e9e-9d32-6f4f9d5550b7-operator-scripts\") pod \"barbican-db-create-ghqst\" (UID: \"3addc62a-b5f6-4e9e-9d32-6f4f9d5550b7\") " pod="openstack/barbican-db-create-ghqst" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.100287 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23e4386a-307f-4ab1-bac5-fb2260dff5ec-operator-scripts\") pod \"neutron-db-create-tw98w\" (UID: \"23e4386a-307f-4ab1-bac5-fb2260dff5ec\") " pod="openstack/neutron-db-create-tw98w" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.100350 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8592ef51-732b-4428-adcb-1da5d2c7b2e8-operator-scripts\") pod \"barbican-c430-account-create-update-gbnml\" (UID: \"8592ef51-732b-4428-adcb-1da5d2c7b2e8\") " pod="openstack/barbican-c430-account-create-update-gbnml" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.100382 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntt4h\" (UniqueName: \"kubernetes.io/projected/3addc62a-b5f6-4e9e-9d32-6f4f9d5550b7-kube-api-access-ntt4h\") pod \"barbican-db-create-ghqst\" (UID: \"3addc62a-b5f6-4e9e-9d32-6f4f9d5550b7\") " pod="openstack/barbican-db-create-ghqst" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.100406 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ntzv\" (UniqueName: \"kubernetes.io/projected/23e4386a-307f-4ab1-bac5-fb2260dff5ec-kube-api-access-8ntzv\") pod \"neutron-db-create-tw98w\" (UID: \"23e4386a-307f-4ab1-bac5-fb2260dff5ec\") " pod="openstack/neutron-db-create-tw98w" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.100424 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccwqj\" (UniqueName: \"kubernetes.io/projected/8592ef51-732b-4428-adcb-1da5d2c7b2e8-kube-api-access-ccwqj\") pod \"barbican-c430-account-create-update-gbnml\" (UID: \"8592ef51-732b-4428-adcb-1da5d2c7b2e8\") " 
pod="openstack/barbican-c430-account-create-update-gbnml" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.101119 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3addc62a-b5f6-4e9e-9d32-6f4f9d5550b7-operator-scripts\") pod \"barbican-db-create-ghqst\" (UID: \"3addc62a-b5f6-4e9e-9d32-6f4f9d5550b7\") " pod="openstack/barbican-db-create-ghqst" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.119540 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-rbrvz"] Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.120646 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-rbrvz" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.124766 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2xd6t" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.124959 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.125079 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.125137 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.128023 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-rbrvz"] Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.136744 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntt4h\" (UniqueName: \"kubernetes.io/projected/3addc62a-b5f6-4e9e-9d32-6f4f9d5550b7-kube-api-access-ntt4h\") pod \"barbican-db-create-ghqst\" (UID: \"3addc62a-b5f6-4e9e-9d32-6f4f9d5550b7\") " pod="openstack/barbican-db-create-ghqst" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.201815 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ntzv\" (UniqueName: \"kubernetes.io/projected/23e4386a-307f-4ab1-bac5-fb2260dff5ec-kube-api-access-8ntzv\") pod \"neutron-db-create-tw98w\" (UID: \"23e4386a-307f-4ab1-bac5-fb2260dff5ec\") " pod="openstack/neutron-db-create-tw98w" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.201848 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccwqj\" (UniqueName: \"kubernetes.io/projected/8592ef51-732b-4428-adcb-1da5d2c7b2e8-kube-api-access-ccwqj\") pod \"barbican-c430-account-create-update-gbnml\" (UID: \"8592ef51-732b-4428-adcb-1da5d2c7b2e8\") " pod="openstack/barbican-c430-account-create-update-gbnml" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.201911 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s52g\" (UniqueName: \"kubernetes.io/projected/509b2067-171f-4e99-86fa-12cd19ff40ee-kube-api-access-4s52g\") pod \"keystone-db-sync-rbrvz\" (UID: \"509b2067-171f-4e99-86fa-12cd19ff40ee\") " pod="openstack/keystone-db-sync-rbrvz" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.201942 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509b2067-171f-4e99-86fa-12cd19ff40ee-combined-ca-bundle\") pod \"keystone-db-sync-rbrvz\" (UID: 
\"509b2067-171f-4e99-86fa-12cd19ff40ee\") " pod="openstack/keystone-db-sync-rbrvz" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.201969 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23e4386a-307f-4ab1-bac5-fb2260dff5ec-operator-scripts\") pod \"neutron-db-create-tw98w\" (UID: \"23e4386a-307f-4ab1-bac5-fb2260dff5ec\") " pod="openstack/neutron-db-create-tw98w" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.202013 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/509b2067-171f-4e99-86fa-12cd19ff40ee-config-data\") pod \"keystone-db-sync-rbrvz\" (UID: \"509b2067-171f-4e99-86fa-12cd19ff40ee\") " pod="openstack/keystone-db-sync-rbrvz" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.202055 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8592ef51-732b-4428-adcb-1da5d2c7b2e8-operator-scripts\") pod \"barbican-c430-account-create-update-gbnml\" (UID: \"8592ef51-732b-4428-adcb-1da5d2c7b2e8\") " pod="openstack/barbican-c430-account-create-update-gbnml" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.202817 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8592ef51-732b-4428-adcb-1da5d2c7b2e8-operator-scripts\") pod \"barbican-c430-account-create-update-gbnml\" (UID: \"8592ef51-732b-4428-adcb-1da5d2c7b2e8\") " pod="openstack/barbican-c430-account-create-update-gbnml" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.204095 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23e4386a-307f-4ab1-bac5-fb2260dff5ec-operator-scripts\") pod \"neutron-db-create-tw98w\" (UID: \"23e4386a-307f-4ab1-bac5-fb2260dff5ec\") " pod="openstack/neutron-db-create-tw98w" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.228933 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccwqj\" (UniqueName: \"kubernetes.io/projected/8592ef51-732b-4428-adcb-1da5d2c7b2e8-kube-api-access-ccwqj\") pod \"barbican-c430-account-create-update-gbnml\" (UID: \"8592ef51-732b-4428-adcb-1da5d2c7b2e8\") " pod="openstack/barbican-c430-account-create-update-gbnml" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.232197 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-9127-account-create-update-jb8z8"] Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.237231 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e597-account-create-update-rz7c6" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.242689 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9127-account-create-update-jb8z8"] Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.242777 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-9127-account-create-update-jb8z8" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.247638 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ntzv\" (UniqueName: \"kubernetes.io/projected/23e4386a-307f-4ab1-bac5-fb2260dff5ec-kube-api-access-8ntzv\") pod \"neutron-db-create-tw98w\" (UID: \"23e4386a-307f-4ab1-bac5-fb2260dff5ec\") " pod="openstack/neutron-db-create-tw98w" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.252918 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-ghqst" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.253098 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.303751 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s52g\" (UniqueName: \"kubernetes.io/projected/509b2067-171f-4e99-86fa-12cd19ff40ee-kube-api-access-4s52g\") pod \"keystone-db-sync-rbrvz\" (UID: \"509b2067-171f-4e99-86fa-12cd19ff40ee\") " pod="openstack/keystone-db-sync-rbrvz" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.304082 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509b2067-171f-4e99-86fa-12cd19ff40ee-combined-ca-bundle\") pod \"keystone-db-sync-rbrvz\" (UID: \"509b2067-171f-4e99-86fa-12cd19ff40ee\") " pod="openstack/keystone-db-sync-rbrvz" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.304143 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbvhs\" (UniqueName: \"kubernetes.io/projected/cc5c3e3f-2b45-4376-9b5b-cc92c2d4837b-kube-api-access-qbvhs\") pod \"neutron-9127-account-create-update-jb8z8\" (UID: \"cc5c3e3f-2b45-4376-9b5b-cc92c2d4837b\") " pod="openstack/neutron-9127-account-create-update-jb8z8" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.304171 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/509b2067-171f-4e99-86fa-12cd19ff40ee-config-data\") pod \"keystone-db-sync-rbrvz\" (UID: \"509b2067-171f-4e99-86fa-12cd19ff40ee\") " pod="openstack/keystone-db-sync-rbrvz" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.304251 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc5c3e3f-2b45-4376-9b5b-cc92c2d4837b-operator-scripts\") pod \"neutron-9127-account-create-update-jb8z8\" (UID: \"cc5c3e3f-2b45-4376-9b5b-cc92c2d4837b\") " pod="openstack/neutron-9127-account-create-update-jb8z8" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.308013 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509b2067-171f-4e99-86fa-12cd19ff40ee-combined-ca-bundle\") pod \"keystone-db-sync-rbrvz\" (UID: \"509b2067-171f-4e99-86fa-12cd19ff40ee\") " pod="openstack/keystone-db-sync-rbrvz" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.311814 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/509b2067-171f-4e99-86fa-12cd19ff40ee-config-data\") pod \"keystone-db-sync-rbrvz\" (UID: \"509b2067-171f-4e99-86fa-12cd19ff40ee\") " 
pod="openstack/keystone-db-sync-rbrvz" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.338330 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-828hm"] Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.338426 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s52g\" (UniqueName: \"kubernetes.io/projected/509b2067-171f-4e99-86fa-12cd19ff40ee-kube-api-access-4s52g\") pod \"keystone-db-sync-rbrvz\" (UID: \"509b2067-171f-4e99-86fa-12cd19ff40ee\") " pod="openstack/keystone-db-sync-rbrvz" Feb 02 21:39:06 crc kubenswrapper[4789]: W0202 21:39:06.355083 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee0798dd_aec6_4d0f_b4e9_efde747097cd.slice/crio-538021f3ed1ab5b8b8178d6550818cc00c9f59d0ab5a0faa1b403b1ee679e02e WatchSource:0}: Error finding container 538021f3ed1ab5b8b8178d6550818cc00c9f59d0ab5a0faa1b403b1ee679e02e: Status 404 returned error can't find the container with id 538021f3ed1ab5b8b8178d6550818cc00c9f59d0ab5a0faa1b403b1ee679e02e Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.370174 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-c430-account-create-update-gbnml" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.382200 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-tw98w" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.405565 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbvhs\" (UniqueName: \"kubernetes.io/projected/cc5c3e3f-2b45-4376-9b5b-cc92c2d4837b-kube-api-access-qbvhs\") pod \"neutron-9127-account-create-update-jb8z8\" (UID: \"cc5c3e3f-2b45-4376-9b5b-cc92c2d4837b\") " pod="openstack/neutron-9127-account-create-update-jb8z8" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.405681 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc5c3e3f-2b45-4376-9b5b-cc92c2d4837b-operator-scripts\") pod \"neutron-9127-account-create-update-jb8z8\" (UID: \"cc5c3e3f-2b45-4376-9b5b-cc92c2d4837b\") " pod="openstack/neutron-9127-account-create-update-jb8z8" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.406572 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc5c3e3f-2b45-4376-9b5b-cc92c2d4837b-operator-scripts\") pod \"neutron-9127-account-create-update-jb8z8\" (UID: \"cc5c3e3f-2b45-4376-9b5b-cc92c2d4837b\") " pod="openstack/neutron-9127-account-create-update-jb8z8" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.426647 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbvhs\" (UniqueName: \"kubernetes.io/projected/cc5c3e3f-2b45-4376-9b5b-cc92c2d4837b-kube-api-access-qbvhs\") pod \"neutron-9127-account-create-update-jb8z8\" (UID: \"cc5c3e3f-2b45-4376-9b5b-cc92c2d4837b\") " pod="openstack/neutron-9127-account-create-update-jb8z8" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.475977 4789 generic.go:334] "Generic (PLEG): container finished" podID="6139e5bb-4e3a-45d2-a284-eebacdb72422" containerID="6f7f368af8c9fc46ec4f500b6fdcd99c54aa4139c8dee4524fa4194556dbd319" exitCode=0 Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.476099 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-764c5664d7-zsggc" event={"ID":"6139e5bb-4e3a-45d2-a284-eebacdb72422","Type":"ContainerDied","Data":"6f7f368af8c9fc46ec4f500b6fdcd99c54aa4139c8dee4524fa4194556dbd319"} Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.476128 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-zsggc" event={"ID":"6139e5bb-4e3a-45d2-a284-eebacdb72422","Type":"ContainerStarted","Data":"f91e146ce6254fac34d7590746ac108e3550d977f178f7760cb10c0581a03a04"} Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.488520 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-828hm" event={"ID":"ee0798dd-aec6-4d0f-b4e9-efde747097cd","Type":"ContainerStarted","Data":"538021f3ed1ab5b8b8178d6550818cc00c9f59d0ab5a0faa1b403b1ee679e02e"} Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.491634 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-rbrvz" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.561152 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9127-account-create-update-jb8z8" Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.675748 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-ghqst"] Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.727395 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e597-account-create-update-rz7c6"] Feb 02 21:39:06 crc kubenswrapper[4789]: I0202 21:39:06.992713 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-c430-account-create-update-gbnml"] Feb 02 21:39:07 crc kubenswrapper[4789]: I0202 21:39:07.015489 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-tw98w"] Feb 02 21:39:07 crc kubenswrapper[4789]: I0202 21:39:07.241960 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-rbrvz"] Feb 02 21:39:07 crc kubenswrapper[4789]: I0202 21:39:07.355783 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9127-account-create-update-jb8z8"] Feb 02 21:39:07 crc kubenswrapper[4789]: I0202 21:39:07.497710 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c430-account-create-update-gbnml" event={"ID":"8592ef51-732b-4428-adcb-1da5d2c7b2e8","Type":"ContainerStarted","Data":"c258b245ff5663d640558115f7c3d15a377cce4b941559f3e648a918e8a7b996"} Feb 02 21:39:07 crc kubenswrapper[4789]: I0202 21:39:07.498025 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c430-account-create-update-gbnml" event={"ID":"8592ef51-732b-4428-adcb-1da5d2c7b2e8","Type":"ContainerStarted","Data":"0f5961715f45f2caaec926699b0c7dbdaeb51e010ff87f09afb34d17267dd9b2"} Feb 02 21:39:07 crc kubenswrapper[4789]: I0202 21:39:07.499368 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9127-account-create-update-jb8z8" event={"ID":"cc5c3e3f-2b45-4376-9b5b-cc92c2d4837b","Type":"ContainerStarted","Data":"28ae4ce8143169b3cdc70332b382b5abe55ce1bfb5cfb0ab898beeaadfc9f864"} Feb 02 21:39:07 crc kubenswrapper[4789]: I0202 21:39:07.499411 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9127-account-create-update-jb8z8" event={"ID":"cc5c3e3f-2b45-4376-9b5b-cc92c2d4837b","Type":"ContainerStarted","Data":"41a7e6bc180c1c058bdf0076ce081ddc5f171ed3935d7c29b16cafa4b2fdf5c1"} Feb 02 21:39:07 crc kubenswrapper[4789]: I0202 
21:39:07.501999 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-tw98w" event={"ID":"23e4386a-307f-4ab1-bac5-fb2260dff5ec","Type":"ContainerStarted","Data":"24049c2be1d4b8b3da9929ca79be7a24c066039c1f9c521ceac6f6fdb404fc0d"} Feb 02 21:39:07 crc kubenswrapper[4789]: I0202 21:39:07.502039 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-tw98w" event={"ID":"23e4386a-307f-4ab1-bac5-fb2260dff5ec","Type":"ContainerStarted","Data":"860ab300bd2405fb5389214e79cf9e3799c875be41b5e5402f749afdab4546e2"} Feb 02 21:39:07 crc kubenswrapper[4789]: I0202 21:39:07.506967 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e597-account-create-update-rz7c6" event={"ID":"a21aa3a8-7aa8-4eda-bc74-1809a4cc774b","Type":"ContainerStarted","Data":"a02144326266b78df86578a9882f49b5733c0fda172e3dbbc76c0dc7873e9df6"} Feb 02 21:39:07 crc kubenswrapper[4789]: I0202 21:39:07.506996 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e597-account-create-update-rz7c6" event={"ID":"a21aa3a8-7aa8-4eda-bc74-1809a4cc774b","Type":"ContainerStarted","Data":"3a409da78f98f6b2c76f4b639f4a4c0b5e05e7f5dd014b824ca6f1dd7313ab27"} Feb 02 21:39:07 crc kubenswrapper[4789]: I0202 21:39:07.510396 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-zsggc" event={"ID":"6139e5bb-4e3a-45d2-a284-eebacdb72422","Type":"ContainerStarted","Data":"f94b762b6357cdf512aec6f1d903135715c5b8924699bd1ffc4b5669478b106f"} Feb 02 21:39:07 crc kubenswrapper[4789]: I0202 21:39:07.510732 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-zsggc" Feb 02 21:39:07 crc kubenswrapper[4789]: I0202 21:39:07.512796 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rbrvz" event={"ID":"509b2067-171f-4e99-86fa-12cd19ff40ee","Type":"ContainerStarted","Data":"fd230e3ac0620fd65d96c10516ba69c1f04fa11dc5476bc1ba6f1a08422d25ab"} Feb 02 21:39:07 crc kubenswrapper[4789]: I0202 21:39:07.517400 4789 generic.go:334] "Generic (PLEG): container finished" podID="3addc62a-b5f6-4e9e-9d32-6f4f9d5550b7" containerID="bca07d9fc6526f1efc5fac2d1c5f5e2995acb5069d6fe6aaa00814d05a0d8296" exitCode=0 Feb 02 21:39:07 crc kubenswrapper[4789]: I0202 21:39:07.517497 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-ghqst" event={"ID":"3addc62a-b5f6-4e9e-9d32-6f4f9d5550b7","Type":"ContainerDied","Data":"bca07d9fc6526f1efc5fac2d1c5f5e2995acb5069d6fe6aaa00814d05a0d8296"} Feb 02 21:39:07 crc kubenswrapper[4789]: I0202 21:39:07.517524 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-ghqst" event={"ID":"3addc62a-b5f6-4e9e-9d32-6f4f9d5550b7","Type":"ContainerStarted","Data":"922412e79591afe1d046b4da9fc989993e984a5fb3e26aac9d796895e8f5af81"} Feb 02 21:39:07 crc kubenswrapper[4789]: I0202 21:39:07.517895 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-c430-account-create-update-gbnml" podStartSLOduration=1.517882897 podStartE2EDuration="1.517882897s" podCreationTimestamp="2026-02-02 21:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:39:07.511845657 +0000 UTC m=+1167.806870676" watchObservedRunningTime="2026-02-02 21:39:07.517882897 +0000 UTC m=+1167.812907916" Feb 02 21:39:07 crc kubenswrapper[4789]: I0202 21:39:07.521276 4789 
generic.go:334] "Generic (PLEG): container finished" podID="ee0798dd-aec6-4d0f-b4e9-efde747097cd" containerID="53541170e44fa63e8ef02609a4f413138b680f2efd72af45a332e10060e0d01f" exitCode=0 Feb 02 21:39:07 crc kubenswrapper[4789]: I0202 21:39:07.521322 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-828hm" event={"ID":"ee0798dd-aec6-4d0f-b4e9-efde747097cd","Type":"ContainerDied","Data":"53541170e44fa63e8ef02609a4f413138b680f2efd72af45a332e10060e0d01f"} Feb 02 21:39:07 crc kubenswrapper[4789]: I0202 21:39:07.545788 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-9127-account-create-update-jb8z8" podStartSLOduration=1.545770864 podStartE2EDuration="1.545770864s" podCreationTimestamp="2026-02-02 21:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:39:07.52614355 +0000 UTC m=+1167.821168569" watchObservedRunningTime="2026-02-02 21:39:07.545770864 +0000 UTC m=+1167.840795883" Feb 02 21:39:07 crc kubenswrapper[4789]: I0202 21:39:07.548770 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-e597-account-create-update-rz7c6" podStartSLOduration=2.548763439 podStartE2EDuration="2.548763439s" podCreationTimestamp="2026-02-02 21:39:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:39:07.540888867 +0000 UTC m=+1167.835913886" watchObservedRunningTime="2026-02-02 21:39:07.548763439 +0000 UTC m=+1167.843788458" Feb 02 21:39:07 crc kubenswrapper[4789]: I0202 21:39:07.557338 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-tw98w" podStartSLOduration=1.5573219200000001 podStartE2EDuration="1.55732192s" podCreationTimestamp="2026-02-02 21:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:39:07.554446109 +0000 UTC m=+1167.849471128" watchObservedRunningTime="2026-02-02 21:39:07.55732192 +0000 UTC m=+1167.852346939" Feb 02 21:39:07 crc kubenswrapper[4789]: I0202 21:39:07.574178 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-zsggc" podStartSLOduration=3.574161906 podStartE2EDuration="3.574161906s" podCreationTimestamp="2026-02-02 21:39:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:39:07.571232293 +0000 UTC m=+1167.866257312" watchObservedRunningTime="2026-02-02 21:39:07.574161906 +0000 UTC m=+1167.869186925" Feb 02 21:39:07 crc kubenswrapper[4789]: E0202 21:39:07.597150 4789 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3addc62a_b5f6_4e9e_9d32_6f4f9d5550b7.slice/crio-conmon-bca07d9fc6526f1efc5fac2d1c5f5e2995acb5069d6fe6aaa00814d05a0d8296.scope\": RecentStats: unable to find data in memory cache]" Feb 02 21:39:08 crc kubenswrapper[4789]: I0202 21:39:08.532213 4789 generic.go:334] "Generic (PLEG): container finished" podID="8592ef51-732b-4428-adcb-1da5d2c7b2e8" containerID="c258b245ff5663d640558115f7c3d15a377cce4b941559f3e648a918e8a7b996" exitCode=0 Feb 02 21:39:08 crc kubenswrapper[4789]: I0202 21:39:08.532389 4789 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c430-account-create-update-gbnml" event={"ID":"8592ef51-732b-4428-adcb-1da5d2c7b2e8","Type":"ContainerDied","Data":"c258b245ff5663d640558115f7c3d15a377cce4b941559f3e648a918e8a7b996"} Feb 02 21:39:08 crc kubenswrapper[4789]: I0202 21:39:08.534799 4789 generic.go:334] "Generic (PLEG): container finished" podID="cc5c3e3f-2b45-4376-9b5b-cc92c2d4837b" containerID="28ae4ce8143169b3cdc70332b382b5abe55ce1bfb5cfb0ab898beeaadfc9f864" exitCode=0 Feb 02 21:39:08 crc kubenswrapper[4789]: I0202 21:39:08.534837 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9127-account-create-update-jb8z8" event={"ID":"cc5c3e3f-2b45-4376-9b5b-cc92c2d4837b","Type":"ContainerDied","Data":"28ae4ce8143169b3cdc70332b382b5abe55ce1bfb5cfb0ab898beeaadfc9f864"} Feb 02 21:39:08 crc kubenswrapper[4789]: I0202 21:39:08.537034 4789 generic.go:334] "Generic (PLEG): container finished" podID="23e4386a-307f-4ab1-bac5-fb2260dff5ec" containerID="24049c2be1d4b8b3da9929ca79be7a24c066039c1f9c521ceac6f6fdb404fc0d" exitCode=0 Feb 02 21:39:08 crc kubenswrapper[4789]: I0202 21:39:08.537192 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-tw98w" event={"ID":"23e4386a-307f-4ab1-bac5-fb2260dff5ec","Type":"ContainerDied","Data":"24049c2be1d4b8b3da9929ca79be7a24c066039c1f9c521ceac6f6fdb404fc0d"} Feb 02 21:39:08 crc kubenswrapper[4789]: I0202 21:39:08.538544 4789 generic.go:334] "Generic (PLEG): container finished" podID="a21aa3a8-7aa8-4eda-bc74-1809a4cc774b" containerID="a02144326266b78df86578a9882f49b5733c0fda172e3dbbc76c0dc7873e9df6" exitCode=0 Feb 02 21:39:08 crc kubenswrapper[4789]: I0202 21:39:08.538681 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e597-account-create-update-rz7c6" event={"ID":"a21aa3a8-7aa8-4eda-bc74-1809a4cc774b","Type":"ContainerDied","Data":"a02144326266b78df86578a9882f49b5733c0fda172e3dbbc76c0dc7873e9df6"} Feb 02 21:39:08 crc kubenswrapper[4789]: I0202 21:39:08.957317 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-ghqst" Feb 02 21:39:08 crc kubenswrapper[4789]: I0202 21:39:08.962112 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-828hm" Feb 02 21:39:08 crc kubenswrapper[4789]: I0202 21:39:08.997545 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee0798dd-aec6-4d0f-b4e9-efde747097cd-operator-scripts\") pod \"ee0798dd-aec6-4d0f-b4e9-efde747097cd\" (UID: \"ee0798dd-aec6-4d0f-b4e9-efde747097cd\") " Feb 02 21:39:08 crc kubenswrapper[4789]: I0202 21:39:08.997686 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vntl2\" (UniqueName: \"kubernetes.io/projected/ee0798dd-aec6-4d0f-b4e9-efde747097cd-kube-api-access-vntl2\") pod \"ee0798dd-aec6-4d0f-b4e9-efde747097cd\" (UID: \"ee0798dd-aec6-4d0f-b4e9-efde747097cd\") " Feb 02 21:39:08 crc kubenswrapper[4789]: I0202 21:39:08.997797 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3addc62a-b5f6-4e9e-9d32-6f4f9d5550b7-operator-scripts\") pod \"3addc62a-b5f6-4e9e-9d32-6f4f9d5550b7\" (UID: \"3addc62a-b5f6-4e9e-9d32-6f4f9d5550b7\") " Feb 02 21:39:08 crc kubenswrapper[4789]: I0202 21:39:08.998052 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntt4h\" (UniqueName: \"kubernetes.io/projected/3addc62a-b5f6-4e9e-9d32-6f4f9d5550b7-kube-api-access-ntt4h\") pod \"3addc62a-b5f6-4e9e-9d32-6f4f9d5550b7\" (UID: \"3addc62a-b5f6-4e9e-9d32-6f4f9d5550b7\") " Feb 02 21:39:08 crc kubenswrapper[4789]: I0202 21:39:08.998223 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3addc62a-b5f6-4e9e-9d32-6f4f9d5550b7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3addc62a-b5f6-4e9e-9d32-6f4f9d5550b7" (UID: "3addc62a-b5f6-4e9e-9d32-6f4f9d5550b7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:39:08 crc kubenswrapper[4789]: I0202 21:39:08.998339 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee0798dd-aec6-4d0f-b4e9-efde747097cd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ee0798dd-aec6-4d0f-b4e9-efde747097cd" (UID: "ee0798dd-aec6-4d0f-b4e9-efde747097cd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:39:08 crc kubenswrapper[4789]: I0202 21:39:08.998879 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3addc62a-b5f6-4e9e-9d32-6f4f9d5550b7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:08 crc kubenswrapper[4789]: I0202 21:39:08.999302 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee0798dd-aec6-4d0f-b4e9-efde747097cd-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:09 crc kubenswrapper[4789]: I0202 21:39:09.004314 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee0798dd-aec6-4d0f-b4e9-efde747097cd-kube-api-access-vntl2" (OuterVolumeSpecName: "kube-api-access-vntl2") pod "ee0798dd-aec6-4d0f-b4e9-efde747097cd" (UID: "ee0798dd-aec6-4d0f-b4e9-efde747097cd"). InnerVolumeSpecName "kube-api-access-vntl2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:39:09 crc kubenswrapper[4789]: I0202 21:39:09.009012 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3addc62a-b5f6-4e9e-9d32-6f4f9d5550b7-kube-api-access-ntt4h" (OuterVolumeSpecName: "kube-api-access-ntt4h") pod "3addc62a-b5f6-4e9e-9d32-6f4f9d5550b7" (UID: "3addc62a-b5f6-4e9e-9d32-6f4f9d5550b7"). InnerVolumeSpecName "kube-api-access-ntt4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:39:09 crc kubenswrapper[4789]: I0202 21:39:09.101448 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntt4h\" (UniqueName: \"kubernetes.io/projected/3addc62a-b5f6-4e9e-9d32-6f4f9d5550b7-kube-api-access-ntt4h\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:09 crc kubenswrapper[4789]: I0202 21:39:09.101507 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vntl2\" (UniqueName: \"kubernetes.io/projected/ee0798dd-aec6-4d0f-b4e9-efde747097cd-kube-api-access-vntl2\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:09 crc kubenswrapper[4789]: I0202 21:39:09.552622 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-ghqst" event={"ID":"3addc62a-b5f6-4e9e-9d32-6f4f9d5550b7","Type":"ContainerDied","Data":"922412e79591afe1d046b4da9fc989993e984a5fb3e26aac9d796895e8f5af81"} Feb 02 21:39:09 crc kubenswrapper[4789]: I0202 21:39:09.552683 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="922412e79591afe1d046b4da9fc989993e984a5fb3e26aac9d796895e8f5af81" Feb 02 21:39:09 crc kubenswrapper[4789]: I0202 21:39:09.552733 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-ghqst" Feb 02 21:39:09 crc kubenswrapper[4789]: I0202 21:39:09.565958 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-828hm" event={"ID":"ee0798dd-aec6-4d0f-b4e9-efde747097cd","Type":"ContainerDied","Data":"538021f3ed1ab5b8b8178d6550818cc00c9f59d0ab5a0faa1b403b1ee679e02e"} Feb 02 21:39:09 crc kubenswrapper[4789]: I0202 21:39:09.566014 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-828hm" Feb 02 21:39:09 crc kubenswrapper[4789]: I0202 21:39:09.566041 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="538021f3ed1ab5b8b8178d6550818cc00c9f59d0ab5a0faa1b403b1ee679e02e" Feb 02 21:39:09 crc kubenswrapper[4789]: I0202 21:39:09.570842 4789 generic.go:334] "Generic (PLEG): container finished" podID="c5c4da6b-2b71-4018-90ce-d569b9f03cfd" containerID="6a85531001078e7b0623762a1ac1ae36a06d2b03afa2a75de0c40ceaef0cf5ef" exitCode=0 Feb 02 21:39:09 crc kubenswrapper[4789]: I0202 21:39:09.570947 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-sxqwc" event={"ID":"c5c4da6b-2b71-4018-90ce-d569b9f03cfd","Type":"ContainerDied","Data":"6a85531001078e7b0623762a1ac1ae36a06d2b03afa2a75de0c40ceaef0cf5ef"} Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.157650 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-c430-account-create-update-gbnml" Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.229647 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-sxqwc" Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.234867 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-tw98w" Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.253224 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ntzv\" (UniqueName: \"kubernetes.io/projected/23e4386a-307f-4ab1-bac5-fb2260dff5ec-kube-api-access-8ntzv\") pod \"23e4386a-307f-4ab1-bac5-fb2260dff5ec\" (UID: \"23e4386a-307f-4ab1-bac5-fb2260dff5ec\") " Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.253273 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdn5j\" (UniqueName: \"kubernetes.io/projected/c5c4da6b-2b71-4018-90ce-d569b9f03cfd-kube-api-access-hdn5j\") pod \"c5c4da6b-2b71-4018-90ce-d569b9f03cfd\" (UID: \"c5c4da6b-2b71-4018-90ce-d569b9f03cfd\") " Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.253309 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5c4da6b-2b71-4018-90ce-d569b9f03cfd-combined-ca-bundle\") pod \"c5c4da6b-2b71-4018-90ce-d569b9f03cfd\" (UID: \"c5c4da6b-2b71-4018-90ce-d569b9f03cfd\") " Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.253341 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c5c4da6b-2b71-4018-90ce-d569b9f03cfd-db-sync-config-data\") pod \"c5c4da6b-2b71-4018-90ce-d569b9f03cfd\" (UID: \"c5c4da6b-2b71-4018-90ce-d569b9f03cfd\") " Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.253445 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23e4386a-307f-4ab1-bac5-fb2260dff5ec-operator-scripts\") pod \"23e4386a-307f-4ab1-bac5-fb2260dff5ec\" (UID: \"23e4386a-307f-4ab1-bac5-fb2260dff5ec\") " Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.253495 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccwqj\" (UniqueName: \"kubernetes.io/projected/8592ef51-732b-4428-adcb-1da5d2c7b2e8-kube-api-access-ccwqj\") pod \"8592ef51-732b-4428-adcb-1da5d2c7b2e8\" (UID: \"8592ef51-732b-4428-adcb-1da5d2c7b2e8\") " Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.253614 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5c4da6b-2b71-4018-90ce-d569b9f03cfd-config-data\") pod \"c5c4da6b-2b71-4018-90ce-d569b9f03cfd\" (UID: \"c5c4da6b-2b71-4018-90ce-d569b9f03cfd\") " Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.253648 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8592ef51-732b-4428-adcb-1da5d2c7b2e8-operator-scripts\") pod \"8592ef51-732b-4428-adcb-1da5d2c7b2e8\" (UID: \"8592ef51-732b-4428-adcb-1da5d2c7b2e8\") " Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.254377 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8592ef51-732b-4428-adcb-1da5d2c7b2e8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8592ef51-732b-4428-adcb-1da5d2c7b2e8" (UID: "8592ef51-732b-4428-adcb-1da5d2c7b2e8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.254895 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23e4386a-307f-4ab1-bac5-fb2260dff5ec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "23e4386a-307f-4ab1-bac5-fb2260dff5ec" (UID: "23e4386a-307f-4ab1-bac5-fb2260dff5ec"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.258200 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5c4da6b-2b71-4018-90ce-d569b9f03cfd-kube-api-access-hdn5j" (OuterVolumeSpecName: "kube-api-access-hdn5j") pod "c5c4da6b-2b71-4018-90ce-d569b9f03cfd" (UID: "c5c4da6b-2b71-4018-90ce-d569b9f03cfd"). InnerVolumeSpecName "kube-api-access-hdn5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.262846 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5c4da6b-2b71-4018-90ce-d569b9f03cfd-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c5c4da6b-2b71-4018-90ce-d569b9f03cfd" (UID: "c5c4da6b-2b71-4018-90ce-d569b9f03cfd"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.263419 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e597-account-create-update-rz7c6" Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.269045 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8592ef51-732b-4428-adcb-1da5d2c7b2e8-kube-api-access-ccwqj" (OuterVolumeSpecName: "kube-api-access-ccwqj") pod "8592ef51-732b-4428-adcb-1da5d2c7b2e8" (UID: "8592ef51-732b-4428-adcb-1da5d2c7b2e8"). InnerVolumeSpecName "kube-api-access-ccwqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.276812 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23e4386a-307f-4ab1-bac5-fb2260dff5ec-kube-api-access-8ntzv" (OuterVolumeSpecName: "kube-api-access-8ntzv") pod "23e4386a-307f-4ab1-bac5-fb2260dff5ec" (UID: "23e4386a-307f-4ab1-bac5-fb2260dff5ec"). InnerVolumeSpecName "kube-api-access-8ntzv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.302626 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5c4da6b-2b71-4018-90ce-d569b9f03cfd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c5c4da6b-2b71-4018-90ce-d569b9f03cfd" (UID: "c5c4da6b-2b71-4018-90ce-d569b9f03cfd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.323320 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9127-account-create-update-jb8z8" Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.326262 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5c4da6b-2b71-4018-90ce-d569b9f03cfd-config-data" (OuterVolumeSpecName: "config-data") pod "c5c4da6b-2b71-4018-90ce-d569b9f03cfd" (UID: "c5c4da6b-2b71-4018-90ce-d569b9f03cfd"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.354629 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc5c3e3f-2b45-4376-9b5b-cc92c2d4837b-operator-scripts\") pod \"cc5c3e3f-2b45-4376-9b5b-cc92c2d4837b\" (UID: \"cc5c3e3f-2b45-4376-9b5b-cc92c2d4837b\") " Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.354703 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gmzx\" (UniqueName: \"kubernetes.io/projected/a21aa3a8-7aa8-4eda-bc74-1809a4cc774b-kube-api-access-8gmzx\") pod \"a21aa3a8-7aa8-4eda-bc74-1809a4cc774b\" (UID: \"a21aa3a8-7aa8-4eda-bc74-1809a4cc774b\") " Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.354724 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbvhs\" (UniqueName: \"kubernetes.io/projected/cc5c3e3f-2b45-4376-9b5b-cc92c2d4837b-kube-api-access-qbvhs\") pod \"cc5c3e3f-2b45-4376-9b5b-cc92c2d4837b\" (UID: \"cc5c3e3f-2b45-4376-9b5b-cc92c2d4837b\") " Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.354765 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a21aa3a8-7aa8-4eda-bc74-1809a4cc774b-operator-scripts\") pod \"a21aa3a8-7aa8-4eda-bc74-1809a4cc774b\" (UID: \"a21aa3a8-7aa8-4eda-bc74-1809a4cc774b\") " Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.355157 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ntzv\" (UniqueName: \"kubernetes.io/projected/23e4386a-307f-4ab1-bac5-fb2260dff5ec-kube-api-access-8ntzv\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.355169 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdn5j\" (UniqueName: \"kubernetes.io/projected/c5c4da6b-2b71-4018-90ce-d569b9f03cfd-kube-api-access-hdn5j\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.355209 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5c4da6b-2b71-4018-90ce-d569b9f03cfd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.355218 4789 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c5c4da6b-2b71-4018-90ce-d569b9f03cfd-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.355227 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23e4386a-307f-4ab1-bac5-fb2260dff5ec-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.355236 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccwqj\" (UniqueName: \"kubernetes.io/projected/8592ef51-732b-4428-adcb-1da5d2c7b2e8-kube-api-access-ccwqj\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.355244 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5c4da6b-2b71-4018-90ce-d569b9f03cfd-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.355253 4789 
reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8592ef51-732b-4428-adcb-1da5d2c7b2e8-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.355746 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a21aa3a8-7aa8-4eda-bc74-1809a4cc774b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a21aa3a8-7aa8-4eda-bc74-1809a4cc774b" (UID: "a21aa3a8-7aa8-4eda-bc74-1809a4cc774b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.355756 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc5c3e3f-2b45-4376-9b5b-cc92c2d4837b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cc5c3e3f-2b45-4376-9b5b-cc92c2d4837b" (UID: "cc5c3e3f-2b45-4376-9b5b-cc92c2d4837b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.357839 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc5c3e3f-2b45-4376-9b5b-cc92c2d4837b-kube-api-access-qbvhs" (OuterVolumeSpecName: "kube-api-access-qbvhs") pod "cc5c3e3f-2b45-4376-9b5b-cc92c2d4837b" (UID: "cc5c3e3f-2b45-4376-9b5b-cc92c2d4837b"). InnerVolumeSpecName "kube-api-access-qbvhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.358283 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a21aa3a8-7aa8-4eda-bc74-1809a4cc774b-kube-api-access-8gmzx" (OuterVolumeSpecName: "kube-api-access-8gmzx") pod "a21aa3a8-7aa8-4eda-bc74-1809a4cc774b" (UID: "a21aa3a8-7aa8-4eda-bc74-1809a4cc774b"). InnerVolumeSpecName "kube-api-access-8gmzx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.456906 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbvhs\" (UniqueName: \"kubernetes.io/projected/cc5c3e3f-2b45-4376-9b5b-cc92c2d4837b-kube-api-access-qbvhs\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.456935 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a21aa3a8-7aa8-4eda-bc74-1809a4cc774b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.456944 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc5c3e3f-2b45-4376-9b5b-cc92c2d4837b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.456952 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gmzx\" (UniqueName: \"kubernetes.io/projected/a21aa3a8-7aa8-4eda-bc74-1809a4cc774b-kube-api-access-8gmzx\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.595225 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9127-account-create-update-jb8z8" event={"ID":"cc5c3e3f-2b45-4376-9b5b-cc92c2d4837b","Type":"ContainerDied","Data":"41a7e6bc180c1c058bdf0076ce081ddc5f171ed3935d7c29b16cafa4b2fdf5c1"} Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.595270 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41a7e6bc180c1c058bdf0076ce081ddc5f171ed3935d7c29b16cafa4b2fdf5c1" Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.595318 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9127-account-create-update-jb8z8" Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.597080 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-tw98w" event={"ID":"23e4386a-307f-4ab1-bac5-fb2260dff5ec","Type":"ContainerDied","Data":"860ab300bd2405fb5389214e79cf9e3799c875be41b5e5402f749afdab4546e2"} Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.597107 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="860ab300bd2405fb5389214e79cf9e3799c875be41b5e5402f749afdab4546e2" Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.597141 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-tw98w" Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.599065 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e597-account-create-update-rz7c6" event={"ID":"a21aa3a8-7aa8-4eda-bc74-1809a4cc774b","Type":"ContainerDied","Data":"3a409da78f98f6b2c76f4b639f4a4c0b5e05e7f5dd014b824ca6f1dd7313ab27"} Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.599089 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a409da78f98f6b2c76f4b639f4a4c0b5e05e7f5dd014b824ca6f1dd7313ab27" Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.599120 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-e597-account-create-update-rz7c6" Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.601466 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rbrvz" event={"ID":"509b2067-171f-4e99-86fa-12cd19ff40ee","Type":"ContainerStarted","Data":"922c7100bf1da4cbcedd71c1e8161322c8c3c06483b622837923aef6ad441e2c"} Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.604362 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-sxqwc" Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.604809 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-sxqwc" event={"ID":"c5c4da6b-2b71-4018-90ce-d569b9f03cfd","Type":"ContainerDied","Data":"f4574fbf874b92f32e4adfad2f3c4e1fde8ad17931192bd3d2ab6305d5f12abe"} Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.604832 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4574fbf874b92f32e4adfad2f3c4e1fde8ad17931192bd3d2ab6305d5f12abe" Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.606364 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c430-account-create-update-gbnml" event={"ID":"8592ef51-732b-4428-adcb-1da5d2c7b2e8","Type":"ContainerDied","Data":"0f5961715f45f2caaec926699b0c7dbdaeb51e010ff87f09afb34d17267dd9b2"} Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.606445 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f5961715f45f2caaec926699b0c7dbdaeb51e010ff87f09afb34d17267dd9b2" Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.606560 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-c430-account-create-update-gbnml" Feb 02 21:39:12 crc kubenswrapper[4789]: I0202 21:39:12.629663 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-rbrvz" podStartSLOduration=1.832890868 podStartE2EDuration="6.629646328s" podCreationTimestamp="2026-02-02 21:39:06 +0000 UTC" firstStartedPulling="2026-02-02 21:39:07.259649398 +0000 UTC m=+1167.554674407" lastFinishedPulling="2026-02-02 21:39:12.056404838 +0000 UTC m=+1172.351429867" observedRunningTime="2026-02-02 21:39:12.618566875 +0000 UTC m=+1172.913591914" watchObservedRunningTime="2026-02-02 21:39:12.629646328 +0000 UTC m=+1172.924671357" Feb 02 21:39:13 crc kubenswrapper[4789]: I0202 21:39:13.647634 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-zsggc"] Feb 02 21:39:13 crc kubenswrapper[4789]: I0202 21:39:13.649143 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-zsggc" podUID="6139e5bb-4e3a-45d2-a284-eebacdb72422" containerName="dnsmasq-dns" containerID="cri-o://f94b762b6357cdf512aec6f1d903135715c5b8924699bd1ffc4b5669478b106f" gracePeriod=10 Feb 02 21:39:13 crc kubenswrapper[4789]: I0202 21:39:13.651252 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-764c5664d7-zsggc" Feb 02 21:39:13 crc kubenswrapper[4789]: I0202 21:39:13.675189 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-vjxxv"] Feb 02 21:39:13 crc kubenswrapper[4789]: E0202 21:39:13.675497 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a21aa3a8-7aa8-4eda-bc74-1809a4cc774b" containerName="mariadb-account-create-update" Feb 02 
21:39:13 crc kubenswrapper[4789]: I0202 21:39:13.675520 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="a21aa3a8-7aa8-4eda-bc74-1809a4cc774b" containerName="mariadb-account-create-update" Feb 02 21:39:13 crc kubenswrapper[4789]: E0202 21:39:13.675530 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee0798dd-aec6-4d0f-b4e9-efde747097cd" containerName="mariadb-database-create" Feb 02 21:39:13 crc kubenswrapper[4789]: I0202 21:39:13.675536 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee0798dd-aec6-4d0f-b4e9-efde747097cd" containerName="mariadb-database-create" Feb 02 21:39:13 crc kubenswrapper[4789]: E0202 21:39:13.675550 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23e4386a-307f-4ab1-bac5-fb2260dff5ec" containerName="mariadb-database-create" Feb 02 21:39:13 crc kubenswrapper[4789]: I0202 21:39:13.675556 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="23e4386a-307f-4ab1-bac5-fb2260dff5ec" containerName="mariadb-database-create" Feb 02 21:39:13 crc kubenswrapper[4789]: E0202 21:39:13.675564 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8592ef51-732b-4428-adcb-1da5d2c7b2e8" containerName="mariadb-account-create-update" Feb 02 21:39:13 crc kubenswrapper[4789]: I0202 21:39:13.675571 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="8592ef51-732b-4428-adcb-1da5d2c7b2e8" containerName="mariadb-account-create-update" Feb 02 21:39:13 crc kubenswrapper[4789]: E0202 21:39:13.675598 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3addc62a-b5f6-4e9e-9d32-6f4f9d5550b7" containerName="mariadb-database-create" Feb 02 21:39:13 crc kubenswrapper[4789]: I0202 21:39:13.675604 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="3addc62a-b5f6-4e9e-9d32-6f4f9d5550b7" containerName="mariadb-database-create" Feb 02 21:39:13 crc kubenswrapper[4789]: E0202 21:39:13.675620 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5c4da6b-2b71-4018-90ce-d569b9f03cfd" containerName="glance-db-sync" Feb 02 21:39:13 crc kubenswrapper[4789]: I0202 21:39:13.675626 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5c4da6b-2b71-4018-90ce-d569b9f03cfd" containerName="glance-db-sync" Feb 02 21:39:13 crc kubenswrapper[4789]: E0202 21:39:13.675637 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc5c3e3f-2b45-4376-9b5b-cc92c2d4837b" containerName="mariadb-account-create-update" Feb 02 21:39:13 crc kubenswrapper[4789]: I0202 21:39:13.675643 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc5c3e3f-2b45-4376-9b5b-cc92c2d4837b" containerName="mariadb-account-create-update" Feb 02 21:39:13 crc kubenswrapper[4789]: I0202 21:39:13.675845 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee0798dd-aec6-4d0f-b4e9-efde747097cd" containerName="mariadb-database-create" Feb 02 21:39:13 crc kubenswrapper[4789]: I0202 21:39:13.675863 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="23e4386a-307f-4ab1-bac5-fb2260dff5ec" containerName="mariadb-database-create" Feb 02 21:39:13 crc kubenswrapper[4789]: I0202 21:39:13.675874 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5c4da6b-2b71-4018-90ce-d569b9f03cfd" containerName="glance-db-sync" Feb 02 21:39:13 crc kubenswrapper[4789]: I0202 21:39:13.675886 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="a21aa3a8-7aa8-4eda-bc74-1809a4cc774b" containerName="mariadb-account-create-update" Feb 02 21:39:13 crc 
kubenswrapper[4789]: I0202 21:39:13.675894 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc5c3e3f-2b45-4376-9b5b-cc92c2d4837b" containerName="mariadb-account-create-update" Feb 02 21:39:13 crc kubenswrapper[4789]: I0202 21:39:13.675903 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="3addc62a-b5f6-4e9e-9d32-6f4f9d5550b7" containerName="mariadb-database-create" Feb 02 21:39:13 crc kubenswrapper[4789]: I0202 21:39:13.675911 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="8592ef51-732b-4428-adcb-1da5d2c7b2e8" containerName="mariadb-account-create-update" Feb 02 21:39:13 crc kubenswrapper[4789]: I0202 21:39:13.676732 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-vjxxv" Feb 02 21:39:13 crc kubenswrapper[4789]: I0202 21:39:13.698271 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-vjxxv"] Feb 02 21:39:13 crc kubenswrapper[4789]: I0202 21:39:13.877424 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3d51bcf-493e-46d1-83a7-a861a8a59577-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-vjxxv\" (UID: \"a3d51bcf-493e-46d1-83a7-a861a8a59577\") " pod="openstack/dnsmasq-dns-74f6bcbc87-vjxxv" Feb 02 21:39:13 crc kubenswrapper[4789]: I0202 21:39:13.877734 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz6rj\" (UniqueName: \"kubernetes.io/projected/a3d51bcf-493e-46d1-83a7-a861a8a59577-kube-api-access-xz6rj\") pod \"dnsmasq-dns-74f6bcbc87-vjxxv\" (UID: \"a3d51bcf-493e-46d1-83a7-a861a8a59577\") " pod="openstack/dnsmasq-dns-74f6bcbc87-vjxxv" Feb 02 21:39:13 crc kubenswrapper[4789]: I0202 21:39:13.877757 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3d51bcf-493e-46d1-83a7-a861a8a59577-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-vjxxv\" (UID: \"a3d51bcf-493e-46d1-83a7-a861a8a59577\") " pod="openstack/dnsmasq-dns-74f6bcbc87-vjxxv" Feb 02 21:39:13 crc kubenswrapper[4789]: I0202 21:39:13.888155 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3d51bcf-493e-46d1-83a7-a861a8a59577-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-vjxxv\" (UID: \"a3d51bcf-493e-46d1-83a7-a861a8a59577\") " pod="openstack/dnsmasq-dns-74f6bcbc87-vjxxv" Feb 02 21:39:13 crc kubenswrapper[4789]: I0202 21:39:13.888257 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3d51bcf-493e-46d1-83a7-a861a8a59577-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-vjxxv\" (UID: \"a3d51bcf-493e-46d1-83a7-a861a8a59577\") " pod="openstack/dnsmasq-dns-74f6bcbc87-vjxxv" Feb 02 21:39:13 crc kubenswrapper[4789]: I0202 21:39:13.888315 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3d51bcf-493e-46d1-83a7-a861a8a59577-config\") pod \"dnsmasq-dns-74f6bcbc87-vjxxv\" (UID: \"a3d51bcf-493e-46d1-83a7-a861a8a59577\") " pod="openstack/dnsmasq-dns-74f6bcbc87-vjxxv" Feb 02 21:39:13 crc kubenswrapper[4789]: I0202 21:39:13.990056 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3d51bcf-493e-46d1-83a7-a861a8a59577-config\") pod \"dnsmasq-dns-74f6bcbc87-vjxxv\" (UID: \"a3d51bcf-493e-46d1-83a7-a861a8a59577\") " pod="openstack/dnsmasq-dns-74f6bcbc87-vjxxv" Feb 02 21:39:13 crc kubenswrapper[4789]: I0202 21:39:13.990165 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3d51bcf-493e-46d1-83a7-a861a8a59577-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-vjxxv\" (UID: \"a3d51bcf-493e-46d1-83a7-a861a8a59577\") " pod="openstack/dnsmasq-dns-74f6bcbc87-vjxxv" Feb 02 21:39:13 crc kubenswrapper[4789]: I0202 21:39:13.990184 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz6rj\" (UniqueName: \"kubernetes.io/projected/a3d51bcf-493e-46d1-83a7-a861a8a59577-kube-api-access-xz6rj\") pod \"dnsmasq-dns-74f6bcbc87-vjxxv\" (UID: \"a3d51bcf-493e-46d1-83a7-a861a8a59577\") " pod="openstack/dnsmasq-dns-74f6bcbc87-vjxxv" Feb 02 21:39:13 crc kubenswrapper[4789]: I0202 21:39:13.990205 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3d51bcf-493e-46d1-83a7-a861a8a59577-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-vjxxv\" (UID: \"a3d51bcf-493e-46d1-83a7-a861a8a59577\") " pod="openstack/dnsmasq-dns-74f6bcbc87-vjxxv" Feb 02 21:39:13 crc kubenswrapper[4789]: I0202 21:39:13.990250 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3d51bcf-493e-46d1-83a7-a861a8a59577-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-vjxxv\" (UID: \"a3d51bcf-493e-46d1-83a7-a861a8a59577\") " pod="openstack/dnsmasq-dns-74f6bcbc87-vjxxv" Feb 02 21:39:13 crc kubenswrapper[4789]: I0202 21:39:13.990279 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3d51bcf-493e-46d1-83a7-a861a8a59577-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-vjxxv\" (UID: \"a3d51bcf-493e-46d1-83a7-a861a8a59577\") " pod="openstack/dnsmasq-dns-74f6bcbc87-vjxxv" Feb 02 21:39:13 crc kubenswrapper[4789]: I0202 21:39:13.991089 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3d51bcf-493e-46d1-83a7-a861a8a59577-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-vjxxv\" (UID: \"a3d51bcf-493e-46d1-83a7-a861a8a59577\") " pod="openstack/dnsmasq-dns-74f6bcbc87-vjxxv" Feb 02 21:39:13 crc kubenswrapper[4789]: I0202 21:39:13.991847 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3d51bcf-493e-46d1-83a7-a861a8a59577-config\") pod \"dnsmasq-dns-74f6bcbc87-vjxxv\" (UID: \"a3d51bcf-493e-46d1-83a7-a861a8a59577\") " pod="openstack/dnsmasq-dns-74f6bcbc87-vjxxv" Feb 02 21:39:13 crc kubenswrapper[4789]: I0202 21:39:13.992482 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3d51bcf-493e-46d1-83a7-a861a8a59577-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-vjxxv\" (UID: \"a3d51bcf-493e-46d1-83a7-a861a8a59577\") " pod="openstack/dnsmasq-dns-74f6bcbc87-vjxxv" Feb 02 21:39:13 crc kubenswrapper[4789]: I0202 21:39:13.992937 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3d51bcf-493e-46d1-83a7-a861a8a59577-ovsdbserver-sb\") 
pod \"dnsmasq-dns-74f6bcbc87-vjxxv\" (UID: \"a3d51bcf-493e-46d1-83a7-a861a8a59577\") " pod="openstack/dnsmasq-dns-74f6bcbc87-vjxxv" Feb 02 21:39:13 crc kubenswrapper[4789]: I0202 21:39:13.993263 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3d51bcf-493e-46d1-83a7-a861a8a59577-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-vjxxv\" (UID: \"a3d51bcf-493e-46d1-83a7-a861a8a59577\") " pod="openstack/dnsmasq-dns-74f6bcbc87-vjxxv" Feb 02 21:39:14 crc kubenswrapper[4789]: I0202 21:39:14.011076 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz6rj\" (UniqueName: \"kubernetes.io/projected/a3d51bcf-493e-46d1-83a7-a861a8a59577-kube-api-access-xz6rj\") pod \"dnsmasq-dns-74f6bcbc87-vjxxv\" (UID: \"a3d51bcf-493e-46d1-83a7-a861a8a59577\") " pod="openstack/dnsmasq-dns-74f6bcbc87-vjxxv" Feb 02 21:39:14 crc kubenswrapper[4789]: I0202 21:39:14.090614 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-zsggc" Feb 02 21:39:14 crc kubenswrapper[4789]: I0202 21:39:14.197202 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6139e5bb-4e3a-45d2-a284-eebacdb72422-config\") pod \"6139e5bb-4e3a-45d2-a284-eebacdb72422\" (UID: \"6139e5bb-4e3a-45d2-a284-eebacdb72422\") " Feb 02 21:39:14 crc kubenswrapper[4789]: I0202 21:39:14.197258 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6139e5bb-4e3a-45d2-a284-eebacdb72422-ovsdbserver-nb\") pod \"6139e5bb-4e3a-45d2-a284-eebacdb72422\" (UID: \"6139e5bb-4e3a-45d2-a284-eebacdb72422\") " Feb 02 21:39:14 crc kubenswrapper[4789]: I0202 21:39:14.197287 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6139e5bb-4e3a-45d2-a284-eebacdb72422-dns-svc\") pod \"6139e5bb-4e3a-45d2-a284-eebacdb72422\" (UID: \"6139e5bb-4e3a-45d2-a284-eebacdb72422\") " Feb 02 21:39:14 crc kubenswrapper[4789]: I0202 21:39:14.197317 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8cqk\" (UniqueName: \"kubernetes.io/projected/6139e5bb-4e3a-45d2-a284-eebacdb72422-kube-api-access-r8cqk\") pod \"6139e5bb-4e3a-45d2-a284-eebacdb72422\" (UID: \"6139e5bb-4e3a-45d2-a284-eebacdb72422\") " Feb 02 21:39:14 crc kubenswrapper[4789]: I0202 21:39:14.197438 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6139e5bb-4e3a-45d2-a284-eebacdb72422-ovsdbserver-sb\") pod \"6139e5bb-4e3a-45d2-a284-eebacdb72422\" (UID: \"6139e5bb-4e3a-45d2-a284-eebacdb72422\") " Feb 02 21:39:14 crc kubenswrapper[4789]: I0202 21:39:14.197485 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6139e5bb-4e3a-45d2-a284-eebacdb72422-dns-swift-storage-0\") pod \"6139e5bb-4e3a-45d2-a284-eebacdb72422\" (UID: \"6139e5bb-4e3a-45d2-a284-eebacdb72422\") " Feb 02 21:39:14 crc kubenswrapper[4789]: I0202 21:39:14.230783 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6139e5bb-4e3a-45d2-a284-eebacdb72422-kube-api-access-r8cqk" (OuterVolumeSpecName: "kube-api-access-r8cqk") pod "6139e5bb-4e3a-45d2-a284-eebacdb72422" (UID: 
"6139e5bb-4e3a-45d2-a284-eebacdb72422"). InnerVolumeSpecName "kube-api-access-r8cqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:39:14 crc kubenswrapper[4789]: I0202 21:39:14.280100 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6139e5bb-4e3a-45d2-a284-eebacdb72422-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6139e5bb-4e3a-45d2-a284-eebacdb72422" (UID: "6139e5bb-4e3a-45d2-a284-eebacdb72422"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:39:14 crc kubenswrapper[4789]: I0202 21:39:14.296840 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-vjxxv" Feb 02 21:39:14 crc kubenswrapper[4789]: I0202 21:39:14.300213 4789 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6139e5bb-4e3a-45d2-a284-eebacdb72422-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:14 crc kubenswrapper[4789]: I0202 21:39:14.300244 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8cqk\" (UniqueName: \"kubernetes.io/projected/6139e5bb-4e3a-45d2-a284-eebacdb72422-kube-api-access-r8cqk\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:14 crc kubenswrapper[4789]: I0202 21:39:14.309851 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6139e5bb-4e3a-45d2-a284-eebacdb72422-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6139e5bb-4e3a-45d2-a284-eebacdb72422" (UID: "6139e5bb-4e3a-45d2-a284-eebacdb72422"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:39:14 crc kubenswrapper[4789]: I0202 21:39:14.340067 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6139e5bb-4e3a-45d2-a284-eebacdb72422-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6139e5bb-4e3a-45d2-a284-eebacdb72422" (UID: "6139e5bb-4e3a-45d2-a284-eebacdb72422"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:39:14 crc kubenswrapper[4789]: I0202 21:39:14.346784 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6139e5bb-4e3a-45d2-a284-eebacdb72422-config" (OuterVolumeSpecName: "config") pod "6139e5bb-4e3a-45d2-a284-eebacdb72422" (UID: "6139e5bb-4e3a-45d2-a284-eebacdb72422"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:39:14 crc kubenswrapper[4789]: I0202 21:39:14.358149 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6139e5bb-4e3a-45d2-a284-eebacdb72422-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6139e5bb-4e3a-45d2-a284-eebacdb72422" (UID: "6139e5bb-4e3a-45d2-a284-eebacdb72422"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:39:14 crc kubenswrapper[4789]: I0202 21:39:14.404606 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6139e5bb-4e3a-45d2-a284-eebacdb72422-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:14 crc kubenswrapper[4789]: I0202 21:39:14.404639 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6139e5bb-4e3a-45d2-a284-eebacdb72422-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:14 crc kubenswrapper[4789]: I0202 21:39:14.404649 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6139e5bb-4e3a-45d2-a284-eebacdb72422-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:14 crc kubenswrapper[4789]: I0202 21:39:14.404657 4789 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6139e5bb-4e3a-45d2-a284-eebacdb72422-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:14 crc kubenswrapper[4789]: I0202 21:39:14.622774 4789 generic.go:334] "Generic (PLEG): container finished" podID="6139e5bb-4e3a-45d2-a284-eebacdb72422" containerID="f94b762b6357cdf512aec6f1d903135715c5b8924699bd1ffc4b5669478b106f" exitCode=0 Feb 02 21:39:14 crc kubenswrapper[4789]: I0202 21:39:14.622816 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-zsggc" event={"ID":"6139e5bb-4e3a-45d2-a284-eebacdb72422","Type":"ContainerDied","Data":"f94b762b6357cdf512aec6f1d903135715c5b8924699bd1ffc4b5669478b106f"} Feb 02 21:39:14 crc kubenswrapper[4789]: I0202 21:39:14.622841 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-zsggc" event={"ID":"6139e5bb-4e3a-45d2-a284-eebacdb72422","Type":"ContainerDied","Data":"f91e146ce6254fac34d7590746ac108e3550d977f178f7760cb10c0581a03a04"} Feb 02 21:39:14 crc kubenswrapper[4789]: I0202 21:39:14.622859 4789 scope.go:117] "RemoveContainer" containerID="f94b762b6357cdf512aec6f1d903135715c5b8924699bd1ffc4b5669478b106f" Feb 02 21:39:14 crc kubenswrapper[4789]: I0202 21:39:14.622989 4789 util.go:48] "No ready sandbox for pod can be found. 
Feb 02 21:39:14 crc kubenswrapper[4789]: I0202 21:39:14.622989 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-zsggc"
Feb 02 21:39:14 crc kubenswrapper[4789]: I0202 21:39:14.649750 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-zsggc"]
Feb 02 21:39:14 crc kubenswrapper[4789]: I0202 21:39:14.651002 4789 scope.go:117] "RemoveContainer" containerID="6f7f368af8c9fc46ec4f500b6fdcd99c54aa4139c8dee4524fa4194556dbd319"
Feb 02 21:39:14 crc kubenswrapper[4789]: I0202 21:39:14.663953 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-zsggc"]
Feb 02 21:39:14 crc kubenswrapper[4789]: I0202 21:39:14.691790 4789 scope.go:117] "RemoveContainer" containerID="f94b762b6357cdf512aec6f1d903135715c5b8924699bd1ffc4b5669478b106f"
Feb 02 21:39:14 crc kubenswrapper[4789]: E0202 21:39:14.692126 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f94b762b6357cdf512aec6f1d903135715c5b8924699bd1ffc4b5669478b106f\": container with ID starting with f94b762b6357cdf512aec6f1d903135715c5b8924699bd1ffc4b5669478b106f not found: ID does not exist" containerID="f94b762b6357cdf512aec6f1d903135715c5b8924699bd1ffc4b5669478b106f"
Feb 02 21:39:14 crc kubenswrapper[4789]: I0202 21:39:14.692174 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f94b762b6357cdf512aec6f1d903135715c5b8924699bd1ffc4b5669478b106f"} err="failed to get container status \"f94b762b6357cdf512aec6f1d903135715c5b8924699bd1ffc4b5669478b106f\": rpc error: code = NotFound desc = could not find container \"f94b762b6357cdf512aec6f1d903135715c5b8924699bd1ffc4b5669478b106f\": container with ID starting with f94b762b6357cdf512aec6f1d903135715c5b8924699bd1ffc4b5669478b106f not found: ID does not exist"
Feb 02 21:39:14 crc kubenswrapper[4789]: I0202 21:39:14.692209 4789 scope.go:117] "RemoveContainer" containerID="6f7f368af8c9fc46ec4f500b6fdcd99c54aa4139c8dee4524fa4194556dbd319"
Feb 02 21:39:14 crc kubenswrapper[4789]: E0202 21:39:14.692645 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f7f368af8c9fc46ec4f500b6fdcd99c54aa4139c8dee4524fa4194556dbd319\": container with ID starting with 6f7f368af8c9fc46ec4f500b6fdcd99c54aa4139c8dee4524fa4194556dbd319 not found: ID does not exist" containerID="6f7f368af8c9fc46ec4f500b6fdcd99c54aa4139c8dee4524fa4194556dbd319"
Feb 02 21:39:14 crc kubenswrapper[4789]: I0202 21:39:14.692666 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f7f368af8c9fc46ec4f500b6fdcd99c54aa4139c8dee4524fa4194556dbd319"} err="failed to get container status \"6f7f368af8c9fc46ec4f500b6fdcd99c54aa4139c8dee4524fa4194556dbd319\": rpc error: code = NotFound desc = could not find container \"6f7f368af8c9fc46ec4f500b6fdcd99c54aa4139c8dee4524fa4194556dbd319\": container with ID starting with 6f7f368af8c9fc46ec4f500b6fdcd99c54aa4139c8dee4524fa4194556dbd319 not found: ID does not exist"
Feb 02 21:39:14 crc kubenswrapper[4789]: I0202 21:39:14.774887 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-vjxxv"]
Feb 02 21:39:15 crc kubenswrapper[4789]: I0202 21:39:15.633099 4789 generic.go:334] "Generic (PLEG): container finished" podID="a3d51bcf-493e-46d1-83a7-a861a8a59577" containerID="5aa5a97dd9d445e7950c2f5fde861f59476f37b5ef96aec01ee3db928f31ac72" exitCode=0
Feb 02 21:39:15 crc kubenswrapper[4789]: I0202 21:39:15.633359 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-vjxxv" event={"ID":"a3d51bcf-493e-46d1-83a7-a861a8a59577","Type":"ContainerDied","Data":"5aa5a97dd9d445e7950c2f5fde861f59476f37b5ef96aec01ee3db928f31ac72"}
Feb 02 21:39:15 crc kubenswrapper[4789]: I0202 21:39:15.633418 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-vjxxv" event={"ID":"a3d51bcf-493e-46d1-83a7-a861a8a59577","Type":"ContainerStarted","Data":"330d84511b202d7bcaddefe5dfdb83b937c00f0b7484471e40de35219a7e5b3e"}
Feb 02 21:39:16 crc kubenswrapper[4789]: I0202 21:39:16.437691 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6139e5bb-4e3a-45d2-a284-eebacdb72422" path="/var/lib/kubelet/pods/6139e5bb-4e3a-45d2-a284-eebacdb72422/volumes"
Feb 02 21:39:16 crc kubenswrapper[4789]: I0202 21:39:16.644125 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-vjxxv" event={"ID":"a3d51bcf-493e-46d1-83a7-a861a8a59577","Type":"ContainerStarted","Data":"a6207dd8d2d0ad6b5dca859e615bb3beeaa816c2ef66cbf9255df3545eb9cbda"}
Feb 02 21:39:16 crc kubenswrapper[4789]: I0202 21:39:16.644276 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-vjxxv"
Feb 02 21:39:16 crc kubenswrapper[4789]: I0202 21:39:16.646341 4789 generic.go:334] "Generic (PLEG): container finished" podID="509b2067-171f-4e99-86fa-12cd19ff40ee" containerID="922c7100bf1da4cbcedd71c1e8161322c8c3c06483b622837923aef6ad441e2c" exitCode=0
Feb 02 21:39:16 crc kubenswrapper[4789]: I0202 21:39:16.646391 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rbrvz" event={"ID":"509b2067-171f-4e99-86fa-12cd19ff40ee","Type":"ContainerDied","Data":"922c7100bf1da4cbcedd71c1e8161322c8c3c06483b622837923aef6ad441e2c"}
Feb 02 21:39:16 crc kubenswrapper[4789]: I0202 21:39:16.669549 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-vjxxv" podStartSLOduration=3.669529293 podStartE2EDuration="3.669529293s" podCreationTimestamp="2026-02-02 21:39:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:39:16.66375424 +0000 UTC m=+1176.958779259" watchObservedRunningTime="2026-02-02 21:39:16.669529293 +0000 UTC m=+1176.964554322"
Feb 02 21:39:17 crc kubenswrapper[4789]: I0202 21:39:17.991770 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-rbrvz"
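[In the startup-duration line above, podStartE2EDuration is the watch-observed running time minus podCreationTimestamp: 21:39:16.669529293 - 21:39:13 = 3.669529293s. The zero-valued firstStartedPulling/lastFinishedPulling mean no image pull was needed, and m=+1176.9... is the kubelet's monotonic clock offset since process start. A tiny Go check of that arithmetic, using the timestamps copied from the line:]

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values from the pod_startup_latency_tracker entry above.
	created, _ := time.Parse("2006-01-02 15:04:05 -0700 MST", "2026-02-02 21:39:13 +0000 UTC")
	running, _ := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", "2026-02-02 21:39:16.669529293 +0000 UTC")
	fmt.Println(running.Sub(created)) // 3.669529293s, matching podStartE2EDuration
}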
Feb 02 21:39:18 crc kubenswrapper[4789]: I0202 21:39:18.164009 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/509b2067-171f-4e99-86fa-12cd19ff40ee-config-data\") pod \"509b2067-171f-4e99-86fa-12cd19ff40ee\" (UID: \"509b2067-171f-4e99-86fa-12cd19ff40ee\") "
Feb 02 21:39:18 crc kubenswrapper[4789]: I0202 21:39:18.164659 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509b2067-171f-4e99-86fa-12cd19ff40ee-combined-ca-bundle\") pod \"509b2067-171f-4e99-86fa-12cd19ff40ee\" (UID: \"509b2067-171f-4e99-86fa-12cd19ff40ee\") "
Feb 02 21:39:18 crc kubenswrapper[4789]: I0202 21:39:18.164794 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4s52g\" (UniqueName: \"kubernetes.io/projected/509b2067-171f-4e99-86fa-12cd19ff40ee-kube-api-access-4s52g\") pod \"509b2067-171f-4e99-86fa-12cd19ff40ee\" (UID: \"509b2067-171f-4e99-86fa-12cd19ff40ee\") "
Feb 02 21:39:18 crc kubenswrapper[4789]: I0202 21:39:18.171699 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/509b2067-171f-4e99-86fa-12cd19ff40ee-kube-api-access-4s52g" (OuterVolumeSpecName: "kube-api-access-4s52g") pod "509b2067-171f-4e99-86fa-12cd19ff40ee" (UID: "509b2067-171f-4e99-86fa-12cd19ff40ee"). InnerVolumeSpecName "kube-api-access-4s52g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 21:39:18 crc kubenswrapper[4789]: I0202 21:39:18.217497 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/509b2067-171f-4e99-86fa-12cd19ff40ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "509b2067-171f-4e99-86fa-12cd19ff40ee" (UID: "509b2067-171f-4e99-86fa-12cd19ff40ee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 21:39:18 crc kubenswrapper[4789]: I0202 21:39:18.228970 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/509b2067-171f-4e99-86fa-12cd19ff40ee-config-data" (OuterVolumeSpecName: "config-data") pod "509b2067-171f-4e99-86fa-12cd19ff40ee" (UID: "509b2067-171f-4e99-86fa-12cd19ff40ee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 21:39:18 crc kubenswrapper[4789]: I0202 21:39:18.266928 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509b2067-171f-4e99-86fa-12cd19ff40ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 21:39:18 crc kubenswrapper[4789]: I0202 21:39:18.266965 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4s52g\" (UniqueName: \"kubernetes.io/projected/509b2067-171f-4e99-86fa-12cd19ff40ee-kube-api-access-4s52g\") on node \"crc\" DevicePath \"\""
Feb 02 21:39:18 crc kubenswrapper[4789]: I0202 21:39:18.266978 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/509b2067-171f-4e99-86fa-12cd19ff40ee-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 21:39:18 crc kubenswrapper[4789]: I0202 21:39:18.667026 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rbrvz" event={"ID":"509b2067-171f-4e99-86fa-12cd19ff40ee","Type":"ContainerDied","Data":"fd230e3ac0620fd65d96c10516ba69c1f04fa11dc5476bc1ba6f1a08422d25ab"}
Feb 02 21:39:18 crc kubenswrapper[4789]: I0202 21:39:18.667103 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd230e3ac0620fd65d96c10516ba69c1f04fa11dc5476bc1ba6f1a08422d25ab"
Feb 02 21:39:18 crc kubenswrapper[4789]: I0202 21:39:18.667057 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-rbrvz"
Feb 02 21:39:18 crc kubenswrapper[4789]: I0202 21:39:18.981786 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-sk5p7"]
Feb 02 21:39:18 crc kubenswrapper[4789]: E0202 21:39:18.982358 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6139e5bb-4e3a-45d2-a284-eebacdb72422" containerName="init"
Feb 02 21:39:18 crc kubenswrapper[4789]: I0202 21:39:18.982373 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="6139e5bb-4e3a-45d2-a284-eebacdb72422" containerName="init"
Feb 02 21:39:18 crc kubenswrapper[4789]: E0202 21:39:18.982393 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6139e5bb-4e3a-45d2-a284-eebacdb72422" containerName="dnsmasq-dns"
Feb 02 21:39:18 crc kubenswrapper[4789]: I0202 21:39:18.982399 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="6139e5bb-4e3a-45d2-a284-eebacdb72422" containerName="dnsmasq-dns"
Feb 02 21:39:18 crc kubenswrapper[4789]: E0202 21:39:18.982420 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="509b2067-171f-4e99-86fa-12cd19ff40ee" containerName="keystone-db-sync"
Feb 02 21:39:18 crc kubenswrapper[4789]: I0202 21:39:18.982427 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="509b2067-171f-4e99-86fa-12cd19ff40ee" containerName="keystone-db-sync"
Feb 02 21:39:18 crc kubenswrapper[4789]: I0202 21:39:18.982591 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="6139e5bb-4e3a-45d2-a284-eebacdb72422" containerName="dnsmasq-dns"
Feb 02 21:39:18 crc kubenswrapper[4789]: I0202 21:39:18.982603 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="509b2067-171f-4e99-86fa-12cd19ff40ee" containerName="keystone-db-sync"
Feb 02 21:39:18 crc kubenswrapper[4789]: I0202 21:39:18.983166 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-sk5p7"
Feb 02 21:39:18 crc kubenswrapper[4789]: I0202 21:39:18.985810 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 02 21:39:18 crc kubenswrapper[4789]: I0202 21:39:18.986794 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 02 21:39:18 crc kubenswrapper[4789]: I0202 21:39:18.987335 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2xd6t"
Feb 02 21:39:18 crc kubenswrapper[4789]: I0202 21:39:18.987873 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 02 21:39:18 crc kubenswrapper[4789]: I0202 21:39:18.988326 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.001832 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-vjxxv"]
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.002078 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-vjxxv" podUID="a3d51bcf-493e-46d1-83a7-a861a8a59577" containerName="dnsmasq-dns" containerID="cri-o://a6207dd8d2d0ad6b5dca859e615bb3beeaa816c2ef66cbf9255df3545eb9cbda" gracePeriod=10
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.019763 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-sk5p7"]
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.047500 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-mlcmr"]
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.056082 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-mlcmr"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.081964 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-mlcmr"]
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.083163 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7a402ea-2333-422a-a02d-b2ad98a989a4-dns-svc\") pod \"dnsmasq-dns-847c4cc679-mlcmr\" (UID: \"c7a402ea-2333-422a-a02d-b2ad98a989a4\") " pod="openstack/dnsmasq-dns-847c4cc679-mlcmr"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.083190 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfvnm\" (UniqueName: \"kubernetes.io/projected/c7a402ea-2333-422a-a02d-b2ad98a989a4-kube-api-access-rfvnm\") pod \"dnsmasq-dns-847c4cc679-mlcmr\" (UID: \"c7a402ea-2333-422a-a02d-b2ad98a989a4\") " pod="openstack/dnsmasq-dns-847c4cc679-mlcmr"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.083238 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7a402ea-2333-422a-a02d-b2ad98a989a4-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-mlcmr\" (UID: \"c7a402ea-2333-422a-a02d-b2ad98a989a4\") " pod="openstack/dnsmasq-dns-847c4cc679-mlcmr"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.083267 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7a402ea-2333-422a-a02d-b2ad98a989a4-config\") pod \"dnsmasq-dns-847c4cc679-mlcmr\" (UID: \"c7a402ea-2333-422a-a02d-b2ad98a989a4\") " pod="openstack/dnsmasq-dns-847c4cc679-mlcmr"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.083288 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c7a402ea-2333-422a-a02d-b2ad98a989a4-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-mlcmr\" (UID: \"c7a402ea-2333-422a-a02d-b2ad98a989a4\") " pod="openstack/dnsmasq-dns-847c4cc679-mlcmr"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.083318 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7a402ea-2333-422a-a02d-b2ad98a989a4-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-mlcmr\" (UID: \"c7a402ea-2333-422a-a02d-b2ad98a989a4\") " pod="openstack/dnsmasq-dns-847c4cc679-mlcmr"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.180110 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-rvwgc"]
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.184447 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3168dcf5-60b0-4c18-bf26-4b480e7c0f05-credential-keys\") pod \"keystone-bootstrap-sk5p7\" (UID: \"3168dcf5-60b0-4c18-bf26-4b480e7c0f05\") " pod="openstack/keystone-bootstrap-sk5p7"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.184476 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-rvwgc"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.184497 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7a402ea-2333-422a-a02d-b2ad98a989a4-dns-svc\") pod \"dnsmasq-dns-847c4cc679-mlcmr\" (UID: \"c7a402ea-2333-422a-a02d-b2ad98a989a4\") " pod="openstack/dnsmasq-dns-847c4cc679-mlcmr"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.184522 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfvnm\" (UniqueName: \"kubernetes.io/projected/c7a402ea-2333-422a-a02d-b2ad98a989a4-kube-api-access-rfvnm\") pod \"dnsmasq-dns-847c4cc679-mlcmr\" (UID: \"c7a402ea-2333-422a-a02d-b2ad98a989a4\") " pod="openstack/dnsmasq-dns-847c4cc679-mlcmr"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.184539 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3168dcf5-60b0-4c18-bf26-4b480e7c0f05-config-data\") pod \"keystone-bootstrap-sk5p7\" (UID: \"3168dcf5-60b0-4c18-bf26-4b480e7c0f05\") " pod="openstack/keystone-bootstrap-sk5p7"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.184553 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3168dcf5-60b0-4c18-bf26-4b480e7c0f05-fernet-keys\") pod \"keystone-bootstrap-sk5p7\" (UID: \"3168dcf5-60b0-4c18-bf26-4b480e7c0f05\") " pod="openstack/keystone-bootstrap-sk5p7"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.184595 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7a402ea-2333-422a-a02d-b2ad98a989a4-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-mlcmr\" (UID: \"c7a402ea-2333-422a-a02d-b2ad98a989a4\") " pod="openstack/dnsmasq-dns-847c4cc679-mlcmr"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.184626 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqktz\" (UniqueName: \"kubernetes.io/projected/3168dcf5-60b0-4c18-bf26-4b480e7c0f05-kube-api-access-nqktz\") pod \"keystone-bootstrap-sk5p7\" (UID: \"3168dcf5-60b0-4c18-bf26-4b480e7c0f05\") " pod="openstack/keystone-bootstrap-sk5p7"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.184650 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7a402ea-2333-422a-a02d-b2ad98a989a4-config\") pod \"dnsmasq-dns-847c4cc679-mlcmr\" (UID: \"c7a402ea-2333-422a-a02d-b2ad98a989a4\") " pod="openstack/dnsmasq-dns-847c4cc679-mlcmr"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.184671 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c7a402ea-2333-422a-a02d-b2ad98a989a4-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-mlcmr\" (UID: \"c7a402ea-2333-422a-a02d-b2ad98a989a4\") " pod="openstack/dnsmasq-dns-847c4cc679-mlcmr"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.184688 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7a402ea-2333-422a-a02d-b2ad98a989a4-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-mlcmr\" (UID: \"c7a402ea-2333-422a-a02d-b2ad98a989a4\") " pod="openstack/dnsmasq-dns-847c4cc679-mlcmr"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.184722 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3168dcf5-60b0-4c18-bf26-4b480e7c0f05-combined-ca-bundle\") pod \"keystone-bootstrap-sk5p7\" (UID: \"3168dcf5-60b0-4c18-bf26-4b480e7c0f05\") " pod="openstack/keystone-bootstrap-sk5p7"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.184743 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3168dcf5-60b0-4c18-bf26-4b480e7c0f05-scripts\") pod \"keystone-bootstrap-sk5p7\" (UID: \"3168dcf5-60b0-4c18-bf26-4b480e7c0f05\") " pod="openstack/keystone-bootstrap-sk5p7"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.185720 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7a402ea-2333-422a-a02d-b2ad98a989a4-dns-svc\") pod \"dnsmasq-dns-847c4cc679-mlcmr\" (UID: \"c7a402ea-2333-422a-a02d-b2ad98a989a4\") " pod="openstack/dnsmasq-dns-847c4cc679-mlcmr"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.186297 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7a402ea-2333-422a-a02d-b2ad98a989a4-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-mlcmr\" (UID: \"c7a402ea-2333-422a-a02d-b2ad98a989a4\") " pod="openstack/dnsmasq-dns-847c4cc679-mlcmr"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.187167 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c7a402ea-2333-422a-a02d-b2ad98a989a4-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-mlcmr\" (UID: \"c7a402ea-2333-422a-a02d-b2ad98a989a4\") " pod="openstack/dnsmasq-dns-847c4cc679-mlcmr"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.189035 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.189173 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-wwv7s"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.189384 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.190066 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7a402ea-2333-422a-a02d-b2ad98a989a4-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-mlcmr\" (UID: \"c7a402ea-2333-422a-a02d-b2ad98a989a4\") " pod="openstack/dnsmasq-dns-847c4cc679-mlcmr"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.190621 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7a402ea-2333-422a-a02d-b2ad98a989a4-config\") pod \"dnsmasq-dns-847c4cc679-mlcmr\" (UID: \"c7a402ea-2333-422a-a02d-b2ad98a989a4\") " pod="openstack/dnsmasq-dns-847c4cc679-mlcmr"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.191358 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-rvwgc"]
\"kubernetes.io/projected/c7a402ea-2333-422a-a02d-b2ad98a989a4-kube-api-access-rfvnm\") pod \"dnsmasq-dns-847c4cc679-mlcmr\" (UID: \"c7a402ea-2333-422a-a02d-b2ad98a989a4\") " pod="openstack/dnsmasq-dns-847c4cc679-mlcmr" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.253118 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.254928 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.260438 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.261276 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.267177 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-5kfhw"] Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.285251 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-5kfhw" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.287326 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4aaa8d11-6409-415e-836b-b7941b66f6e4-etc-machine-id\") pod \"cinder-db-sync-5kfhw\" (UID: \"4aaa8d11-6409-415e-836b-b7941b66f6e4\") " pod="openstack/cinder-db-sync-5kfhw" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.287364 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4aaa8d11-6409-415e-836b-b7941b66f6e4-db-sync-config-data\") pod \"cinder-db-sync-5kfhw\" (UID: \"4aaa8d11-6409-415e-836b-b7941b66f6e4\") " pod="openstack/cinder-db-sync-5kfhw" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.287383 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4aaa8d11-6409-415e-836b-b7941b66f6e4-config-data\") pod \"cinder-db-sync-5kfhw\" (UID: \"4aaa8d11-6409-415e-836b-b7941b66f6e4\") " pod="openstack/cinder-db-sync-5kfhw" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.287407 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4aaa8d11-6409-415e-836b-b7941b66f6e4-scripts\") pod \"cinder-db-sync-5kfhw\" (UID: \"4aaa8d11-6409-415e-836b-b7941b66f6e4\") " pod="openstack/cinder-db-sync-5kfhw" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.287428 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3168dcf5-60b0-4c18-bf26-4b480e7c0f05-credential-keys\") pod \"keystone-bootstrap-sk5p7\" (UID: \"3168dcf5-60b0-4c18-bf26-4b480e7c0f05\") " pod="openstack/keystone-bootstrap-sk5p7" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.287445 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c202a904-ccae-4f90-a284-d7e2a3b5e0f7\") " pod="openstack/ceilometer-0" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 
21:39:19.287466 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwp6d\" (UniqueName: \"kubernetes.io/projected/4aaa8d11-6409-415e-836b-b7941b66f6e4-kube-api-access-fwp6d\") pod \"cinder-db-sync-5kfhw\" (UID: \"4aaa8d11-6409-415e-836b-b7941b66f6e4\") " pod="openstack/cinder-db-sync-5kfhw" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.287490 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3168dcf5-60b0-4c18-bf26-4b480e7c0f05-config-data\") pod \"keystone-bootstrap-sk5p7\" (UID: \"3168dcf5-60b0-4c18-bf26-4b480e7c0f05\") " pod="openstack/keystone-bootstrap-sk5p7" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.287504 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aaa8d11-6409-415e-836b-b7941b66f6e4-combined-ca-bundle\") pod \"cinder-db-sync-5kfhw\" (UID: \"4aaa8d11-6409-415e-836b-b7941b66f6e4\") " pod="openstack/cinder-db-sync-5kfhw" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.287519 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3168dcf5-60b0-4c18-bf26-4b480e7c0f05-fernet-keys\") pod \"keystone-bootstrap-sk5p7\" (UID: \"3168dcf5-60b0-4c18-bf26-4b480e7c0f05\") " pod="openstack/keystone-bootstrap-sk5p7" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.287534 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-log-httpd\") pod \"ceilometer-0\" (UID: \"c202a904-ccae-4f90-a284-d7e2a3b5e0f7\") " pod="openstack/ceilometer-0" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.287550 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flplq\" (UniqueName: \"kubernetes.io/projected/42e496d3-8d68-48a0-a0ca-058126b200a1-kube-api-access-flplq\") pod \"neutron-db-sync-rvwgc\" (UID: \"42e496d3-8d68-48a0-a0ca-058126b200a1\") " pod="openstack/neutron-db-sync-rvwgc" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.287571 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-scripts\") pod \"ceilometer-0\" (UID: \"c202a904-ccae-4f90-a284-d7e2a3b5e0f7\") " pod="openstack/ceilometer-0" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.287612 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42e496d3-8d68-48a0-a0ca-058126b200a1-combined-ca-bundle\") pod \"neutron-db-sync-rvwgc\" (UID: \"42e496d3-8d68-48a0-a0ca-058126b200a1\") " pod="openstack/neutron-db-sync-rvwgc" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.287635 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-config-data\") pod \"ceilometer-0\" (UID: \"c202a904-ccae-4f90-a284-d7e2a3b5e0f7\") " pod="openstack/ceilometer-0" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.287680 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-nqktz\" (UniqueName: \"kubernetes.io/projected/3168dcf5-60b0-4c18-bf26-4b480e7c0f05-kube-api-access-nqktz\") pod \"keystone-bootstrap-sk5p7\" (UID: \"3168dcf5-60b0-4c18-bf26-4b480e7c0f05\") " pod="openstack/keystone-bootstrap-sk5p7" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.287710 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/42e496d3-8d68-48a0-a0ca-058126b200a1-config\") pod \"neutron-db-sync-rvwgc\" (UID: \"42e496d3-8d68-48a0-a0ca-058126b200a1\") " pod="openstack/neutron-db-sync-rvwgc" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.287727 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c202a904-ccae-4f90-a284-d7e2a3b5e0f7\") " pod="openstack/ceilometer-0" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.287755 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptb7f\" (UniqueName: \"kubernetes.io/projected/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-kube-api-access-ptb7f\") pod \"ceilometer-0\" (UID: \"c202a904-ccae-4f90-a284-d7e2a3b5e0f7\") " pod="openstack/ceilometer-0" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.287784 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-run-httpd\") pod \"ceilometer-0\" (UID: \"c202a904-ccae-4f90-a284-d7e2a3b5e0f7\") " pod="openstack/ceilometer-0" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.287802 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3168dcf5-60b0-4c18-bf26-4b480e7c0f05-combined-ca-bundle\") pod \"keystone-bootstrap-sk5p7\" (UID: \"3168dcf5-60b0-4c18-bf26-4b480e7c0f05\") " pod="openstack/keystone-bootstrap-sk5p7" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.287822 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3168dcf5-60b0-4c18-bf26-4b480e7c0f05-scripts\") pod \"keystone-bootstrap-sk5p7\" (UID: \"3168dcf5-60b0-4c18-bf26-4b480e7c0f05\") " pod="openstack/keystone-bootstrap-sk5p7" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.289214 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.289401 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.289530 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-n4mwh" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.304362 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3168dcf5-60b0-4c18-bf26-4b480e7c0f05-scripts\") pod \"keystone-bootstrap-sk5p7\" (UID: \"3168dcf5-60b0-4c18-bf26-4b480e7c0f05\") " pod="openstack/keystone-bootstrap-sk5p7" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.304595 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/3168dcf5-60b0-4c18-bf26-4b480e7c0f05-fernet-keys\") pod \"keystone-bootstrap-sk5p7\" (UID: \"3168dcf5-60b0-4c18-bf26-4b480e7c0f05\") " pod="openstack/keystone-bootstrap-sk5p7" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.306229 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3168dcf5-60b0-4c18-bf26-4b480e7c0f05-combined-ca-bundle\") pod \"keystone-bootstrap-sk5p7\" (UID: \"3168dcf5-60b0-4c18-bf26-4b480e7c0f05\") " pod="openstack/keystone-bootstrap-sk5p7" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.310126 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.312256 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3168dcf5-60b0-4c18-bf26-4b480e7c0f05-config-data\") pod \"keystone-bootstrap-sk5p7\" (UID: \"3168dcf5-60b0-4c18-bf26-4b480e7c0f05\") " pod="openstack/keystone-bootstrap-sk5p7" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.317894 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3168dcf5-60b0-4c18-bf26-4b480e7c0f05-credential-keys\") pod \"keystone-bootstrap-sk5p7\" (UID: \"3168dcf5-60b0-4c18-bf26-4b480e7c0f05\") " pod="openstack/keystone-bootstrap-sk5p7" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.318184 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqktz\" (UniqueName: \"kubernetes.io/projected/3168dcf5-60b0-4c18-bf26-4b480e7c0f05-kube-api-access-nqktz\") pod \"keystone-bootstrap-sk5p7\" (UID: \"3168dcf5-60b0-4c18-bf26-4b480e7c0f05\") " pod="openstack/keystone-bootstrap-sk5p7" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.330756 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-5kfhw"] Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.337822 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-mc8z9"] Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.338846 4789 util.go:30] "No sandbox for pod can be found. 
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.338846 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-mc8z9"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.344226 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.344450 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-v2ntj"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.360641 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-mc8z9"]
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.404011 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-run-httpd\") pod \"ceilometer-0\" (UID: \"c202a904-ccae-4f90-a284-d7e2a3b5e0f7\") " pod="openstack/ceilometer-0"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.404112 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4aaa8d11-6409-415e-836b-b7941b66f6e4-etc-machine-id\") pod \"cinder-db-sync-5kfhw\" (UID: \"4aaa8d11-6409-415e-836b-b7941b66f6e4\") " pod="openstack/cinder-db-sync-5kfhw"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.404174 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4aaa8d11-6409-415e-836b-b7941b66f6e4-db-sync-config-data\") pod \"cinder-db-sync-5kfhw\" (UID: \"4aaa8d11-6409-415e-836b-b7941b66f6e4\") " pod="openstack/cinder-db-sync-5kfhw"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.404197 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4aaa8d11-6409-415e-836b-b7941b66f6e4-config-data\") pod \"cinder-db-sync-5kfhw\" (UID: \"4aaa8d11-6409-415e-836b-b7941b66f6e4\") " pod="openstack/cinder-db-sync-5kfhw"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.404246 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4aaa8d11-6409-415e-836b-b7941b66f6e4-scripts\") pod \"cinder-db-sync-5kfhw\" (UID: \"4aaa8d11-6409-415e-836b-b7941b66f6e4\") " pod="openstack/cinder-db-sync-5kfhw"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.404290 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c202a904-ccae-4f90-a284-d7e2a3b5e0f7\") " pod="openstack/ceilometer-0"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.404330 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwp6d\" (UniqueName: \"kubernetes.io/projected/4aaa8d11-6409-415e-836b-b7941b66f6e4-kube-api-access-fwp6d\") pod \"cinder-db-sync-5kfhw\" (UID: \"4aaa8d11-6409-415e-836b-b7941b66f6e4\") " pod="openstack/cinder-db-sync-5kfhw"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.404382 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aaa8d11-6409-415e-836b-b7941b66f6e4-combined-ca-bundle\") pod \"cinder-db-sync-5kfhw\" (UID: \"4aaa8d11-6409-415e-836b-b7941b66f6e4\") " pod="openstack/cinder-db-sync-5kfhw"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.404407 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-log-httpd\") pod \"ceilometer-0\" (UID: \"c202a904-ccae-4f90-a284-d7e2a3b5e0f7\") " pod="openstack/ceilometer-0"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.404432 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flplq\" (UniqueName: \"kubernetes.io/projected/42e496d3-8d68-48a0-a0ca-058126b200a1-kube-api-access-flplq\") pod \"neutron-db-sync-rvwgc\" (UID: \"42e496d3-8d68-48a0-a0ca-058126b200a1\") " pod="openstack/neutron-db-sync-rvwgc"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.404462 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-scripts\") pod \"ceilometer-0\" (UID: \"c202a904-ccae-4f90-a284-d7e2a3b5e0f7\") " pod="openstack/ceilometer-0"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.404505 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42e496d3-8d68-48a0-a0ca-058126b200a1-combined-ca-bundle\") pod \"neutron-db-sync-rvwgc\" (UID: \"42e496d3-8d68-48a0-a0ca-058126b200a1\") " pod="openstack/neutron-db-sync-rvwgc"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.404526 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-config-data\") pod \"ceilometer-0\" (UID: \"c202a904-ccae-4f90-a284-d7e2a3b5e0f7\") " pod="openstack/ceilometer-0"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.404572 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/42e496d3-8d68-48a0-a0ca-058126b200a1-config\") pod \"neutron-db-sync-rvwgc\" (UID: \"42e496d3-8d68-48a0-a0ca-058126b200a1\") " pod="openstack/neutron-db-sync-rvwgc"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.404610 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c202a904-ccae-4f90-a284-d7e2a3b5e0f7\") " pod="openstack/ceilometer-0"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.404650 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptb7f\" (UniqueName: \"kubernetes.io/projected/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-kube-api-access-ptb7f\") pod \"ceilometer-0\" (UID: \"c202a904-ccae-4f90-a284-d7e2a3b5e0f7\") " pod="openstack/ceilometer-0"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.408032 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4aaa8d11-6409-415e-836b-b7941b66f6e4-etc-machine-id\") pod \"cinder-db-sync-5kfhw\" (UID: \"4aaa8d11-6409-415e-836b-b7941b66f6e4\") " pod="openstack/cinder-db-sync-5kfhw"
Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.408296 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-run-httpd\") pod \"ceilometer-0\" (UID: \"c202a904-ccae-4f90-a284-d7e2a3b5e0f7\") " pod="openstack/ceilometer-0"
pod="openstack/ceilometer-0" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.408642 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-log-httpd\") pod \"ceilometer-0\" (UID: \"c202a904-ccae-4f90-a284-d7e2a3b5e0f7\") " pod="openstack/ceilometer-0" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.413189 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aaa8d11-6409-415e-836b-b7941b66f6e4-combined-ca-bundle\") pod \"cinder-db-sync-5kfhw\" (UID: \"4aaa8d11-6409-415e-836b-b7941b66f6e4\") " pod="openstack/cinder-db-sync-5kfhw" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.417177 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c202a904-ccae-4f90-a284-d7e2a3b5e0f7\") " pod="openstack/ceilometer-0" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.419819 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-config-data\") pod \"ceilometer-0\" (UID: \"c202a904-ccae-4f90-a284-d7e2a3b5e0f7\") " pod="openstack/ceilometer-0" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.420404 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-scripts\") pod \"ceilometer-0\" (UID: \"c202a904-ccae-4f90-a284-d7e2a3b5e0f7\") " pod="openstack/ceilometer-0" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.421791 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42e496d3-8d68-48a0-a0ca-058126b200a1-combined-ca-bundle\") pod \"neutron-db-sync-rvwgc\" (UID: \"42e496d3-8d68-48a0-a0ca-058126b200a1\") " pod="openstack/neutron-db-sync-rvwgc" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.441187 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-mlcmr"] Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.442935 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-mlcmr" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.446446 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4aaa8d11-6409-415e-836b-b7941b66f6e4-db-sync-config-data\") pod \"cinder-db-sync-5kfhw\" (UID: \"4aaa8d11-6409-415e-836b-b7941b66f6e4\") " pod="openstack/cinder-db-sync-5kfhw" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.447130 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4aaa8d11-6409-415e-836b-b7941b66f6e4-scripts\") pod \"cinder-db-sync-5kfhw\" (UID: \"4aaa8d11-6409-415e-836b-b7941b66f6e4\") " pod="openstack/cinder-db-sync-5kfhw" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.449045 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4aaa8d11-6409-415e-836b-b7941b66f6e4-config-data\") pod \"cinder-db-sync-5kfhw\" (UID: \"4aaa8d11-6409-415e-836b-b7941b66f6e4\") " pod="openstack/cinder-db-sync-5kfhw" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.452509 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c202a904-ccae-4f90-a284-d7e2a3b5e0f7\") " pod="openstack/ceilometer-0" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.456191 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwp6d\" (UniqueName: \"kubernetes.io/projected/4aaa8d11-6409-415e-836b-b7941b66f6e4-kube-api-access-fwp6d\") pod \"cinder-db-sync-5kfhw\" (UID: \"4aaa8d11-6409-415e-836b-b7941b66f6e4\") " pod="openstack/cinder-db-sync-5kfhw" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.461887 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptb7f\" (UniqueName: \"kubernetes.io/projected/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-kube-api-access-ptb7f\") pod \"ceilometer-0\" (UID: \"c202a904-ccae-4f90-a284-d7e2a3b5e0f7\") " pod="openstack/ceilometer-0" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.463255 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/42e496d3-8d68-48a0-a0ca-058126b200a1-config\") pod \"neutron-db-sync-rvwgc\" (UID: \"42e496d3-8d68-48a0-a0ca-058126b200a1\") " pod="openstack/neutron-db-sync-rvwgc" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.473037 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flplq\" (UniqueName: \"kubernetes.io/projected/42e496d3-8d68-48a0-a0ca-058126b200a1-kube-api-access-flplq\") pod \"neutron-db-sync-rvwgc\" (UID: \"42e496d3-8d68-48a0-a0ca-058126b200a1\") " pod="openstack/neutron-db-sync-rvwgc" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.509770 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ss2g\" (UniqueName: \"kubernetes.io/projected/ead77939-6823-47d8-83e8-7dc74b841d49-kube-api-access-7ss2g\") pod \"barbican-db-sync-mc8z9\" (UID: \"ead77939-6823-47d8-83e8-7dc74b841d49\") " pod="openstack/barbican-db-sync-mc8z9" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.509873 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ead77939-6823-47d8-83e8-7dc74b841d49-combined-ca-bundle\") pod \"barbican-db-sync-mc8z9\" (UID: \"ead77939-6823-47d8-83e8-7dc74b841d49\") " pod="openstack/barbican-db-sync-mc8z9" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.509910 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ead77939-6823-47d8-83e8-7dc74b841d49-db-sync-config-data\") pod \"barbican-db-sync-mc8z9\" (UID: \"ead77939-6823-47d8-83e8-7dc74b841d49\") " pod="openstack/barbican-db-sync-mc8z9" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.510079 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-rvwgc" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.550871 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-ccdfw"] Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.552221 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ccdfw" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.562162 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.562998 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-82gwc"] Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.565456 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-82gwc" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.565977 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-vs2lc" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.573774 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.575449 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-ccdfw"] Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.586756 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-82gwc"] Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.602590 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-sk5p7" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.611991 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ss2g\" (UniqueName: \"kubernetes.io/projected/ead77939-6823-47d8-83e8-7dc74b841d49-kube-api-access-7ss2g\") pod \"barbican-db-sync-mc8z9\" (UID: \"ead77939-6823-47d8-83e8-7dc74b841d49\") " pod="openstack/barbican-db-sync-mc8z9" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.612080 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ead77939-6823-47d8-83e8-7dc74b841d49-combined-ca-bundle\") pod \"barbican-db-sync-mc8z9\" (UID: \"ead77939-6823-47d8-83e8-7dc74b841d49\") " pod="openstack/barbican-db-sync-mc8z9" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.612110 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ead77939-6823-47d8-83e8-7dc74b841d49-db-sync-config-data\") pod \"barbican-db-sync-mc8z9\" (UID: \"ead77939-6823-47d8-83e8-7dc74b841d49\") " pod="openstack/barbican-db-sync-mc8z9" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.634245 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ead77939-6823-47d8-83e8-7dc74b841d49-db-sync-config-data\") pod \"barbican-db-sync-mc8z9\" (UID: \"ead77939-6823-47d8-83e8-7dc74b841d49\") " pod="openstack/barbican-db-sync-mc8z9" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.647322 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ead77939-6823-47d8-83e8-7dc74b841d49-combined-ca-bundle\") pod \"barbican-db-sync-mc8z9\" (UID: \"ead77939-6823-47d8-83e8-7dc74b841d49\") " pod="openstack/barbican-db-sync-mc8z9" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.656164 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ss2g\" (UniqueName: \"kubernetes.io/projected/ead77939-6823-47d8-83e8-7dc74b841d49-kube-api-access-7ss2g\") pod \"barbican-db-sync-mc8z9\" (UID: \"ead77939-6823-47d8-83e8-7dc74b841d49\") " pod="openstack/barbican-db-sync-mc8z9" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.678177 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.692058 4789 generic.go:334] "Generic (PLEG): container finished" podID="a3d51bcf-493e-46d1-83a7-a861a8a59577" containerID="a6207dd8d2d0ad6b5dca859e615bb3beeaa816c2ef66cbf9255df3545eb9cbda" exitCode=0 Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.692096 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-vjxxv" event={"ID":"a3d51bcf-493e-46d1-83a7-a861a8a59577","Type":"ContainerDied","Data":"a6207dd8d2d0ad6b5dca859e615bb3beeaa816c2ef66cbf9255df3545eb9cbda"} Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.702960 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-5kfhw" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.717378 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3-combined-ca-bundle\") pod \"placement-db-sync-ccdfw\" (UID: \"22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3\") " pod="openstack/placement-db-sync-ccdfw" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.717453 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/339cb6ee-98df-41da-81a4-9aaf77f01cc8-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-82gwc\" (UID: \"339cb6ee-98df-41da-81a4-9aaf77f01cc8\") " pod="openstack/dnsmasq-dns-785d8bcb8c-82gwc" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.717474 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/339cb6ee-98df-41da-81a4-9aaf77f01cc8-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-82gwc\" (UID: \"339cb6ee-98df-41da-81a4-9aaf77f01cc8\") " pod="openstack/dnsmasq-dns-785d8bcb8c-82gwc" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.717498 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/339cb6ee-98df-41da-81a4-9aaf77f01cc8-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-82gwc\" (UID: \"339cb6ee-98df-41da-81a4-9aaf77f01cc8\") " pod="openstack/dnsmasq-dns-785d8bcb8c-82gwc" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.717560 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3-scripts\") pod \"placement-db-sync-ccdfw\" (UID: \"22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3\") " pod="openstack/placement-db-sync-ccdfw" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.717599 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3-logs\") pod \"placement-db-sync-ccdfw\" (UID: \"22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3\") " pod="openstack/placement-db-sync-ccdfw" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.717617 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3-config-data\") pod \"placement-db-sync-ccdfw\" (UID: \"22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3\") " pod="openstack/placement-db-sync-ccdfw" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.717635 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/339cb6ee-98df-41da-81a4-9aaf77f01cc8-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-82gwc\" (UID: \"339cb6ee-98df-41da-81a4-9aaf77f01cc8\") " pod="openstack/dnsmasq-dns-785d8bcb8c-82gwc" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.717676 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v482\" (UniqueName: \"kubernetes.io/projected/22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3-kube-api-access-6v482\") pod 
\"placement-db-sync-ccdfw\" (UID: \"22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3\") " pod="openstack/placement-db-sync-ccdfw" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.717704 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98kbm\" (UniqueName: \"kubernetes.io/projected/339cb6ee-98df-41da-81a4-9aaf77f01cc8-kube-api-access-98kbm\") pod \"dnsmasq-dns-785d8bcb8c-82gwc\" (UID: \"339cb6ee-98df-41da-81a4-9aaf77f01cc8\") " pod="openstack/dnsmasq-dns-785d8bcb8c-82gwc" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.717747 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/339cb6ee-98df-41da-81a4-9aaf77f01cc8-config\") pod \"dnsmasq-dns-785d8bcb8c-82gwc\" (UID: \"339cb6ee-98df-41da-81a4-9aaf77f01cc8\") " pod="openstack/dnsmasq-dns-785d8bcb8c-82gwc" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.755371 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-vjxxv" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.815957 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-mc8z9" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.818675 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3-combined-ca-bundle\") pod \"placement-db-sync-ccdfw\" (UID: \"22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3\") " pod="openstack/placement-db-sync-ccdfw" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.818710 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/339cb6ee-98df-41da-81a4-9aaf77f01cc8-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-82gwc\" (UID: \"339cb6ee-98df-41da-81a4-9aaf77f01cc8\") " pod="openstack/dnsmasq-dns-785d8bcb8c-82gwc" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.818735 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/339cb6ee-98df-41da-81a4-9aaf77f01cc8-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-82gwc\" (UID: \"339cb6ee-98df-41da-81a4-9aaf77f01cc8\") " pod="openstack/dnsmasq-dns-785d8bcb8c-82gwc" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.818762 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/339cb6ee-98df-41da-81a4-9aaf77f01cc8-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-82gwc\" (UID: \"339cb6ee-98df-41da-81a4-9aaf77f01cc8\") " pod="openstack/dnsmasq-dns-785d8bcb8c-82gwc" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.818805 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3-scripts\") pod \"placement-db-sync-ccdfw\" (UID: \"22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3\") " pod="openstack/placement-db-sync-ccdfw" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.818825 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3-logs\") pod \"placement-db-sync-ccdfw\" (UID: \"22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3\") " 
pod="openstack/placement-db-sync-ccdfw" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.818842 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3-config-data\") pod \"placement-db-sync-ccdfw\" (UID: \"22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3\") " pod="openstack/placement-db-sync-ccdfw" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.818860 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/339cb6ee-98df-41da-81a4-9aaf77f01cc8-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-82gwc\" (UID: \"339cb6ee-98df-41da-81a4-9aaf77f01cc8\") " pod="openstack/dnsmasq-dns-785d8bcb8c-82gwc" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.818902 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v482\" (UniqueName: \"kubernetes.io/projected/22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3-kube-api-access-6v482\") pod \"placement-db-sync-ccdfw\" (UID: \"22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3\") " pod="openstack/placement-db-sync-ccdfw" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.818933 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98kbm\" (UniqueName: \"kubernetes.io/projected/339cb6ee-98df-41da-81a4-9aaf77f01cc8-kube-api-access-98kbm\") pod \"dnsmasq-dns-785d8bcb8c-82gwc\" (UID: \"339cb6ee-98df-41da-81a4-9aaf77f01cc8\") " pod="openstack/dnsmasq-dns-785d8bcb8c-82gwc" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.818978 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/339cb6ee-98df-41da-81a4-9aaf77f01cc8-config\") pod \"dnsmasq-dns-785d8bcb8c-82gwc\" (UID: \"339cb6ee-98df-41da-81a4-9aaf77f01cc8\") " pod="openstack/dnsmasq-dns-785d8bcb8c-82gwc" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.819786 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/339cb6ee-98df-41da-81a4-9aaf77f01cc8-config\") pod \"dnsmasq-dns-785d8bcb8c-82gwc\" (UID: \"339cb6ee-98df-41da-81a4-9aaf77f01cc8\") " pod="openstack/dnsmasq-dns-785d8bcb8c-82gwc" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.821004 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3-logs\") pod \"placement-db-sync-ccdfw\" (UID: \"22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3\") " pod="openstack/placement-db-sync-ccdfw" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.821551 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/339cb6ee-98df-41da-81a4-9aaf77f01cc8-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-82gwc\" (UID: \"339cb6ee-98df-41da-81a4-9aaf77f01cc8\") " pod="openstack/dnsmasq-dns-785d8bcb8c-82gwc" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.823204 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/339cb6ee-98df-41da-81a4-9aaf77f01cc8-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-82gwc\" (UID: \"339cb6ee-98df-41da-81a4-9aaf77f01cc8\") " pod="openstack/dnsmasq-dns-785d8bcb8c-82gwc" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.823842 4789 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/339cb6ee-98df-41da-81a4-9aaf77f01cc8-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-82gwc\" (UID: \"339cb6ee-98df-41da-81a4-9aaf77f01cc8\") " pod="openstack/dnsmasq-dns-785d8bcb8c-82gwc" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.823919 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3-combined-ca-bundle\") pod \"placement-db-sync-ccdfw\" (UID: \"22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3\") " pod="openstack/placement-db-sync-ccdfw" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.827460 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3-scripts\") pod \"placement-db-sync-ccdfw\" (UID: \"22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3\") " pod="openstack/placement-db-sync-ccdfw" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.828486 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/339cb6ee-98df-41da-81a4-9aaf77f01cc8-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-82gwc\" (UID: \"339cb6ee-98df-41da-81a4-9aaf77f01cc8\") " pod="openstack/dnsmasq-dns-785d8bcb8c-82gwc" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.831044 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3-config-data\") pod \"placement-db-sync-ccdfw\" (UID: \"22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3\") " pod="openstack/placement-db-sync-ccdfw" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.847401 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98kbm\" (UniqueName: \"kubernetes.io/projected/339cb6ee-98df-41da-81a4-9aaf77f01cc8-kube-api-access-98kbm\") pod \"dnsmasq-dns-785d8bcb8c-82gwc\" (UID: \"339cb6ee-98df-41da-81a4-9aaf77f01cc8\") " pod="openstack/dnsmasq-dns-785d8bcb8c-82gwc" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.847466 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v482\" (UniqueName: \"kubernetes.io/projected/22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3-kube-api-access-6v482\") pod \"placement-db-sync-ccdfw\" (UID: \"22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3\") " pod="openstack/placement-db-sync-ccdfw" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.884637 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ccdfw" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.892626 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-82gwc" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.919945 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3d51bcf-493e-46d1-83a7-a861a8a59577-config\") pod \"a3d51bcf-493e-46d1-83a7-a861a8a59577\" (UID: \"a3d51bcf-493e-46d1-83a7-a861a8a59577\") " Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.920101 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3d51bcf-493e-46d1-83a7-a861a8a59577-ovsdbserver-nb\") pod \"a3d51bcf-493e-46d1-83a7-a861a8a59577\" (UID: \"a3d51bcf-493e-46d1-83a7-a861a8a59577\") " Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.920191 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3d51bcf-493e-46d1-83a7-a861a8a59577-ovsdbserver-sb\") pod \"a3d51bcf-493e-46d1-83a7-a861a8a59577\" (UID: \"a3d51bcf-493e-46d1-83a7-a861a8a59577\") " Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.920263 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3d51bcf-493e-46d1-83a7-a861a8a59577-dns-swift-storage-0\") pod \"a3d51bcf-493e-46d1-83a7-a861a8a59577\" (UID: \"a3d51bcf-493e-46d1-83a7-a861a8a59577\") " Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.920298 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3d51bcf-493e-46d1-83a7-a861a8a59577-dns-svc\") pod \"a3d51bcf-493e-46d1-83a7-a861a8a59577\" (UID: \"a3d51bcf-493e-46d1-83a7-a861a8a59577\") " Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.920387 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xz6rj\" (UniqueName: \"kubernetes.io/projected/a3d51bcf-493e-46d1-83a7-a861a8a59577-kube-api-access-xz6rj\") pod \"a3d51bcf-493e-46d1-83a7-a861a8a59577\" (UID: \"a3d51bcf-493e-46d1-83a7-a861a8a59577\") " Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.924976 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3d51bcf-493e-46d1-83a7-a861a8a59577-kube-api-access-xz6rj" (OuterVolumeSpecName: "kube-api-access-xz6rj") pod "a3d51bcf-493e-46d1-83a7-a861a8a59577" (UID: "a3d51bcf-493e-46d1-83a7-a861a8a59577"). InnerVolumeSpecName "kube-api-access-xz6rj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.982898 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3d51bcf-493e-46d1-83a7-a861a8a59577-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a3d51bcf-493e-46d1-83a7-a861a8a59577" (UID: "a3d51bcf-493e-46d1-83a7-a861a8a59577"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.986891 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3d51bcf-493e-46d1-83a7-a861a8a59577-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a3d51bcf-493e-46d1-83a7-a861a8a59577" (UID: "a3d51bcf-493e-46d1-83a7-a861a8a59577"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.993596 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3d51bcf-493e-46d1-83a7-a861a8a59577-config" (OuterVolumeSpecName: "config") pod "a3d51bcf-493e-46d1-83a7-a861a8a59577" (UID: "a3d51bcf-493e-46d1-83a7-a861a8a59577"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:39:19 crc kubenswrapper[4789]: I0202 21:39:19.995626 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3d51bcf-493e-46d1-83a7-a861a8a59577-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a3d51bcf-493e-46d1-83a7-a861a8a59577" (UID: "a3d51bcf-493e-46d1-83a7-a861a8a59577"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.008077 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3d51bcf-493e-46d1-83a7-a861a8a59577-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a3d51bcf-493e-46d1-83a7-a861a8a59577" (UID: "a3d51bcf-493e-46d1-83a7-a861a8a59577"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.026214 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3d51bcf-493e-46d1-83a7-a861a8a59577-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.026247 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3d51bcf-493e-46d1-83a7-a861a8a59577-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.026260 4789 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3d51bcf-493e-46d1-83a7-a861a8a59577-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.026269 4789 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3d51bcf-493e-46d1-83a7-a861a8a59577-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.026278 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xz6rj\" (UniqueName: \"kubernetes.io/projected/a3d51bcf-493e-46d1-83a7-a861a8a59577-kube-api-access-xz6rj\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.026287 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3d51bcf-493e-46d1-83a7-a861a8a59577-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.079482 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-mlcmr"] Feb 02 21:39:20 crc kubenswrapper[4789]: W0202 21:39:20.088036 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7a402ea_2333_422a_a02d_b2ad98a989a4.slice/crio-04ff1f543a56e34077c5d740e659f628754a268094aaf4ed7b9d2d2dbabcaf72 WatchSource:0}: Error finding container 04ff1f543a56e34077c5d740e659f628754a268094aaf4ed7b9d2d2dbabcaf72: Status 404 returned error can't 
find the container with id 04ff1f543a56e34077c5d740e659f628754a268094aaf4ed7b9d2d2dbabcaf72 Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.114332 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 21:39:20 crc kubenswrapper[4789]: E0202 21:39:20.114884 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3d51bcf-493e-46d1-83a7-a861a8a59577" containerName="dnsmasq-dns" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.114906 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3d51bcf-493e-46d1-83a7-a861a8a59577" containerName="dnsmasq-dns" Feb 02 21:39:20 crc kubenswrapper[4789]: E0202 21:39:20.114918 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3d51bcf-493e-46d1-83a7-a861a8a59577" containerName="init" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.114924 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3d51bcf-493e-46d1-83a7-a861a8a59577" containerName="init" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.115068 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3d51bcf-493e-46d1-83a7-a861a8a59577" containerName="dnsmasq-dns" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.132759 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.144474 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-z2f8j" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.144708 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.144830 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.149781 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.154454 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.200639 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-rvwgc"] Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.211148 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.212571 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.216647 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.216828 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.224205 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.230163 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"78339639-9f0b-4076-a7a3-160f9ae94bdd\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.230259 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/78339639-9f0b-4076-a7a3-160f9ae94bdd-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"78339639-9f0b-4076-a7a3-160f9ae94bdd\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.230307 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78339639-9f0b-4076-a7a3-160f9ae94bdd-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"78339639-9f0b-4076-a7a3-160f9ae94bdd\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.230364 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78339639-9f0b-4076-a7a3-160f9ae94bdd-config-data\") pod \"glance-default-external-api-0\" (UID: \"78339639-9f0b-4076-a7a3-160f9ae94bdd\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.230653 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfczd\" (UniqueName: \"kubernetes.io/projected/78339639-9f0b-4076-a7a3-160f9ae94bdd-kube-api-access-mfczd\") pod \"glance-default-external-api-0\" (UID: \"78339639-9f0b-4076-a7a3-160f9ae94bdd\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.230698 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78339639-9f0b-4076-a7a3-160f9ae94bdd-scripts\") pod \"glance-default-external-api-0\" (UID: \"78339639-9f0b-4076-a7a3-160f9ae94bdd\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.230719 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78339639-9f0b-4076-a7a3-160f9ae94bdd-logs\") pod \"glance-default-external-api-0\" (UID: \"78339639-9f0b-4076-a7a3-160f9ae94bdd\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.230742 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78339639-9f0b-4076-a7a3-160f9ae94bdd-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"78339639-9f0b-4076-a7a3-160f9ae94bdd\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.305183 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-sk5p7"] Feb 02 21:39:20 crc kubenswrapper[4789]: W0202 21:39:20.313217 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3168dcf5_60b0_4c18_bf26_4b480e7c0f05.slice/crio-deff332c7b0c6a980f65a09e2a8c4a3789365aa5ff107ac00c2535568bfcf312 WatchSource:0}: Error finding container deff332c7b0c6a980f65a09e2a8c4a3789365aa5ff107ac00c2535568bfcf312: Status 404 returned error can't find the container with id deff332c7b0c6a980f65a09e2a8c4a3789365aa5ff107ac00c2535568bfcf312 Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.335725 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78339639-9f0b-4076-a7a3-160f9ae94bdd-scripts\") pod \"glance-default-external-api-0\" (UID: \"78339639-9f0b-4076-a7a3-160f9ae94bdd\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.335883 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4eb9ddc9-027a-434c-bd55-383f7fa4edae-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4eb9ddc9-027a-434c-bd55-383f7fa4edae\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.335916 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78339639-9f0b-4076-a7a3-160f9ae94bdd-logs\") pod \"glance-default-external-api-0\" (UID: \"78339639-9f0b-4076-a7a3-160f9ae94bdd\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.335948 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78339639-9f0b-4076-a7a3-160f9ae94bdd-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"78339639-9f0b-4076-a7a3-160f9ae94bdd\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.336015 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4eb9ddc9-027a-434c-bd55-383f7fa4edae-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4eb9ddc9-027a-434c-bd55-383f7fa4edae\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.336115 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"78339639-9f0b-4076-a7a3-160f9ae94bdd\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.336151 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4eb9ddc9-027a-434c-bd55-383f7fa4edae-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"4eb9ddc9-027a-434c-bd55-383f7fa4edae\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.336197 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4eb9ddc9-027a-434c-bd55-383f7fa4edae-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4eb9ddc9-027a-434c-bd55-383f7fa4edae\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.336245 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"4eb9ddc9-027a-434c-bd55-383f7fa4edae\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.336274 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/78339639-9f0b-4076-a7a3-160f9ae94bdd-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"78339639-9f0b-4076-a7a3-160f9ae94bdd\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.336308 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrghl\" (UniqueName: \"kubernetes.io/projected/4eb9ddc9-027a-434c-bd55-383f7fa4edae-kube-api-access-mrghl\") pod \"glance-default-internal-api-0\" (UID: \"4eb9ddc9-027a-434c-bd55-383f7fa4edae\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.336656 4789 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"78339639-9f0b-4076-a7a3-160f9ae94bdd\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.337420 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/78339639-9f0b-4076-a7a3-160f9ae94bdd-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"78339639-9f0b-4076-a7a3-160f9ae94bdd\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.337532 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78339639-9f0b-4076-a7a3-160f9ae94bdd-logs\") pod \"glance-default-external-api-0\" (UID: \"78339639-9f0b-4076-a7a3-160f9ae94bdd\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.336718 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4eb9ddc9-027a-434c-bd55-383f7fa4edae-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4eb9ddc9-027a-434c-bd55-383f7fa4edae\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.339477 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eb9ddc9-027a-434c-bd55-383f7fa4edae-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"4eb9ddc9-027a-434c-bd55-383f7fa4edae\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.339549 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78339639-9f0b-4076-a7a3-160f9ae94bdd-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"78339639-9f0b-4076-a7a3-160f9ae94bdd\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.339737 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78339639-9f0b-4076-a7a3-160f9ae94bdd-config-data\") pod \"glance-default-external-api-0\" (UID: \"78339639-9f0b-4076-a7a3-160f9ae94bdd\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.339776 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfczd\" (UniqueName: \"kubernetes.io/projected/78339639-9f0b-4076-a7a3-160f9ae94bdd-kube-api-access-mfczd\") pod \"glance-default-external-api-0\" (UID: \"78339639-9f0b-4076-a7a3-160f9ae94bdd\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.341633 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78339639-9f0b-4076-a7a3-160f9ae94bdd-scripts\") pod \"glance-default-external-api-0\" (UID: \"78339639-9f0b-4076-a7a3-160f9ae94bdd\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.351112 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78339639-9f0b-4076-a7a3-160f9ae94bdd-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"78339639-9f0b-4076-a7a3-160f9ae94bdd\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.353354 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78339639-9f0b-4076-a7a3-160f9ae94bdd-config-data\") pod \"glance-default-external-api-0\" (UID: \"78339639-9f0b-4076-a7a3-160f9ae94bdd\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.354073 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78339639-9f0b-4076-a7a3-160f9ae94bdd-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"78339639-9f0b-4076-a7a3-160f9ae94bdd\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.364218 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfczd\" (UniqueName: \"kubernetes.io/projected/78339639-9f0b-4076-a7a3-160f9ae94bdd-kube-api-access-mfczd\") pod \"glance-default-external-api-0\" (UID: \"78339639-9f0b-4076-a7a3-160f9ae94bdd\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.365252 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"78339639-9f0b-4076-a7a3-160f9ae94bdd\") " 
pod="openstack/glance-default-external-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.441218 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4eb9ddc9-027a-434c-bd55-383f7fa4edae-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4eb9ddc9-027a-434c-bd55-383f7fa4edae\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.441273 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4eb9ddc9-027a-434c-bd55-383f7fa4edae-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4eb9ddc9-027a-434c-bd55-383f7fa4edae\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.441326 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4eb9ddc9-027a-434c-bd55-383f7fa4edae-logs\") pod \"glance-default-internal-api-0\" (UID: \"4eb9ddc9-027a-434c-bd55-383f7fa4edae\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.441352 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4eb9ddc9-027a-434c-bd55-383f7fa4edae-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4eb9ddc9-027a-434c-bd55-383f7fa4edae\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.441378 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"4eb9ddc9-027a-434c-bd55-383f7fa4edae\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.441405 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrghl\" (UniqueName: \"kubernetes.io/projected/4eb9ddc9-027a-434c-bd55-383f7fa4edae-kube-api-access-mrghl\") pod \"glance-default-internal-api-0\" (UID: \"4eb9ddc9-027a-434c-bd55-383f7fa4edae\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.441419 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4eb9ddc9-027a-434c-bd55-383f7fa4edae-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4eb9ddc9-027a-434c-bd55-383f7fa4edae\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.441436 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eb9ddc9-027a-434c-bd55-383f7fa4edae-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4eb9ddc9-027a-434c-bd55-383f7fa4edae\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.444278 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.449699 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eb9ddc9-027a-434c-bd55-383f7fa4edae-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4eb9ddc9-027a-434c-bd55-383f7fa4edae\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.449827 4789 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"4eb9ddc9-027a-434c-bd55-383f7fa4edae\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.450414 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4eb9ddc9-027a-434c-bd55-383f7fa4edae-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4eb9ddc9-027a-434c-bd55-383f7fa4edae\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.451919 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4eb9ddc9-027a-434c-bd55-383f7fa4edae-logs\") pod \"glance-default-internal-api-0\" (UID: \"4eb9ddc9-027a-434c-bd55-383f7fa4edae\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.460180 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4eb9ddc9-027a-434c-bd55-383f7fa4edae-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4eb9ddc9-027a-434c-bd55-383f7fa4edae\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.460768 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.480606 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4eb9ddc9-027a-434c-bd55-383f7fa4edae-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4eb9ddc9-027a-434c-bd55-383f7fa4edae\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.489130 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4eb9ddc9-027a-434c-bd55-383f7fa4edae-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4eb9ddc9-027a-434c-bd55-383f7fa4edae\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.491812 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrghl\" (UniqueName: \"kubernetes.io/projected/4eb9ddc9-027a-434c-bd55-383f7fa4edae-kube-api-access-mrghl\") pod \"glance-default-internal-api-0\" (UID: \"4eb9ddc9-027a-434c-bd55-383f7fa4edae\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.548760 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-5kfhw"] Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.564683 4789 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"4eb9ddc9-027a-434c-bd55-383f7fa4edae\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.644459 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-mc8z9"] Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.649724 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-82gwc"] Feb 02 21:39:20 crc kubenswrapper[4789]: W0202 21:39:20.658120 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22dd9cf7_e9fb_443a_a7d2_46f72c6ee5e3.slice/crio-ca902b7ce030c627fbbd3a801e09b4aff30999be5dce1951ff2a725b75cbf2e9 WatchSource:0}: Error finding container ca902b7ce030c627fbbd3a801e09b4aff30999be5dce1951ff2a725b75cbf2e9: Status 404 returned error can't find the container with id ca902b7ce030c627fbbd3a801e09b4aff30999be5dce1951ff2a725b75cbf2e9 Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.686711 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-ccdfw"] Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.721083 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-rvwgc" event={"ID":"42e496d3-8d68-48a0-a0ca-058126b200a1","Type":"ContainerStarted","Data":"9875869b174e349daf124d55e4a4702547bf8f56e47f23da592c46f1d55a3cc2"} Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.721270 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-rvwgc" event={"ID":"42e496d3-8d68-48a0-a0ca-058126b200a1","Type":"ContainerStarted","Data":"48e775b185e695e7d74ccde962e8346508b1ca678fd0686a16001e5a5b396a00"} Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.728386 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c202a904-ccae-4f90-a284-d7e2a3b5e0f7","Type":"ContainerStarted","Data":"faf0413633a571a2aa2142ad2d16436da04925ec40c209f88a1ab83546f91c60"} Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.730107 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ccdfw" event={"ID":"22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3","Type":"ContainerStarted","Data":"ca902b7ce030c627fbbd3a801e09b4aff30999be5dce1951ff2a725b75cbf2e9"} Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.733168 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-vjxxv" event={"ID":"a3d51bcf-493e-46d1-83a7-a861a8a59577","Type":"ContainerDied","Data":"330d84511b202d7bcaddefe5dfdb83b937c00f0b7484471e40de35219a7e5b3e"} Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.733224 4789 scope.go:117] "RemoveContainer" containerID="a6207dd8d2d0ad6b5dca859e615bb3beeaa816c2ef66cbf9255df3545eb9cbda" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.733189 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-vjxxv" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.735402 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5kfhw" event={"ID":"4aaa8d11-6409-415e-836b-b7941b66f6e4","Type":"ContainerStarted","Data":"1026e2a88fba3f933fc4bae38fd4bccbeec97e931acd29f4a33b5a03c4925d7a"} Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.737201 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-mc8z9" event={"ID":"ead77939-6823-47d8-83e8-7dc74b841d49","Type":"ContainerStarted","Data":"02f89e8e48d36b6232eefe92be04f177426cd8f2102e0cb54d7d6b243f49e935"} Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.739216 4789 generic.go:334] "Generic (PLEG): container finished" podID="c7a402ea-2333-422a-a02d-b2ad98a989a4" containerID="1da8b8caf1f2d185eb67847e47b05a216297ca7e86fbd90a538653927b97b50d" exitCode=0 Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.739288 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-mlcmr" event={"ID":"c7a402ea-2333-422a-a02d-b2ad98a989a4","Type":"ContainerDied","Data":"1da8b8caf1f2d185eb67847e47b05a216297ca7e86fbd90a538653927b97b50d"} Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.739307 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-mlcmr" event={"ID":"c7a402ea-2333-422a-a02d-b2ad98a989a4","Type":"ContainerStarted","Data":"04ff1f543a56e34077c5d740e659f628754a268094aaf4ed7b9d2d2dbabcaf72"} Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.753804 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-sk5p7" event={"ID":"3168dcf5-60b0-4c18-bf26-4b480e7c0f05","Type":"ContainerStarted","Data":"2f184873817571d3c96a8961e93b31393468ea2e04736452d72e1bfb0963324c"} Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.753869 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-sk5p7" event={"ID":"3168dcf5-60b0-4c18-bf26-4b480e7c0f05","Type":"ContainerStarted","Data":"deff332c7b0c6a980f65a09e2a8c4a3789365aa5ff107ac00c2535568bfcf312"} Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.766909 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-82gwc" event={"ID":"339cb6ee-98df-41da-81a4-9aaf77f01cc8","Type":"ContainerStarted","Data":"a16a0210247db362364a05313d2bd2879b0e8ae5978cb30d40ff12212e41f138"} Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.771450 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-rvwgc" podStartSLOduration=1.771433009 podStartE2EDuration="1.771433009s" podCreationTimestamp="2026-02-02 21:39:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:39:20.75303867 +0000 UTC m=+1181.048063689" watchObservedRunningTime="2026-02-02 21:39:20.771433009 +0000 UTC m=+1181.066458028" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.785992 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.788949 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-vjxxv"] Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.802159 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-vjxxv"] Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.827887 4789 scope.go:117] "RemoveContainer" containerID="5aa5a97dd9d445e7950c2f5fde861f59476f37b5ef96aec01ee3db928f31ac72" Feb 02 21:39:20 crc kubenswrapper[4789]: I0202 21:39:20.839691 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-sk5p7" podStartSLOduration=2.839671065 podStartE2EDuration="2.839671065s" podCreationTimestamp="2026-02-02 21:39:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:39:20.82425611 +0000 UTC m=+1181.119281129" watchObservedRunningTime="2026-02-02 21:39:20.839671065 +0000 UTC m=+1181.134696074" Feb 02 21:39:21 crc kubenswrapper[4789]: I0202 21:39:21.207949 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 21:39:21 crc kubenswrapper[4789]: W0202 21:39:21.225404 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78339639_9f0b_4076_a7a3_160f9ae94bdd.slice/crio-c34453ca10f37397405465ff3016d82a54f1756c5a9b328cf7262f5fdd0402b0 WatchSource:0}: Error finding container c34453ca10f37397405465ff3016d82a54f1756c5a9b328cf7262f5fdd0402b0: Status 404 returned error can't find the container with id c34453ca10f37397405465ff3016d82a54f1756c5a9b328cf7262f5fdd0402b0 Feb 02 21:39:21 crc kubenswrapper[4789]: I0202 21:39:21.260126 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-mlcmr" Feb 02 21:39:21 crc kubenswrapper[4789]: I0202 21:39:21.369823 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7a402ea-2333-422a-a02d-b2ad98a989a4-ovsdbserver-nb\") pod \"c7a402ea-2333-422a-a02d-b2ad98a989a4\" (UID: \"c7a402ea-2333-422a-a02d-b2ad98a989a4\") " Feb 02 21:39:21 crc kubenswrapper[4789]: I0202 21:39:21.370079 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7a402ea-2333-422a-a02d-b2ad98a989a4-config\") pod \"c7a402ea-2333-422a-a02d-b2ad98a989a4\" (UID: \"c7a402ea-2333-422a-a02d-b2ad98a989a4\") " Feb 02 21:39:21 crc kubenswrapper[4789]: I0202 21:39:21.370125 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7a402ea-2333-422a-a02d-b2ad98a989a4-dns-svc\") pod \"c7a402ea-2333-422a-a02d-b2ad98a989a4\" (UID: \"c7a402ea-2333-422a-a02d-b2ad98a989a4\") " Feb 02 21:39:21 crc kubenswrapper[4789]: I0202 21:39:21.370144 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7a402ea-2333-422a-a02d-b2ad98a989a4-ovsdbserver-sb\") pod \"c7a402ea-2333-422a-a02d-b2ad98a989a4\" (UID: \"c7a402ea-2333-422a-a02d-b2ad98a989a4\") " Feb 02 21:39:21 crc kubenswrapper[4789]: I0202 21:39:21.370225 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c7a402ea-2333-422a-a02d-b2ad98a989a4-dns-swift-storage-0\") pod \"c7a402ea-2333-422a-a02d-b2ad98a989a4\" (UID: \"c7a402ea-2333-422a-a02d-b2ad98a989a4\") " Feb 02 21:39:21 crc kubenswrapper[4789]: I0202 21:39:21.370835 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfvnm\" (UniqueName: \"kubernetes.io/projected/c7a402ea-2333-422a-a02d-b2ad98a989a4-kube-api-access-rfvnm\") pod \"c7a402ea-2333-422a-a02d-b2ad98a989a4\" (UID: \"c7a402ea-2333-422a-a02d-b2ad98a989a4\") " Feb 02 21:39:21 crc kubenswrapper[4789]: I0202 21:39:21.396548 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7a402ea-2333-422a-a02d-b2ad98a989a4-kube-api-access-rfvnm" (OuterVolumeSpecName: "kube-api-access-rfvnm") pod "c7a402ea-2333-422a-a02d-b2ad98a989a4" (UID: "c7a402ea-2333-422a-a02d-b2ad98a989a4"). InnerVolumeSpecName "kube-api-access-rfvnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:39:21 crc kubenswrapper[4789]: I0202 21:39:21.408718 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7a402ea-2333-422a-a02d-b2ad98a989a4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c7a402ea-2333-422a-a02d-b2ad98a989a4" (UID: "c7a402ea-2333-422a-a02d-b2ad98a989a4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:39:21 crc kubenswrapper[4789]: I0202 21:39:21.417031 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7a402ea-2333-422a-a02d-b2ad98a989a4-config" (OuterVolumeSpecName: "config") pod "c7a402ea-2333-422a-a02d-b2ad98a989a4" (UID: "c7a402ea-2333-422a-a02d-b2ad98a989a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:39:21 crc kubenswrapper[4789]: I0202 21:39:21.428837 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7a402ea-2333-422a-a02d-b2ad98a989a4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c7a402ea-2333-422a-a02d-b2ad98a989a4" (UID: "c7a402ea-2333-422a-a02d-b2ad98a989a4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:39:21 crc kubenswrapper[4789]: I0202 21:39:21.443104 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7a402ea-2333-422a-a02d-b2ad98a989a4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c7a402ea-2333-422a-a02d-b2ad98a989a4" (UID: "c7a402ea-2333-422a-a02d-b2ad98a989a4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:39:21 crc kubenswrapper[4789]: I0202 21:39:21.454782 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7a402ea-2333-422a-a02d-b2ad98a989a4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c7a402ea-2333-422a-a02d-b2ad98a989a4" (UID: "c7a402ea-2333-422a-a02d-b2ad98a989a4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:39:21 crc kubenswrapper[4789]: I0202 21:39:21.478234 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 21:39:21 crc kubenswrapper[4789]: I0202 21:39:21.479300 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7a402ea-2333-422a-a02d-b2ad98a989a4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:21 crc kubenswrapper[4789]: I0202 21:39:21.479321 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7a402ea-2333-422a-a02d-b2ad98a989a4-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:21 crc kubenswrapper[4789]: I0202 21:39:21.479332 4789 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7a402ea-2333-422a-a02d-b2ad98a989a4-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:21 crc kubenswrapper[4789]: I0202 21:39:21.479340 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7a402ea-2333-422a-a02d-b2ad98a989a4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:21 crc kubenswrapper[4789]: I0202 21:39:21.479348 4789 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c7a402ea-2333-422a-a02d-b2ad98a989a4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:21 crc kubenswrapper[4789]: I0202 21:39:21.479357 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfvnm\" (UniqueName: \"kubernetes.io/projected/c7a402ea-2333-422a-a02d-b2ad98a989a4-kube-api-access-rfvnm\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:21 crc kubenswrapper[4789]: I0202 21:39:21.798979 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4eb9ddc9-027a-434c-bd55-383f7fa4edae","Type":"ContainerStarted","Data":"e2dac4b3a553e9f947c27172c6a59d84b2752d8a2727e5d090f8cfc17cfa6bf6"} Feb 02 21:39:21 crc kubenswrapper[4789]: I0202 21:39:21.815383 4789 
generic.go:334] "Generic (PLEG): container finished" podID="339cb6ee-98df-41da-81a4-9aaf77f01cc8" containerID="52c73b93b06db75c8b0ee660bb62854833f9f64f18b216c0fc8214cf979cf14a" exitCode=0 Feb 02 21:39:21 crc kubenswrapper[4789]: I0202 21:39:21.815482 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-82gwc" event={"ID":"339cb6ee-98df-41da-81a4-9aaf77f01cc8","Type":"ContainerDied","Data":"52c73b93b06db75c8b0ee660bb62854833f9f64f18b216c0fc8214cf979cf14a"} Feb 02 21:39:21 crc kubenswrapper[4789]: I0202 21:39:21.827608 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-mlcmr" event={"ID":"c7a402ea-2333-422a-a02d-b2ad98a989a4","Type":"ContainerDied","Data":"04ff1f543a56e34077c5d740e659f628754a268094aaf4ed7b9d2d2dbabcaf72"} Feb 02 21:39:21 crc kubenswrapper[4789]: I0202 21:39:21.827674 4789 scope.go:117] "RemoveContainer" containerID="1da8b8caf1f2d185eb67847e47b05a216297ca7e86fbd90a538653927b97b50d" Feb 02 21:39:21 crc kubenswrapper[4789]: I0202 21:39:21.827801 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-mlcmr" Feb 02 21:39:21 crc kubenswrapper[4789]: I0202 21:39:21.846682 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"78339639-9f0b-4076-a7a3-160f9ae94bdd","Type":"ContainerStarted","Data":"c34453ca10f37397405465ff3016d82a54f1756c5a9b328cf7262f5fdd0402b0"} Feb 02 21:39:21 crc kubenswrapper[4789]: I0202 21:39:21.959391 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 21:39:21 crc kubenswrapper[4789]: I0202 21:39:21.963728 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-mlcmr"] Feb 02 21:39:21 crc kubenswrapper[4789]: I0202 21:39:21.977158 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-mlcmr"] Feb 02 21:39:21 crc kubenswrapper[4789]: I0202 21:39:21.985169 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 21:39:22 crc kubenswrapper[4789]: I0202 21:39:22.021910 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 21:39:22 crc kubenswrapper[4789]: I0202 21:39:22.437370 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3d51bcf-493e-46d1-83a7-a861a8a59577" path="/var/lib/kubelet/pods/a3d51bcf-493e-46d1-83a7-a861a8a59577/volumes" Feb 02 21:39:22 crc kubenswrapper[4789]: I0202 21:39:22.438602 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7a402ea-2333-422a-a02d-b2ad98a989a4" path="/var/lib/kubelet/pods/c7a402ea-2333-422a-a02d-b2ad98a989a4/volumes" Feb 02 21:39:22 crc kubenswrapper[4789]: I0202 21:39:22.870432 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"78339639-9f0b-4076-a7a3-160f9ae94bdd","Type":"ContainerStarted","Data":"a13bfd328e4a360e1b6a8012c4e7d43e622fdf2fdf250c819295fba4183c86ff"} Feb 02 21:39:22 crc kubenswrapper[4789]: I0202 21:39:22.872350 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4eb9ddc9-027a-434c-bd55-383f7fa4edae","Type":"ContainerStarted","Data":"c9378ac28c6b7682c73f513d6ebc6864e1731b3791beee6ad343536b3246f907"} Feb 02 21:39:22 crc kubenswrapper[4789]: I0202 21:39:22.875358 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-785d8bcb8c-82gwc" event={"ID":"339cb6ee-98df-41da-81a4-9aaf77f01cc8","Type":"ContainerStarted","Data":"c592964e1b36e9ddb3659f00af59f06d0458c5e38247a3a727d5a8797b62b739"} Feb 02 21:39:22 crc kubenswrapper[4789]: I0202 21:39:22.875863 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-82gwc" Feb 02 21:39:22 crc kubenswrapper[4789]: I0202 21:39:22.904475 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-82gwc" podStartSLOduration=3.904457609 podStartE2EDuration="3.904457609s" podCreationTimestamp="2026-02-02 21:39:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:39:22.90059404 +0000 UTC m=+1183.195619059" watchObservedRunningTime="2026-02-02 21:39:22.904457609 +0000 UTC m=+1183.199482628" Feb 02 21:39:23 crc kubenswrapper[4789]: I0202 21:39:23.888906 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"78339639-9f0b-4076-a7a3-160f9ae94bdd","Type":"ContainerStarted","Data":"5ef3e6d95e0831ce635cb0a6eaae6beb69438b63431338438baea5c2aa70bd8b"} Feb 02 21:39:23 crc kubenswrapper[4789]: I0202 21:39:23.889259 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="78339639-9f0b-4076-a7a3-160f9ae94bdd" containerName="glance-log" containerID="cri-o://a13bfd328e4a360e1b6a8012c4e7d43e622fdf2fdf250c819295fba4183c86ff" gracePeriod=30 Feb 02 21:39:23 crc kubenswrapper[4789]: I0202 21:39:23.889414 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="78339639-9f0b-4076-a7a3-160f9ae94bdd" containerName="glance-httpd" containerID="cri-o://5ef3e6d95e0831ce635cb0a6eaae6beb69438b63431338438baea5c2aa70bd8b" gracePeriod=30 Feb 02 21:39:23 crc kubenswrapper[4789]: I0202 21:39:23.900791 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4eb9ddc9-027a-434c-bd55-383f7fa4edae","Type":"ContainerStarted","Data":"885d9fb986bd72b653971db46b5836f8636d9d67168d8a3afe7fdb801998e9d6"} Feb 02 21:39:23 crc kubenswrapper[4789]: I0202 21:39:23.914912 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.9148942909999995 podStartE2EDuration="4.914894291s" podCreationTimestamp="2026-02-02 21:39:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:39:23.913002187 +0000 UTC m=+1184.208027206" watchObservedRunningTime="2026-02-02 21:39:23.914894291 +0000 UTC m=+1184.209919310" Feb 02 21:39:24 crc kubenswrapper[4789]: I0202 21:39:24.954174 4789 generic.go:334] "Generic (PLEG): container finished" podID="78339639-9f0b-4076-a7a3-160f9ae94bdd" containerID="5ef3e6d95e0831ce635cb0a6eaae6beb69438b63431338438baea5c2aa70bd8b" exitCode=0 Feb 02 21:39:24 crc kubenswrapper[4789]: I0202 21:39:24.954960 4789 generic.go:334] "Generic (PLEG): container finished" podID="78339639-9f0b-4076-a7a3-160f9ae94bdd" containerID="a13bfd328e4a360e1b6a8012c4e7d43e622fdf2fdf250c819295fba4183c86ff" exitCode=143 Feb 02 21:39:24 crc kubenswrapper[4789]: I0202 21:39:24.954211 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"78339639-9f0b-4076-a7a3-160f9ae94bdd","Type":"ContainerDied","Data":"5ef3e6d95e0831ce635cb0a6eaae6beb69438b63431338438baea5c2aa70bd8b"} Feb 02 21:39:24 crc kubenswrapper[4789]: I0202 21:39:24.955040 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"78339639-9f0b-4076-a7a3-160f9ae94bdd","Type":"ContainerDied","Data":"a13bfd328e4a360e1b6a8012c4e7d43e622fdf2fdf250c819295fba4183c86ff"} Feb 02 21:39:24 crc kubenswrapper[4789]: I0202 21:39:24.955145 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4eb9ddc9-027a-434c-bd55-383f7fa4edae" containerName="glance-log" containerID="cri-o://c9378ac28c6b7682c73f513d6ebc6864e1731b3791beee6ad343536b3246f907" gracePeriod=30 Feb 02 21:39:24 crc kubenswrapper[4789]: I0202 21:39:24.955402 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4eb9ddc9-027a-434c-bd55-383f7fa4edae" containerName="glance-httpd" containerID="cri-o://885d9fb986bd72b653971db46b5836f8636d9d67168d8a3afe7fdb801998e9d6" gracePeriod=30 Feb 02 21:39:24 crc kubenswrapper[4789]: I0202 21:39:24.977145 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.977129615 podStartE2EDuration="5.977129615s" podCreationTimestamp="2026-02-02 21:39:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:39:24.97484617 +0000 UTC m=+1185.269871189" watchObservedRunningTime="2026-02-02 21:39:24.977129615 +0000 UTC m=+1185.272154634" Feb 02 21:39:25 crc kubenswrapper[4789]: I0202 21:39:25.969179 4789 generic.go:334] "Generic (PLEG): container finished" podID="3168dcf5-60b0-4c18-bf26-4b480e7c0f05" containerID="2f184873817571d3c96a8961e93b31393468ea2e04736452d72e1bfb0963324c" exitCode=0 Feb 02 21:39:25 crc kubenswrapper[4789]: I0202 21:39:25.969349 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-sk5p7" event={"ID":"3168dcf5-60b0-4c18-bf26-4b480e7c0f05","Type":"ContainerDied","Data":"2f184873817571d3c96a8961e93b31393468ea2e04736452d72e1bfb0963324c"} Feb 02 21:39:25 crc kubenswrapper[4789]: I0202 21:39:25.972292 4789 generic.go:334] "Generic (PLEG): container finished" podID="4eb9ddc9-027a-434c-bd55-383f7fa4edae" containerID="885d9fb986bd72b653971db46b5836f8636d9d67168d8a3afe7fdb801998e9d6" exitCode=0 Feb 02 21:39:25 crc kubenswrapper[4789]: I0202 21:39:25.972313 4789 generic.go:334] "Generic (PLEG): container finished" podID="4eb9ddc9-027a-434c-bd55-383f7fa4edae" containerID="c9378ac28c6b7682c73f513d6ebc6864e1731b3791beee6ad343536b3246f907" exitCode=143 Feb 02 21:39:25 crc kubenswrapper[4789]: I0202 21:39:25.972330 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4eb9ddc9-027a-434c-bd55-383f7fa4edae","Type":"ContainerDied","Data":"885d9fb986bd72b653971db46b5836f8636d9d67168d8a3afe7fdb801998e9d6"} Feb 02 21:39:25 crc kubenswrapper[4789]: I0202 21:39:25.972352 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4eb9ddc9-027a-434c-bd55-383f7fa4edae","Type":"ContainerDied","Data":"c9378ac28c6b7682c73f513d6ebc6864e1731b3791beee6ad343536b3246f907"} Feb 02 21:39:29 crc kubenswrapper[4789]: I0202 21:39:29.895868 4789 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-82gwc" Feb 02 21:39:29 crc kubenswrapper[4789]: I0202 21:39:29.987763 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-d9x7g"] Feb 02 21:39:29 crc kubenswrapper[4789]: I0202 21:39:29.987991 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-d9x7g" podUID="397ad5b0-86b4-40b7-b8c3-bceb1d8aa470" containerName="dnsmasq-dns" containerID="cri-o://8bd4bcf7161f891ce9bf00e247effacfa5f70a433ffc16ebe79845ab753e6f9e" gracePeriod=10 Feb 02 21:39:30 crc kubenswrapper[4789]: I0202 21:39:30.003460 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-sk5p7" Feb 02 21:39:30 crc kubenswrapper[4789]: I0202 21:39:30.017115 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-sk5p7" event={"ID":"3168dcf5-60b0-4c18-bf26-4b480e7c0f05","Type":"ContainerDied","Data":"deff332c7b0c6a980f65a09e2a8c4a3789365aa5ff107ac00c2535568bfcf312"} Feb 02 21:39:30 crc kubenswrapper[4789]: I0202 21:39:30.017174 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="deff332c7b0c6a980f65a09e2a8c4a3789365aa5ff107ac00c2535568bfcf312" Feb 02 21:39:30 crc kubenswrapper[4789]: I0202 21:39:30.017195 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-sk5p7" Feb 02 21:39:30 crc kubenswrapper[4789]: I0202 21:39:30.102487 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3168dcf5-60b0-4c18-bf26-4b480e7c0f05-config-data\") pod \"3168dcf5-60b0-4c18-bf26-4b480e7c0f05\" (UID: \"3168dcf5-60b0-4c18-bf26-4b480e7c0f05\") " Feb 02 21:39:30 crc kubenswrapper[4789]: I0202 21:39:30.102529 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3168dcf5-60b0-4c18-bf26-4b480e7c0f05-scripts\") pod \"3168dcf5-60b0-4c18-bf26-4b480e7c0f05\" (UID: \"3168dcf5-60b0-4c18-bf26-4b480e7c0f05\") " Feb 02 21:39:30 crc kubenswrapper[4789]: I0202 21:39:30.102588 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3168dcf5-60b0-4c18-bf26-4b480e7c0f05-fernet-keys\") pod \"3168dcf5-60b0-4c18-bf26-4b480e7c0f05\" (UID: \"3168dcf5-60b0-4c18-bf26-4b480e7c0f05\") " Feb 02 21:39:30 crc kubenswrapper[4789]: I0202 21:39:30.102609 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqktz\" (UniqueName: \"kubernetes.io/projected/3168dcf5-60b0-4c18-bf26-4b480e7c0f05-kube-api-access-nqktz\") pod \"3168dcf5-60b0-4c18-bf26-4b480e7c0f05\" (UID: \"3168dcf5-60b0-4c18-bf26-4b480e7c0f05\") " Feb 02 21:39:30 crc kubenswrapper[4789]: I0202 21:39:30.102662 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3168dcf5-60b0-4c18-bf26-4b480e7c0f05-credential-keys\") pod \"3168dcf5-60b0-4c18-bf26-4b480e7c0f05\" (UID: \"3168dcf5-60b0-4c18-bf26-4b480e7c0f05\") " Feb 02 21:39:30 crc kubenswrapper[4789]: I0202 21:39:30.102799 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3168dcf5-60b0-4c18-bf26-4b480e7c0f05-combined-ca-bundle\") pod 
\"3168dcf5-60b0-4c18-bf26-4b480e7c0f05\" (UID: \"3168dcf5-60b0-4c18-bf26-4b480e7c0f05\") " Feb 02 21:39:30 crc kubenswrapper[4789]: I0202 21:39:30.109539 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3168dcf5-60b0-4c18-bf26-4b480e7c0f05-kube-api-access-nqktz" (OuterVolumeSpecName: "kube-api-access-nqktz") pod "3168dcf5-60b0-4c18-bf26-4b480e7c0f05" (UID: "3168dcf5-60b0-4c18-bf26-4b480e7c0f05"). InnerVolumeSpecName "kube-api-access-nqktz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:39:30 crc kubenswrapper[4789]: I0202 21:39:30.110757 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3168dcf5-60b0-4c18-bf26-4b480e7c0f05-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3168dcf5-60b0-4c18-bf26-4b480e7c0f05" (UID: "3168dcf5-60b0-4c18-bf26-4b480e7c0f05"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:39:30 crc kubenswrapper[4789]: I0202 21:39:30.110991 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3168dcf5-60b0-4c18-bf26-4b480e7c0f05-scripts" (OuterVolumeSpecName: "scripts") pod "3168dcf5-60b0-4c18-bf26-4b480e7c0f05" (UID: "3168dcf5-60b0-4c18-bf26-4b480e7c0f05"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:39:30 crc kubenswrapper[4789]: I0202 21:39:30.116686 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3168dcf5-60b0-4c18-bf26-4b480e7c0f05-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "3168dcf5-60b0-4c18-bf26-4b480e7c0f05" (UID: "3168dcf5-60b0-4c18-bf26-4b480e7c0f05"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:39:30 crc kubenswrapper[4789]: I0202 21:39:30.138022 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3168dcf5-60b0-4c18-bf26-4b480e7c0f05-config-data" (OuterVolumeSpecName: "config-data") pod "3168dcf5-60b0-4c18-bf26-4b480e7c0f05" (UID: "3168dcf5-60b0-4c18-bf26-4b480e7c0f05"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:39:30 crc kubenswrapper[4789]: I0202 21:39:30.146953 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3168dcf5-60b0-4c18-bf26-4b480e7c0f05-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3168dcf5-60b0-4c18-bf26-4b480e7c0f05" (UID: "3168dcf5-60b0-4c18-bf26-4b480e7c0f05"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:39:30 crc kubenswrapper[4789]: I0202 21:39:30.203995 4789 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3168dcf5-60b0-4c18-bf26-4b480e7c0f05-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:30 crc kubenswrapper[4789]: I0202 21:39:30.204035 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3168dcf5-60b0-4c18-bf26-4b480e7c0f05-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:30 crc kubenswrapper[4789]: I0202 21:39:30.204045 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3168dcf5-60b0-4c18-bf26-4b480e7c0f05-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:30 crc kubenswrapper[4789]: I0202 21:39:30.204053 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3168dcf5-60b0-4c18-bf26-4b480e7c0f05-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:30 crc kubenswrapper[4789]: I0202 21:39:30.204062 4789 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3168dcf5-60b0-4c18-bf26-4b480e7c0f05-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:30 crc kubenswrapper[4789]: I0202 21:39:30.204071 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqktz\" (UniqueName: \"kubernetes.io/projected/3168dcf5-60b0-4c18-bf26-4b480e7c0f05-kube-api-access-nqktz\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:30 crc kubenswrapper[4789]: I0202 21:39:30.707659 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-d9x7g" podUID="397ad5b0-86b4-40b7-b8c3-bceb1d8aa470" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.115:5353: connect: connection refused" Feb 02 21:39:31 crc kubenswrapper[4789]: I0202 21:39:31.031824 4789 generic.go:334] "Generic (PLEG): container finished" podID="397ad5b0-86b4-40b7-b8c3-bceb1d8aa470" containerID="8bd4bcf7161f891ce9bf00e247effacfa5f70a433ffc16ebe79845ab753e6f9e" exitCode=0 Feb 02 21:39:31 crc kubenswrapper[4789]: I0202 21:39:31.032042 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-d9x7g" event={"ID":"397ad5b0-86b4-40b7-b8c3-bceb1d8aa470","Type":"ContainerDied","Data":"8bd4bcf7161f891ce9bf00e247effacfa5f70a433ffc16ebe79845ab753e6f9e"} Feb 02 21:39:31 crc kubenswrapper[4789]: I0202 21:39:31.116814 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-sk5p7"] Feb 02 21:39:31 crc kubenswrapper[4789]: I0202 21:39:31.127526 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-sk5p7"] Feb 02 21:39:31 crc kubenswrapper[4789]: I0202 21:39:31.222006 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-bvbsm"] Feb 02 21:39:31 crc kubenswrapper[4789]: E0202 21:39:31.222435 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7a402ea-2333-422a-a02d-b2ad98a989a4" containerName="init" Feb 02 21:39:31 crc kubenswrapper[4789]: I0202 21:39:31.222458 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7a402ea-2333-422a-a02d-b2ad98a989a4" containerName="init" Feb 02 21:39:31 crc kubenswrapper[4789]: E0202 21:39:31.222507 4789 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3168dcf5-60b0-4c18-bf26-4b480e7c0f05" containerName="keystone-bootstrap" Feb 02 21:39:31 crc kubenswrapper[4789]: I0202 21:39:31.222516 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="3168dcf5-60b0-4c18-bf26-4b480e7c0f05" containerName="keystone-bootstrap" Feb 02 21:39:31 crc kubenswrapper[4789]: I0202 21:39:31.222731 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="3168dcf5-60b0-4c18-bf26-4b480e7c0f05" containerName="keystone-bootstrap" Feb 02 21:39:31 crc kubenswrapper[4789]: I0202 21:39:31.222762 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7a402ea-2333-422a-a02d-b2ad98a989a4" containerName="init" Feb 02 21:39:31 crc kubenswrapper[4789]: I0202 21:39:31.223404 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-bvbsm" Feb 02 21:39:31 crc kubenswrapper[4789]: I0202 21:39:31.225924 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 21:39:31 crc kubenswrapper[4789]: I0202 21:39:31.226046 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 21:39:31 crc kubenswrapper[4789]: I0202 21:39:31.226179 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 02 21:39:31 crc kubenswrapper[4789]: I0202 21:39:31.227074 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2xd6t" Feb 02 21:39:31 crc kubenswrapper[4789]: I0202 21:39:31.227146 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 21:39:31 crc kubenswrapper[4789]: I0202 21:39:31.229103 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-bvbsm"] Feb 02 21:39:31 crc kubenswrapper[4789]: I0202 21:39:31.323117 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddd49c35-2c4d-4fad-a207-ca8d0be92036-config-data\") pod \"keystone-bootstrap-bvbsm\" (UID: \"ddd49c35-2c4d-4fad-a207-ca8d0be92036\") " pod="openstack/keystone-bootstrap-bvbsm" Feb 02 21:39:31 crc kubenswrapper[4789]: I0202 21:39:31.323178 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ddd49c35-2c4d-4fad-a207-ca8d0be92036-credential-keys\") pod \"keystone-bootstrap-bvbsm\" (UID: \"ddd49c35-2c4d-4fad-a207-ca8d0be92036\") " pod="openstack/keystone-bootstrap-bvbsm" Feb 02 21:39:31 crc kubenswrapper[4789]: I0202 21:39:31.323212 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qpf2\" (UniqueName: \"kubernetes.io/projected/ddd49c35-2c4d-4fad-a207-ca8d0be92036-kube-api-access-2qpf2\") pod \"keystone-bootstrap-bvbsm\" (UID: \"ddd49c35-2c4d-4fad-a207-ca8d0be92036\") " pod="openstack/keystone-bootstrap-bvbsm" Feb 02 21:39:31 crc kubenswrapper[4789]: I0202 21:39:31.323290 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddd49c35-2c4d-4fad-a207-ca8d0be92036-scripts\") pod \"keystone-bootstrap-bvbsm\" (UID: \"ddd49c35-2c4d-4fad-a207-ca8d0be92036\") " pod="openstack/keystone-bootstrap-bvbsm" Feb 02 21:39:31 crc kubenswrapper[4789]: I0202 21:39:31.323310 4789 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddd49c35-2c4d-4fad-a207-ca8d0be92036-combined-ca-bundle\") pod \"keystone-bootstrap-bvbsm\" (UID: \"ddd49c35-2c4d-4fad-a207-ca8d0be92036\") " pod="openstack/keystone-bootstrap-bvbsm" Feb 02 21:39:31 crc kubenswrapper[4789]: I0202 21:39:31.323341 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ddd49c35-2c4d-4fad-a207-ca8d0be92036-fernet-keys\") pod \"keystone-bootstrap-bvbsm\" (UID: \"ddd49c35-2c4d-4fad-a207-ca8d0be92036\") " pod="openstack/keystone-bootstrap-bvbsm" Feb 02 21:39:31 crc kubenswrapper[4789]: I0202 21:39:31.425608 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ddd49c35-2c4d-4fad-a207-ca8d0be92036-fernet-keys\") pod \"keystone-bootstrap-bvbsm\" (UID: \"ddd49c35-2c4d-4fad-a207-ca8d0be92036\") " pod="openstack/keystone-bootstrap-bvbsm" Feb 02 21:39:31 crc kubenswrapper[4789]: I0202 21:39:31.425727 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddd49c35-2c4d-4fad-a207-ca8d0be92036-config-data\") pod \"keystone-bootstrap-bvbsm\" (UID: \"ddd49c35-2c4d-4fad-a207-ca8d0be92036\") " pod="openstack/keystone-bootstrap-bvbsm" Feb 02 21:39:31 crc kubenswrapper[4789]: I0202 21:39:31.425773 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ddd49c35-2c4d-4fad-a207-ca8d0be92036-credential-keys\") pod \"keystone-bootstrap-bvbsm\" (UID: \"ddd49c35-2c4d-4fad-a207-ca8d0be92036\") " pod="openstack/keystone-bootstrap-bvbsm" Feb 02 21:39:31 crc kubenswrapper[4789]: I0202 21:39:31.425811 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qpf2\" (UniqueName: \"kubernetes.io/projected/ddd49c35-2c4d-4fad-a207-ca8d0be92036-kube-api-access-2qpf2\") pod \"keystone-bootstrap-bvbsm\" (UID: \"ddd49c35-2c4d-4fad-a207-ca8d0be92036\") " pod="openstack/keystone-bootstrap-bvbsm" Feb 02 21:39:31 crc kubenswrapper[4789]: I0202 21:39:31.425882 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddd49c35-2c4d-4fad-a207-ca8d0be92036-scripts\") pod \"keystone-bootstrap-bvbsm\" (UID: \"ddd49c35-2c4d-4fad-a207-ca8d0be92036\") " pod="openstack/keystone-bootstrap-bvbsm" Feb 02 21:39:31 crc kubenswrapper[4789]: I0202 21:39:31.425905 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddd49c35-2c4d-4fad-a207-ca8d0be92036-combined-ca-bundle\") pod \"keystone-bootstrap-bvbsm\" (UID: \"ddd49c35-2c4d-4fad-a207-ca8d0be92036\") " pod="openstack/keystone-bootstrap-bvbsm" Feb 02 21:39:31 crc kubenswrapper[4789]: I0202 21:39:31.430157 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ddd49c35-2c4d-4fad-a207-ca8d0be92036-credential-keys\") pod \"keystone-bootstrap-bvbsm\" (UID: \"ddd49c35-2c4d-4fad-a207-ca8d0be92036\") " pod="openstack/keystone-bootstrap-bvbsm" Feb 02 21:39:31 crc kubenswrapper[4789]: I0202 21:39:31.430918 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ddd49c35-2c4d-4fad-a207-ca8d0be92036-scripts\") pod \"keystone-bootstrap-bvbsm\" (UID: \"ddd49c35-2c4d-4fad-a207-ca8d0be92036\") " pod="openstack/keystone-bootstrap-bvbsm" Feb 02 21:39:31 crc kubenswrapper[4789]: I0202 21:39:31.431039 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ddd49c35-2c4d-4fad-a207-ca8d0be92036-fernet-keys\") pod \"keystone-bootstrap-bvbsm\" (UID: \"ddd49c35-2c4d-4fad-a207-ca8d0be92036\") " pod="openstack/keystone-bootstrap-bvbsm" Feb 02 21:39:31 crc kubenswrapper[4789]: I0202 21:39:31.431132 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddd49c35-2c4d-4fad-a207-ca8d0be92036-config-data\") pod \"keystone-bootstrap-bvbsm\" (UID: \"ddd49c35-2c4d-4fad-a207-ca8d0be92036\") " pod="openstack/keystone-bootstrap-bvbsm" Feb 02 21:39:31 crc kubenswrapper[4789]: I0202 21:39:31.433407 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddd49c35-2c4d-4fad-a207-ca8d0be92036-combined-ca-bundle\") pod \"keystone-bootstrap-bvbsm\" (UID: \"ddd49c35-2c4d-4fad-a207-ca8d0be92036\") " pod="openstack/keystone-bootstrap-bvbsm" Feb 02 21:39:31 crc kubenswrapper[4789]: I0202 21:39:31.443148 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qpf2\" (UniqueName: \"kubernetes.io/projected/ddd49c35-2c4d-4fad-a207-ca8d0be92036-kube-api-access-2qpf2\") pod \"keystone-bootstrap-bvbsm\" (UID: \"ddd49c35-2c4d-4fad-a207-ca8d0be92036\") " pod="openstack/keystone-bootstrap-bvbsm" Feb 02 21:39:31 crc kubenswrapper[4789]: I0202 21:39:31.579784 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-bvbsm" Feb 02 21:39:32 crc kubenswrapper[4789]: I0202 21:39:32.431067 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3168dcf5-60b0-4c18-bf26-4b480e7c0f05" path="/var/lib/kubelet/pods/3168dcf5-60b0-4c18-bf26-4b480e7c0f05/volumes" Feb 02 21:39:35 crc kubenswrapper[4789]: I0202 21:39:35.707669 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-d9x7g" podUID="397ad5b0-86b4-40b7-b8c3-bceb1d8aa470" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.115:5353: connect: connection refused" Feb 02 21:39:39 crc kubenswrapper[4789]: E0202 21:39:39.992011 4789 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 02 21:39:39 crc kubenswrapper[4789]: E0202 21:39:39.992615 4789 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7ss2g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-mc8z9_openstack(ead77939-6823-47d8-83e8-7dc74b841d49): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 21:39:39 crc kubenswrapper[4789]: E0202 21:39:39.994100 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-mc8z9" podUID="ead77939-6823-47d8-83e8-7dc74b841d49" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.100758 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.187851 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.188223 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"78339639-9f0b-4076-a7a3-160f9ae94bdd","Type":"ContainerDied","Data":"c34453ca10f37397405465ff3016d82a54f1756c5a9b328cf7262f5fdd0402b0"} Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.188281 4789 scope.go:117] "RemoveContainer" containerID="5ef3e6d95e0831ce635cb0a6eaae6beb69438b63431338438baea5c2aa70bd8b" Feb 02 21:39:40 crc kubenswrapper[4789]: E0202 21:39:40.189803 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-mc8z9" podUID="ead77939-6823-47d8-83e8-7dc74b841d49" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.196928 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78339639-9f0b-4076-a7a3-160f9ae94bdd-scripts\") pod \"78339639-9f0b-4076-a7a3-160f9ae94bdd\" (UID: \"78339639-9f0b-4076-a7a3-160f9ae94bdd\") " Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.196990 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78339639-9f0b-4076-a7a3-160f9ae94bdd-combined-ca-bundle\") pod \"78339639-9f0b-4076-a7a3-160f9ae94bdd\" (UID: \"78339639-9f0b-4076-a7a3-160f9ae94bdd\") " Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.197048 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"78339639-9f0b-4076-a7a3-160f9ae94bdd\" (UID: \"78339639-9f0b-4076-a7a3-160f9ae94bdd\") " Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.197067 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/78339639-9f0b-4076-a7a3-160f9ae94bdd-httpd-run\") pod \"78339639-9f0b-4076-a7a3-160f9ae94bdd\" (UID: \"78339639-9f0b-4076-a7a3-160f9ae94bdd\") " Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.197086 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78339639-9f0b-4076-a7a3-160f9ae94bdd-config-data\") pod \"78339639-9f0b-4076-a7a3-160f9ae94bdd\" (UID: \"78339639-9f0b-4076-a7a3-160f9ae94bdd\") " Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.197117 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78339639-9f0b-4076-a7a3-160f9ae94bdd-logs\") pod \"78339639-9f0b-4076-a7a3-160f9ae94bdd\" (UID: \"78339639-9f0b-4076-a7a3-160f9ae94bdd\") " Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.197136 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78339639-9f0b-4076-a7a3-160f9ae94bdd-public-tls-certs\") pod \"78339639-9f0b-4076-a7a3-160f9ae94bdd\" (UID: \"78339639-9f0b-4076-a7a3-160f9ae94bdd\") " Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.197152 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfczd\" (UniqueName: 
\"kubernetes.io/projected/78339639-9f0b-4076-a7a3-160f9ae94bdd-kube-api-access-mfczd\") pod \"78339639-9f0b-4076-a7a3-160f9ae94bdd\" (UID: \"78339639-9f0b-4076-a7a3-160f9ae94bdd\") " Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.203858 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78339639-9f0b-4076-a7a3-160f9ae94bdd-scripts" (OuterVolumeSpecName: "scripts") pod "78339639-9f0b-4076-a7a3-160f9ae94bdd" (UID: "78339639-9f0b-4076-a7a3-160f9ae94bdd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.204085 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78339639-9f0b-4076-a7a3-160f9ae94bdd-logs" (OuterVolumeSpecName: "logs") pod "78339639-9f0b-4076-a7a3-160f9ae94bdd" (UID: "78339639-9f0b-4076-a7a3-160f9ae94bdd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.204253 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78339639-9f0b-4076-a7a3-160f9ae94bdd-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "78339639-9f0b-4076-a7a3-160f9ae94bdd" (UID: "78339639-9f0b-4076-a7a3-160f9ae94bdd"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.205945 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78339639-9f0b-4076-a7a3-160f9ae94bdd-kube-api-access-mfczd" (OuterVolumeSpecName: "kube-api-access-mfczd") pod "78339639-9f0b-4076-a7a3-160f9ae94bdd" (UID: "78339639-9f0b-4076-a7a3-160f9ae94bdd"). InnerVolumeSpecName "kube-api-access-mfczd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.211571 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "78339639-9f0b-4076-a7a3-160f9ae94bdd" (UID: "78339639-9f0b-4076-a7a3-160f9ae94bdd"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.256899 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78339639-9f0b-4076-a7a3-160f9ae94bdd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78339639-9f0b-4076-a7a3-160f9ae94bdd" (UID: "78339639-9f0b-4076-a7a3-160f9ae94bdd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.268518 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78339639-9f0b-4076-a7a3-160f9ae94bdd-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "78339639-9f0b-4076-a7a3-160f9ae94bdd" (UID: "78339639-9f0b-4076-a7a3-160f9ae94bdd"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.279457 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78339639-9f0b-4076-a7a3-160f9ae94bdd-config-data" (OuterVolumeSpecName: "config-data") pod "78339639-9f0b-4076-a7a3-160f9ae94bdd" (UID: "78339639-9f0b-4076-a7a3-160f9ae94bdd"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.299086 4789 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.299119 4789 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/78339639-9f0b-4076-a7a3-160f9ae94bdd-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.299128 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78339639-9f0b-4076-a7a3-160f9ae94bdd-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.299146 4789 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78339639-9f0b-4076-a7a3-160f9ae94bdd-logs\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.299155 4789 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78339639-9f0b-4076-a7a3-160f9ae94bdd-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.299166 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfczd\" (UniqueName: \"kubernetes.io/projected/78339639-9f0b-4076-a7a3-160f9ae94bdd-kube-api-access-mfczd\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.299174 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78339639-9f0b-4076-a7a3-160f9ae94bdd-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.299183 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78339639-9f0b-4076-a7a3-160f9ae94bdd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.334247 4789 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.401078 4789 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.514357 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.555327 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.564752 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 21:39:40 crc kubenswrapper[4789]: E0202 21:39:40.565247 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78339639-9f0b-4076-a7a3-160f9ae94bdd" containerName="glance-httpd" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.565274 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="78339639-9f0b-4076-a7a3-160f9ae94bdd" containerName="glance-httpd" Feb 02 21:39:40 
crc kubenswrapper[4789]: E0202 21:39:40.565299 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78339639-9f0b-4076-a7a3-160f9ae94bdd" containerName="glance-log" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.565309 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="78339639-9f0b-4076-a7a3-160f9ae94bdd" containerName="glance-log" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.565564 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="78339639-9f0b-4076-a7a3-160f9ae94bdd" containerName="glance-log" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.565597 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="78339639-9f0b-4076-a7a3-160f9ae94bdd" containerName="glance-httpd" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.566670 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.569786 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.573809 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.580532 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.709565 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/322d725c-ac03-4759-a08c-e534a70d1ec3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"322d725c-ac03-4759-a08c-e534a70d1ec3\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.709628 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/322d725c-ac03-4759-a08c-e534a70d1ec3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"322d725c-ac03-4759-a08c-e534a70d1ec3\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.709828 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/322d725c-ac03-4759-a08c-e534a70d1ec3-logs\") pod \"glance-default-external-api-0\" (UID: \"322d725c-ac03-4759-a08c-e534a70d1ec3\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.709995 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/322d725c-ac03-4759-a08c-e534a70d1ec3-config-data\") pod \"glance-default-external-api-0\" (UID: \"322d725c-ac03-4759-a08c-e534a70d1ec3\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.710167 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/322d725c-ac03-4759-a08c-e534a70d1ec3-scripts\") pod \"glance-default-external-api-0\" (UID: \"322d725c-ac03-4759-a08c-e534a70d1ec3\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.710218 4789 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zggzl\" (UniqueName: \"kubernetes.io/projected/322d725c-ac03-4759-a08c-e534a70d1ec3-kube-api-access-zggzl\") pod \"glance-default-external-api-0\" (UID: \"322d725c-ac03-4759-a08c-e534a70d1ec3\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.710246 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/322d725c-ac03-4759-a08c-e534a70d1ec3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"322d725c-ac03-4759-a08c-e534a70d1ec3\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.710319 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"322d725c-ac03-4759-a08c-e534a70d1ec3\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.812140 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/322d725c-ac03-4759-a08c-e534a70d1ec3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"322d725c-ac03-4759-a08c-e534a70d1ec3\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.812199 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/322d725c-ac03-4759-a08c-e534a70d1ec3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"322d725c-ac03-4759-a08c-e534a70d1ec3\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.812249 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/322d725c-ac03-4759-a08c-e534a70d1ec3-logs\") pod \"glance-default-external-api-0\" (UID: \"322d725c-ac03-4759-a08c-e534a70d1ec3\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.812283 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/322d725c-ac03-4759-a08c-e534a70d1ec3-config-data\") pod \"glance-default-external-api-0\" (UID: \"322d725c-ac03-4759-a08c-e534a70d1ec3\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.812340 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/322d725c-ac03-4759-a08c-e534a70d1ec3-scripts\") pod \"glance-default-external-api-0\" (UID: \"322d725c-ac03-4759-a08c-e534a70d1ec3\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.812372 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zggzl\" (UniqueName: \"kubernetes.io/projected/322d725c-ac03-4759-a08c-e534a70d1ec3-kube-api-access-zggzl\") pod \"glance-default-external-api-0\" (UID: \"322d725c-ac03-4759-a08c-e534a70d1ec3\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.812417 4789 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/322d725c-ac03-4759-a08c-e534a70d1ec3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"322d725c-ac03-4759-a08c-e534a70d1ec3\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.812457 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"322d725c-ac03-4759-a08c-e534a70d1ec3\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.812754 4789 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"322d725c-ac03-4759-a08c-e534a70d1ec3\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.812754 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/322d725c-ac03-4759-a08c-e534a70d1ec3-logs\") pod \"glance-default-external-api-0\" (UID: \"322d725c-ac03-4759-a08c-e534a70d1ec3\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.812936 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/322d725c-ac03-4759-a08c-e534a70d1ec3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"322d725c-ac03-4759-a08c-e534a70d1ec3\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.817973 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/322d725c-ac03-4759-a08c-e534a70d1ec3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"322d725c-ac03-4759-a08c-e534a70d1ec3\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.818266 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/322d725c-ac03-4759-a08c-e534a70d1ec3-config-data\") pod \"glance-default-external-api-0\" (UID: \"322d725c-ac03-4759-a08c-e534a70d1ec3\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.823331 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/322d725c-ac03-4759-a08c-e534a70d1ec3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"322d725c-ac03-4759-a08c-e534a70d1ec3\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.824370 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/322d725c-ac03-4759-a08c-e534a70d1ec3-scripts\") pod \"glance-default-external-api-0\" (UID: \"322d725c-ac03-4759-a08c-e534a70d1ec3\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.830990 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zggzl\" (UniqueName: 
\"kubernetes.io/projected/322d725c-ac03-4759-a08c-e534a70d1ec3-kube-api-access-zggzl\") pod \"glance-default-external-api-0\" (UID: \"322d725c-ac03-4759-a08c-e534a70d1ec3\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.840349 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"322d725c-ac03-4759-a08c-e534a70d1ec3\") " pod="openstack/glance-default-external-api-0" Feb 02 21:39:40 crc kubenswrapper[4789]: I0202 21:39:40.884274 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.484374 4789 scope.go:117] "RemoveContainer" containerID="a13bfd328e4a360e1b6a8012c4e7d43e622fdf2fdf250c819295fba4183c86ff" Feb 02 21:39:41 crc kubenswrapper[4789]: E0202 21:39:41.495178 4789 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 02 21:39:41 crc kubenswrapper[4789]: E0202 21:39:41.495404 4789 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fwp6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,Env
From:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-5kfhw_openstack(4aaa8d11-6409-415e-836b-b7941b66f6e4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 21:39:41 crc kubenswrapper[4789]: E0202 21:39:41.496759 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-5kfhw" podUID="4aaa8d11-6409-415e-836b-b7941b66f6e4" Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.641233 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-d9x7g" Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.641946 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.729858 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4eb9ddc9-027a-434c-bd55-383f7fa4edae-logs\") pod \"4eb9ddc9-027a-434c-bd55-383f7fa4edae\" (UID: \"4eb9ddc9-027a-434c-bd55-383f7fa4edae\") " Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.729910 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4eb9ddc9-027a-434c-bd55-383f7fa4edae-httpd-run\") pod \"4eb9ddc9-027a-434c-bd55-383f7fa4edae\" (UID: \"4eb9ddc9-027a-434c-bd55-383f7fa4edae\") " Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.729933 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"4eb9ddc9-027a-434c-bd55-383f7fa4edae\" (UID: \"4eb9ddc9-027a-434c-bd55-383f7fa4edae\") " Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.729985 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjhcj\" (UniqueName: \"kubernetes.io/projected/397ad5b0-86b4-40b7-b8c3-bceb1d8aa470-kube-api-access-cjhcj\") pod \"397ad5b0-86b4-40b7-b8c3-bceb1d8aa470\" (UID: \"397ad5b0-86b4-40b7-b8c3-bceb1d8aa470\") " Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.730032 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrghl\" (UniqueName: \"kubernetes.io/projected/4eb9ddc9-027a-434c-bd55-383f7fa4edae-kube-api-access-mrghl\") pod \"4eb9ddc9-027a-434c-bd55-383f7fa4edae\" (UID: \"4eb9ddc9-027a-434c-bd55-383f7fa4edae\") " Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.730067 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/397ad5b0-86b4-40b7-b8c3-bceb1d8aa470-dns-svc\") pod \"397ad5b0-86b4-40b7-b8c3-bceb1d8aa470\" (UID: \"397ad5b0-86b4-40b7-b8c3-bceb1d8aa470\") " Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.730098 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eb9ddc9-027a-434c-bd55-383f7fa4edae-combined-ca-bundle\") pod \"4eb9ddc9-027a-434c-bd55-383f7fa4edae\" (UID: \"4eb9ddc9-027a-434c-bd55-383f7fa4edae\") " Feb 02 21:39:41 crc 
kubenswrapper[4789]: I0202 21:39:41.730188 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/397ad5b0-86b4-40b7-b8c3-bceb1d8aa470-config\") pod \"397ad5b0-86b4-40b7-b8c3-bceb1d8aa470\" (UID: \"397ad5b0-86b4-40b7-b8c3-bceb1d8aa470\") " Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.730252 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4eb9ddc9-027a-434c-bd55-383f7fa4edae-internal-tls-certs\") pod \"4eb9ddc9-027a-434c-bd55-383f7fa4edae\" (UID: \"4eb9ddc9-027a-434c-bd55-383f7fa4edae\") " Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.730306 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/397ad5b0-86b4-40b7-b8c3-bceb1d8aa470-ovsdbserver-sb\") pod \"397ad5b0-86b4-40b7-b8c3-bceb1d8aa470\" (UID: \"397ad5b0-86b4-40b7-b8c3-bceb1d8aa470\") " Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.730428 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4eb9ddc9-027a-434c-bd55-383f7fa4edae-config-data\") pod \"4eb9ddc9-027a-434c-bd55-383f7fa4edae\" (UID: \"4eb9ddc9-027a-434c-bd55-383f7fa4edae\") " Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.730497 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4eb9ddc9-027a-434c-bd55-383f7fa4edae-scripts\") pod \"4eb9ddc9-027a-434c-bd55-383f7fa4edae\" (UID: \"4eb9ddc9-027a-434c-bd55-383f7fa4edae\") " Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.730552 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/397ad5b0-86b4-40b7-b8c3-bceb1d8aa470-ovsdbserver-nb\") pod \"397ad5b0-86b4-40b7-b8c3-bceb1d8aa470\" (UID: \"397ad5b0-86b4-40b7-b8c3-bceb1d8aa470\") " Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.731922 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4eb9ddc9-027a-434c-bd55-383f7fa4edae-logs" (OuterVolumeSpecName: "logs") pod "4eb9ddc9-027a-434c-bd55-383f7fa4edae" (UID: "4eb9ddc9-027a-434c-bd55-383f7fa4edae"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.740686 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "4eb9ddc9-027a-434c-bd55-383f7fa4edae" (UID: "4eb9ddc9-027a-434c-bd55-383f7fa4edae"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.741107 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4eb9ddc9-027a-434c-bd55-383f7fa4edae-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4eb9ddc9-027a-434c-bd55-383f7fa4edae" (UID: "4eb9ddc9-027a-434c-bd55-383f7fa4edae"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.741958 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4eb9ddc9-027a-434c-bd55-383f7fa4edae-kube-api-access-mrghl" (OuterVolumeSpecName: "kube-api-access-mrghl") pod "4eb9ddc9-027a-434c-bd55-383f7fa4edae" (UID: "4eb9ddc9-027a-434c-bd55-383f7fa4edae"). InnerVolumeSpecName "kube-api-access-mrghl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.746705 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/397ad5b0-86b4-40b7-b8c3-bceb1d8aa470-kube-api-access-cjhcj" (OuterVolumeSpecName: "kube-api-access-cjhcj") pod "397ad5b0-86b4-40b7-b8c3-bceb1d8aa470" (UID: "397ad5b0-86b4-40b7-b8c3-bceb1d8aa470"). InnerVolumeSpecName "kube-api-access-cjhcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.777681 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eb9ddc9-027a-434c-bd55-383f7fa4edae-scripts" (OuterVolumeSpecName: "scripts") pod "4eb9ddc9-027a-434c-bd55-383f7fa4edae" (UID: "4eb9ddc9-027a-434c-bd55-383f7fa4edae"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.778698 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eb9ddc9-027a-434c-bd55-383f7fa4edae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4eb9ddc9-027a-434c-bd55-383f7fa4edae" (UID: "4eb9ddc9-027a-434c-bd55-383f7fa4edae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.796901 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/397ad5b0-86b4-40b7-b8c3-bceb1d8aa470-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "397ad5b0-86b4-40b7-b8c3-bceb1d8aa470" (UID: "397ad5b0-86b4-40b7-b8c3-bceb1d8aa470"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.832794 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4eb9ddc9-027a-434c-bd55-383f7fa4edae-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.832825 4789 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4eb9ddc9-027a-434c-bd55-383f7fa4edae-logs\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.832833 4789 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4eb9ddc9-027a-434c-bd55-383f7fa4edae-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.832856 4789 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.832866 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjhcj\" (UniqueName: \"kubernetes.io/projected/397ad5b0-86b4-40b7-b8c3-bceb1d8aa470-kube-api-access-cjhcj\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.832880 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrghl\" (UniqueName: \"kubernetes.io/projected/4eb9ddc9-027a-434c-bd55-383f7fa4edae-kube-api-access-mrghl\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.832891 4789 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/397ad5b0-86b4-40b7-b8c3-bceb1d8aa470-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.832902 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eb9ddc9-027a-434c-bd55-383f7fa4edae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.833073 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eb9ddc9-027a-434c-bd55-383f7fa4edae-config-data" (OuterVolumeSpecName: "config-data") pod "4eb9ddc9-027a-434c-bd55-383f7fa4edae" (UID: "4eb9ddc9-027a-434c-bd55-383f7fa4edae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.838877 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eb9ddc9-027a-434c-bd55-383f7fa4edae-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4eb9ddc9-027a-434c-bd55-383f7fa4edae" (UID: "4eb9ddc9-027a-434c-bd55-383f7fa4edae"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.846228 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/397ad5b0-86b4-40b7-b8c3-bceb1d8aa470-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "397ad5b0-86b4-40b7-b8c3-bceb1d8aa470" (UID: "397ad5b0-86b4-40b7-b8c3-bceb1d8aa470"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.847651 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/397ad5b0-86b4-40b7-b8c3-bceb1d8aa470-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "397ad5b0-86b4-40b7-b8c3-bceb1d8aa470" (UID: "397ad5b0-86b4-40b7-b8c3-bceb1d8aa470"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.850943 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/397ad5b0-86b4-40b7-b8c3-bceb1d8aa470-config" (OuterVolumeSpecName: "config") pod "397ad5b0-86b4-40b7-b8c3-bceb1d8aa470" (UID: "397ad5b0-86b4-40b7-b8c3-bceb1d8aa470"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.864447 4789 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.934704 4789 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.934753 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/397ad5b0-86b4-40b7-b8c3-bceb1d8aa470-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.934768 4789 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4eb9ddc9-027a-434c-bd55-383f7fa4edae-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.934784 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/397ad5b0-86b4-40b7-b8c3-bceb1d8aa470-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.934801 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4eb9ddc9-027a-434c-bd55-383f7fa4edae-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.934816 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/397ad5b0-86b4-40b7-b8c3-bceb1d8aa470-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.967420 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-bvbsm"] Feb 02 21:39:41 crc kubenswrapper[4789]: W0202 21:39:41.969543 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podddd49c35_2c4d_4fad_a207_ca8d0be92036.slice/crio-9436e8c6133005abb800e0a952854974dc13ee44ed6f65b825e70deed0f4fc5e WatchSource:0}: Error finding container 9436e8c6133005abb800e0a952854974dc13ee44ed6f65b825e70deed0f4fc5e: Status 404 returned error can't find the container with id 9436e8c6133005abb800e0a952854974dc13ee44ed6f65b825e70deed0f4fc5e Feb 02 21:39:41 crc kubenswrapper[4789]: I0202 21:39:41.980753 4789 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"osp-secret" Feb 02 21:39:42 crc kubenswrapper[4789]: W0202 21:39:42.131998 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod322d725c_ac03_4759_a08c_e534a70d1ec3.slice/crio-0bdac393df3c9587fe88dfbc075583684c398f614c50dc66848eee071ccbe9ec WatchSource:0}: Error finding container 0bdac393df3c9587fe88dfbc075583684c398f614c50dc66848eee071ccbe9ec: Status 404 returned error can't find the container with id 0bdac393df3c9587fe88dfbc075583684c398f614c50dc66848eee071ccbe9ec Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.138213 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.204004 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"322d725c-ac03-4759-a08c-e534a70d1ec3","Type":"ContainerStarted","Data":"0bdac393df3c9587fe88dfbc075583684c398f614c50dc66848eee071ccbe9ec"} Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.205436 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c202a904-ccae-4f90-a284-d7e2a3b5e0f7","Type":"ContainerStarted","Data":"fdcc1f90b212df2e933f135304ee045547580c325f7da0217384a6ca1384b603"} Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.207385 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-d9x7g" event={"ID":"397ad5b0-86b4-40b7-b8c3-bceb1d8aa470","Type":"ContainerDied","Data":"22d107c3308406522b2875c46788b9371935f5cc3db46838cf8df44077d054ad"} Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.207424 4789 scope.go:117] "RemoveContainer" containerID="8bd4bcf7161f891ce9bf00e247effacfa5f70a433ffc16ebe79845ab753e6f9e" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.207441 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-d9x7g" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.214873 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.215067 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4eb9ddc9-027a-434c-bd55-383f7fa4edae","Type":"ContainerDied","Data":"e2dac4b3a553e9f947c27172c6a59d84b2752d8a2727e5d090f8cfc17cfa6bf6"} Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.224091 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ccdfw" event={"ID":"22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3","Type":"ContainerStarted","Data":"411e1d9c2338519d38825314451a3aee0f0e1c6639158094ec4eb5bdee347fa2"} Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.236408 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bvbsm" event={"ID":"ddd49c35-2c4d-4fad-a207-ca8d0be92036","Type":"ContainerStarted","Data":"9436e8c6133005abb800e0a952854974dc13ee44ed6f65b825e70deed0f4fc5e"} Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.240411 4789 generic.go:334] "Generic (PLEG): container finished" podID="42e496d3-8d68-48a0-a0ca-058126b200a1" containerID="9875869b174e349daf124d55e4a4702547bf8f56e47f23da592c46f1d55a3cc2" exitCode=0 Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.240499 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-rvwgc" event={"ID":"42e496d3-8d68-48a0-a0ca-058126b200a1","Type":"ContainerDied","Data":"9875869b174e349daf124d55e4a4702547bf8f56e47f23da592c46f1d55a3cc2"} Feb 02 21:39:42 crc kubenswrapper[4789]: E0202 21:39:42.241617 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-5kfhw" podUID="4aaa8d11-6409-415e-836b-b7941b66f6e4" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.245197 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-ccdfw" podStartSLOduration=2.442922884 podStartE2EDuration="23.245179985s" podCreationTimestamp="2026-02-02 21:39:19 +0000 UTC" firstStartedPulling="2026-02-02 21:39:20.662267308 +0000 UTC m=+1180.957292327" lastFinishedPulling="2026-02-02 21:39:41.464524389 +0000 UTC m=+1201.759549428" observedRunningTime="2026-02-02 21:39:42.240795751 +0000 UTC m=+1202.535820770" watchObservedRunningTime="2026-02-02 21:39:42.245179985 +0000 UTC m=+1202.540205004" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.267346 4789 scope.go:117] "RemoveContainer" containerID="498c1f124125391c7f14850b4a6958307fb18d504f5822b34e731e12b4708eac" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.299970 4789 scope.go:117] "RemoveContainer" containerID="885d9fb986bd72b653971db46b5836f8636d9d67168d8a3afe7fdb801998e9d6" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.310967 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-d9x7g"] Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.328828 4789 scope.go:117] "RemoveContainer" containerID="c9378ac28c6b7682c73f513d6ebc6864e1731b3791beee6ad343536b3246f907" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.335220 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-d9x7g"] Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.350651 4789 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.357690 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.363477 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 21:39:42 crc kubenswrapper[4789]: E0202 21:39:42.363901 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eb9ddc9-027a-434c-bd55-383f7fa4edae" containerName="glance-log" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.363923 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eb9ddc9-027a-434c-bd55-383f7fa4edae" containerName="glance-log" Feb 02 21:39:42 crc kubenswrapper[4789]: E0202 21:39:42.363942 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="397ad5b0-86b4-40b7-b8c3-bceb1d8aa470" containerName="dnsmasq-dns" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.363951 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="397ad5b0-86b4-40b7-b8c3-bceb1d8aa470" containerName="dnsmasq-dns" Feb 02 21:39:42 crc kubenswrapper[4789]: E0202 21:39:42.363962 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="397ad5b0-86b4-40b7-b8c3-bceb1d8aa470" containerName="init" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.363971 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="397ad5b0-86b4-40b7-b8c3-bceb1d8aa470" containerName="init" Feb 02 21:39:42 crc kubenswrapper[4789]: E0202 21:39:42.363993 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eb9ddc9-027a-434c-bd55-383f7fa4edae" containerName="glance-httpd" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.363999 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eb9ddc9-027a-434c-bd55-383f7fa4edae" containerName="glance-httpd" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.364161 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eb9ddc9-027a-434c-bd55-383f7fa4edae" containerName="glance-log" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.364178 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eb9ddc9-027a-434c-bd55-383f7fa4edae" containerName="glance-httpd" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.364195 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="397ad5b0-86b4-40b7-b8c3-bceb1d8aa470" containerName="dnsmasq-dns" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.365101 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.369170 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.369230 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.369448 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.438816 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="397ad5b0-86b4-40b7-b8c3-bceb1d8aa470" path="/var/lib/kubelet/pods/397ad5b0-86b4-40b7-b8c3-bceb1d8aa470/volumes" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.439553 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4eb9ddc9-027a-434c-bd55-383f7fa4edae" path="/var/lib/kubelet/pods/4eb9ddc9-027a-434c-bd55-383f7fa4edae/volumes" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.440151 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78339639-9f0b-4076-a7a3-160f9ae94bdd" path="/var/lib/kubelet/pods/78339639-9f0b-4076-a7a3-160f9ae94bdd/volumes" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.443486 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-logs\") pod \"glance-default-internal-api-0\" (UID: \"dafa0ec8-f504-4174-b323-2a2d9f09ffb8\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.443520 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"dafa0ec8-f504-4174-b323-2a2d9f09ffb8\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.443592 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"dafa0ec8-f504-4174-b323-2a2d9f09ffb8\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.443674 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf92j\" (UniqueName: \"kubernetes.io/projected/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-kube-api-access-bf92j\") pod \"glance-default-internal-api-0\" (UID: \"dafa0ec8-f504-4174-b323-2a2d9f09ffb8\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.443720 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"dafa0ec8-f504-4174-b323-2a2d9f09ffb8\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.443739 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"dafa0ec8-f504-4174-b323-2a2d9f09ffb8\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.443758 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"dafa0ec8-f504-4174-b323-2a2d9f09ffb8\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.443780 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"dafa0ec8-f504-4174-b323-2a2d9f09ffb8\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.557535 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-logs\") pod \"glance-default-internal-api-0\" (UID: \"dafa0ec8-f504-4174-b323-2a2d9f09ffb8\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.557826 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"dafa0ec8-f504-4174-b323-2a2d9f09ffb8\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.557914 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"dafa0ec8-f504-4174-b323-2a2d9f09ffb8\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.557954 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-logs\") pod \"glance-default-internal-api-0\" (UID: \"dafa0ec8-f504-4174-b323-2a2d9f09ffb8\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.557998 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf92j\" (UniqueName: \"kubernetes.io/projected/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-kube-api-access-bf92j\") pod \"glance-default-internal-api-0\" (UID: \"dafa0ec8-f504-4174-b323-2a2d9f09ffb8\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.558049 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"dafa0ec8-f504-4174-b323-2a2d9f09ffb8\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.558065 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-internal-tls-certs\") 
pod \"glance-default-internal-api-0\" (UID: \"dafa0ec8-f504-4174-b323-2a2d9f09ffb8\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.558084 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"dafa0ec8-f504-4174-b323-2a2d9f09ffb8\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.558110 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"dafa0ec8-f504-4174-b323-2a2d9f09ffb8\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.558676 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"dafa0ec8-f504-4174-b323-2a2d9f09ffb8\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.559262 4789 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"dafa0ec8-f504-4174-b323-2a2d9f09ffb8\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.572524 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"dafa0ec8-f504-4174-b323-2a2d9f09ffb8\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.576031 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"dafa0ec8-f504-4174-b323-2a2d9f09ffb8\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.576684 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"dafa0ec8-f504-4174-b323-2a2d9f09ffb8\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.577813 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"dafa0ec8-f504-4174-b323-2a2d9f09ffb8\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.582160 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf92j\" (UniqueName: \"kubernetes.io/projected/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-kube-api-access-bf92j\") pod \"glance-default-internal-api-0\" (UID: \"dafa0ec8-f504-4174-b323-2a2d9f09ffb8\") 
" pod="openstack/glance-default-internal-api-0" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.612754 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"dafa0ec8-f504-4174-b323-2a2d9f09ffb8\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:39:42 crc kubenswrapper[4789]: I0202 21:39:42.697246 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 21:39:43 crc kubenswrapper[4789]: I0202 21:39:43.253464 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bvbsm" event={"ID":"ddd49c35-2c4d-4fad-a207-ca8d0be92036","Type":"ContainerStarted","Data":"d79719fe99392b86ef4b5ecada444e79deb1ac6615f195af2e8788f2390a175f"} Feb 02 21:39:43 crc kubenswrapper[4789]: I0202 21:39:43.259624 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"322d725c-ac03-4759-a08c-e534a70d1ec3","Type":"ContainerStarted","Data":"bc11dcab315b199d84e1245aed796e1917ddb88b3f9915b4433d912d44938385"} Feb 02 21:39:43 crc kubenswrapper[4789]: I0202 21:39:43.288102 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-bvbsm" podStartSLOduration=12.287809316 podStartE2EDuration="12.287809316s" podCreationTimestamp="2026-02-02 21:39:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:39:43.269498259 +0000 UTC m=+1203.564523278" watchObservedRunningTime="2026-02-02 21:39:43.287809316 +0000 UTC m=+1203.582834325" Feb 02 21:39:43 crc kubenswrapper[4789]: I0202 21:39:43.418289 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 21:39:43 crc kubenswrapper[4789]: I0202 21:39:43.747634 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-rvwgc" Feb 02 21:39:43 crc kubenswrapper[4789]: I0202 21:39:43.785861 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flplq\" (UniqueName: \"kubernetes.io/projected/42e496d3-8d68-48a0-a0ca-058126b200a1-kube-api-access-flplq\") pod \"42e496d3-8d68-48a0-a0ca-058126b200a1\" (UID: \"42e496d3-8d68-48a0-a0ca-058126b200a1\") " Feb 02 21:39:43 crc kubenswrapper[4789]: I0202 21:39:43.786135 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/42e496d3-8d68-48a0-a0ca-058126b200a1-config\") pod \"42e496d3-8d68-48a0-a0ca-058126b200a1\" (UID: \"42e496d3-8d68-48a0-a0ca-058126b200a1\") " Feb 02 21:39:43 crc kubenswrapper[4789]: I0202 21:39:43.786309 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42e496d3-8d68-48a0-a0ca-058126b200a1-combined-ca-bundle\") pod \"42e496d3-8d68-48a0-a0ca-058126b200a1\" (UID: \"42e496d3-8d68-48a0-a0ca-058126b200a1\") " Feb 02 21:39:43 crc kubenswrapper[4789]: I0202 21:39:43.831746 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42e496d3-8d68-48a0-a0ca-058126b200a1-kube-api-access-flplq" (OuterVolumeSpecName: "kube-api-access-flplq") pod "42e496d3-8d68-48a0-a0ca-058126b200a1" (UID: "42e496d3-8d68-48a0-a0ca-058126b200a1"). InnerVolumeSpecName "kube-api-access-flplq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:39:43 crc kubenswrapper[4789]: I0202 21:39:43.837835 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42e496d3-8d68-48a0-a0ca-058126b200a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42e496d3-8d68-48a0-a0ca-058126b200a1" (UID: "42e496d3-8d68-48a0-a0ca-058126b200a1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:39:43 crc kubenswrapper[4789]: I0202 21:39:43.840368 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42e496d3-8d68-48a0-a0ca-058126b200a1-config" (OuterVolumeSpecName: "config") pod "42e496d3-8d68-48a0-a0ca-058126b200a1" (UID: "42e496d3-8d68-48a0-a0ca-058126b200a1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:39:43 crc kubenswrapper[4789]: I0202 21:39:43.887996 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42e496d3-8d68-48a0-a0ca-058126b200a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:43 crc kubenswrapper[4789]: I0202 21:39:43.888028 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flplq\" (UniqueName: \"kubernetes.io/projected/42e496d3-8d68-48a0-a0ca-058126b200a1-kube-api-access-flplq\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:43 crc kubenswrapper[4789]: I0202 21:39:43.888040 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/42e496d3-8d68-48a0-a0ca-058126b200a1-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.298032 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-rvwgc" event={"ID":"42e496d3-8d68-48a0-a0ca-058126b200a1","Type":"ContainerDied","Data":"48e775b185e695e7d74ccde962e8346508b1ca678fd0686a16001e5a5b396a00"} Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.298324 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48e775b185e695e7d74ccde962e8346508b1ca678fd0686a16001e5a5b396a00" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.298373 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-rvwgc" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.308155 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dafa0ec8-f504-4174-b323-2a2d9f09ffb8","Type":"ContainerStarted","Data":"2fd71c7d0b224173b8bd6353601c10b697c5e97dfffbd457bae31ef0668e15e1"} Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.308208 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dafa0ec8-f504-4174-b323-2a2d9f09ffb8","Type":"ContainerStarted","Data":"a4abf86fd97c494fbe2833e450b4cf1cb5725b3d0217bc8fe62563cc198cb670"} Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.310734 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c202a904-ccae-4f90-a284-d7e2a3b5e0f7","Type":"ContainerStarted","Data":"3bca866ca5fa43c30d0609fc4b416a606fd180cd7401b978314705aae2cec2fb"} Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.312401 4789 generic.go:334] "Generic (PLEG): container finished" podID="22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3" containerID="411e1d9c2338519d38825314451a3aee0f0e1c6639158094ec4eb5bdee347fa2" exitCode=0 Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.312440 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ccdfw" event={"ID":"22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3","Type":"ContainerDied","Data":"411e1d9c2338519d38825314451a3aee0f0e1c6639158094ec4eb5bdee347fa2"} Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.314943 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"322d725c-ac03-4759-a08c-e534a70d1ec3","Type":"ContainerStarted","Data":"2a4f42975491d16296d64ff16671d19f3f7ca2af54f908f63979957cce03acbc"} Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.360090 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" 
podStartSLOduration=4.360075043 podStartE2EDuration="4.360075043s" podCreationTimestamp="2026-02-02 21:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:39:44.358045366 +0000 UTC m=+1204.653070385" watchObservedRunningTime="2026-02-02 21:39:44.360075043 +0000 UTC m=+1204.655100062" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.521761 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7vxpx"] Feb 02 21:39:44 crc kubenswrapper[4789]: E0202 21:39:44.522120 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42e496d3-8d68-48a0-a0ca-058126b200a1" containerName="neutron-db-sync" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.522137 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="42e496d3-8d68-48a0-a0ca-058126b200a1" containerName="neutron-db-sync" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.522318 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="42e496d3-8d68-48a0-a0ca-058126b200a1" containerName="neutron-db-sync" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.523322 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-7vxpx" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.530650 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7vxpx"] Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.598528 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shdxl\" (UniqueName: \"kubernetes.io/projected/6feea529-c31a-419e-92cd-46a8500def8d-kube-api-access-shdxl\") pod \"dnsmasq-dns-55f844cf75-7vxpx\" (UID: \"6feea529-c31a-419e-92cd-46a8500def8d\") " pod="openstack/dnsmasq-dns-55f844cf75-7vxpx" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.598824 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6feea529-c31a-419e-92cd-46a8500def8d-dns-svc\") pod \"dnsmasq-dns-55f844cf75-7vxpx\" (UID: \"6feea529-c31a-419e-92cd-46a8500def8d\") " pod="openstack/dnsmasq-dns-55f844cf75-7vxpx" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.598845 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6feea529-c31a-419e-92cd-46a8500def8d-config\") pod \"dnsmasq-dns-55f844cf75-7vxpx\" (UID: \"6feea529-c31a-419e-92cd-46a8500def8d\") " pod="openstack/dnsmasq-dns-55f844cf75-7vxpx" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.598961 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6feea529-c31a-419e-92cd-46a8500def8d-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-7vxpx\" (UID: \"6feea529-c31a-419e-92cd-46a8500def8d\") " pod="openstack/dnsmasq-dns-55f844cf75-7vxpx" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.598981 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6feea529-c31a-419e-92cd-46a8500def8d-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-7vxpx\" (UID: \"6feea529-c31a-419e-92cd-46a8500def8d\") " pod="openstack/dnsmasq-dns-55f844cf75-7vxpx" Feb 02 21:39:44 crc 
kubenswrapper[4789]: I0202 21:39:44.599013 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6feea529-c31a-419e-92cd-46a8500def8d-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-7vxpx\" (UID: \"6feea529-c31a-419e-92cd-46a8500def8d\") " pod="openstack/dnsmasq-dns-55f844cf75-7vxpx" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.666295 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5f5c98b5b4-lj8fk"] Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.678016 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5f5c98b5b4-lj8fk"] Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.678141 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5f5c98b5b4-lj8fk" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.680824 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.681312 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.681531 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.683231 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-wwv7s" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.704156 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6feea529-c31a-419e-92cd-46a8500def8d-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-7vxpx\" (UID: \"6feea529-c31a-419e-92cd-46a8500def8d\") " pod="openstack/dnsmasq-dns-55f844cf75-7vxpx" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.704369 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/44bf258b-7d3e-4f0f-8a92-c71ba94c022e-httpd-config\") pod \"neutron-5f5c98b5b4-lj8fk\" (UID: \"44bf258b-7d3e-4f0f-8a92-c71ba94c022e\") " pod="openstack/neutron-5f5c98b5b4-lj8fk" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.704515 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6feea529-c31a-419e-92cd-46a8500def8d-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-7vxpx\" (UID: \"6feea529-c31a-419e-92cd-46a8500def8d\") " pod="openstack/dnsmasq-dns-55f844cf75-7vxpx" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.704739 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/44bf258b-7d3e-4f0f-8a92-c71ba94c022e-config\") pod \"neutron-5f5c98b5b4-lj8fk\" (UID: \"44bf258b-7d3e-4f0f-8a92-c71ba94c022e\") " pod="openstack/neutron-5f5c98b5b4-lj8fk" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.704894 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6feea529-c31a-419e-92cd-46a8500def8d-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-7vxpx\" (UID: \"6feea529-c31a-419e-92cd-46a8500def8d\") " pod="openstack/dnsmasq-dns-55f844cf75-7vxpx" Feb 02 21:39:44 crc kubenswrapper[4789]: 
I0202 21:39:44.705048 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shdxl\" (UniqueName: \"kubernetes.io/projected/6feea529-c31a-419e-92cd-46a8500def8d-kube-api-access-shdxl\") pod \"dnsmasq-dns-55f844cf75-7vxpx\" (UID: \"6feea529-c31a-419e-92cd-46a8500def8d\") " pod="openstack/dnsmasq-dns-55f844cf75-7vxpx" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.705208 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6feea529-c31a-419e-92cd-46a8500def8d-dns-svc\") pod \"dnsmasq-dns-55f844cf75-7vxpx\" (UID: \"6feea529-c31a-419e-92cd-46a8500def8d\") " pod="openstack/dnsmasq-dns-55f844cf75-7vxpx" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.705302 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/44bf258b-7d3e-4f0f-8a92-c71ba94c022e-ovndb-tls-certs\") pod \"neutron-5f5c98b5b4-lj8fk\" (UID: \"44bf258b-7d3e-4f0f-8a92-c71ba94c022e\") " pod="openstack/neutron-5f5c98b5b4-lj8fk" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.705410 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6feea529-c31a-419e-92cd-46a8500def8d-config\") pod \"dnsmasq-dns-55f844cf75-7vxpx\" (UID: \"6feea529-c31a-419e-92cd-46a8500def8d\") " pod="openstack/dnsmasq-dns-55f844cf75-7vxpx" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.705555 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6feea529-c31a-419e-92cd-46a8500def8d-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-7vxpx\" (UID: \"6feea529-c31a-419e-92cd-46a8500def8d\") " pod="openstack/dnsmasq-dns-55f844cf75-7vxpx" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.705562 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmnbm\" (UniqueName: \"kubernetes.io/projected/44bf258b-7d3e-4f0f-8a92-c71ba94c022e-kube-api-access-lmnbm\") pod \"neutron-5f5c98b5b4-lj8fk\" (UID: \"44bf258b-7d3e-4f0f-8a92-c71ba94c022e\") " pod="openstack/neutron-5f5c98b5b4-lj8fk" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.705673 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44bf258b-7d3e-4f0f-8a92-c71ba94c022e-combined-ca-bundle\") pod \"neutron-5f5c98b5b4-lj8fk\" (UID: \"44bf258b-7d3e-4f0f-8a92-c71ba94c022e\") " pod="openstack/neutron-5f5c98b5b4-lj8fk" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.706532 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6feea529-c31a-419e-92cd-46a8500def8d-dns-svc\") pod \"dnsmasq-dns-55f844cf75-7vxpx\" (UID: \"6feea529-c31a-419e-92cd-46a8500def8d\") " pod="openstack/dnsmasq-dns-55f844cf75-7vxpx" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.706862 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6feea529-c31a-419e-92cd-46a8500def8d-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-7vxpx\" (UID: \"6feea529-c31a-419e-92cd-46a8500def8d\") " pod="openstack/dnsmasq-dns-55f844cf75-7vxpx" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.707252 4789 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6feea529-c31a-419e-92cd-46a8500def8d-config\") pod \"dnsmasq-dns-55f844cf75-7vxpx\" (UID: \"6feea529-c31a-419e-92cd-46a8500def8d\") " pod="openstack/dnsmasq-dns-55f844cf75-7vxpx" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.708536 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6feea529-c31a-419e-92cd-46a8500def8d-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-7vxpx\" (UID: \"6feea529-c31a-419e-92cd-46a8500def8d\") " pod="openstack/dnsmasq-dns-55f844cf75-7vxpx" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.732865 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shdxl\" (UniqueName: \"kubernetes.io/projected/6feea529-c31a-419e-92cd-46a8500def8d-kube-api-access-shdxl\") pod \"dnsmasq-dns-55f844cf75-7vxpx\" (UID: \"6feea529-c31a-419e-92cd-46a8500def8d\") " pod="openstack/dnsmasq-dns-55f844cf75-7vxpx" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.807488 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/44bf258b-7d3e-4f0f-8a92-c71ba94c022e-httpd-config\") pod \"neutron-5f5c98b5b4-lj8fk\" (UID: \"44bf258b-7d3e-4f0f-8a92-c71ba94c022e\") " pod="openstack/neutron-5f5c98b5b4-lj8fk" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.807545 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/44bf258b-7d3e-4f0f-8a92-c71ba94c022e-config\") pod \"neutron-5f5c98b5b4-lj8fk\" (UID: \"44bf258b-7d3e-4f0f-8a92-c71ba94c022e\") " pod="openstack/neutron-5f5c98b5b4-lj8fk" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.807586 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/44bf258b-7d3e-4f0f-8a92-c71ba94c022e-ovndb-tls-certs\") pod \"neutron-5f5c98b5b4-lj8fk\" (UID: \"44bf258b-7d3e-4f0f-8a92-c71ba94c022e\") " pod="openstack/neutron-5f5c98b5b4-lj8fk" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.807621 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmnbm\" (UniqueName: \"kubernetes.io/projected/44bf258b-7d3e-4f0f-8a92-c71ba94c022e-kube-api-access-lmnbm\") pod \"neutron-5f5c98b5b4-lj8fk\" (UID: \"44bf258b-7d3e-4f0f-8a92-c71ba94c022e\") " pod="openstack/neutron-5f5c98b5b4-lj8fk" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.807647 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44bf258b-7d3e-4f0f-8a92-c71ba94c022e-combined-ca-bundle\") pod \"neutron-5f5c98b5b4-lj8fk\" (UID: \"44bf258b-7d3e-4f0f-8a92-c71ba94c022e\") " pod="openstack/neutron-5f5c98b5b4-lj8fk" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.815289 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44bf258b-7d3e-4f0f-8a92-c71ba94c022e-combined-ca-bundle\") pod \"neutron-5f5c98b5b4-lj8fk\" (UID: \"44bf258b-7d3e-4f0f-8a92-c71ba94c022e\") " pod="openstack/neutron-5f5c98b5b4-lj8fk" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.815433 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/44bf258b-7d3e-4f0f-8a92-c71ba94c022e-ovndb-tls-certs\") pod \"neutron-5f5c98b5b4-lj8fk\" (UID: \"44bf258b-7d3e-4f0f-8a92-c71ba94c022e\") " pod="openstack/neutron-5f5c98b5b4-lj8fk" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.816030 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/44bf258b-7d3e-4f0f-8a92-c71ba94c022e-httpd-config\") pod \"neutron-5f5c98b5b4-lj8fk\" (UID: \"44bf258b-7d3e-4f0f-8a92-c71ba94c022e\") " pod="openstack/neutron-5f5c98b5b4-lj8fk" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.816069 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/44bf258b-7d3e-4f0f-8a92-c71ba94c022e-config\") pod \"neutron-5f5c98b5b4-lj8fk\" (UID: \"44bf258b-7d3e-4f0f-8a92-c71ba94c022e\") " pod="openstack/neutron-5f5c98b5b4-lj8fk" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.834204 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmnbm\" (UniqueName: \"kubernetes.io/projected/44bf258b-7d3e-4f0f-8a92-c71ba94c022e-kube-api-access-lmnbm\") pod \"neutron-5f5c98b5b4-lj8fk\" (UID: \"44bf258b-7d3e-4f0f-8a92-c71ba94c022e\") " pod="openstack/neutron-5f5c98b5b4-lj8fk" Feb 02 21:39:44 crc kubenswrapper[4789]: I0202 21:39:44.873881 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-7vxpx" Feb 02 21:39:45 crc kubenswrapper[4789]: I0202 21:39:45.008757 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5f5c98b5b4-lj8fk" Feb 02 21:39:45 crc kubenswrapper[4789]: I0202 21:39:45.354677 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dafa0ec8-f504-4174-b323-2a2d9f09ffb8","Type":"ContainerStarted","Data":"a69d96438857fda4e73a9d99a5ee151881ca4a7f917490f142b5ff27614c0058"} Feb 02 21:39:45 crc kubenswrapper[4789]: I0202 21:39:45.376666 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7vxpx"] Feb 02 21:39:45 crc kubenswrapper[4789]: I0202 21:39:45.708497 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-d9x7g" podUID="397ad5b0-86b4-40b7-b8c3-bceb1d8aa470" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.115:5353: i/o timeout" Feb 02 21:39:45 crc kubenswrapper[4789]: I0202 21:39:45.733147 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5f5c98b5b4-lj8fk"] Feb 02 21:39:45 crc kubenswrapper[4789]: I0202 21:39:45.883272 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-ccdfw" Feb 02 21:39:45 crc kubenswrapper[4789]: I0202 21:39:45.946423 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v482\" (UniqueName: \"kubernetes.io/projected/22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3-kube-api-access-6v482\") pod \"22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3\" (UID: \"22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3\") " Feb 02 21:39:45 crc kubenswrapper[4789]: I0202 21:39:45.946513 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3-config-data\") pod \"22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3\" (UID: \"22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3\") " Feb 02 21:39:45 crc kubenswrapper[4789]: I0202 21:39:45.946537 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3-scripts\") pod \"22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3\" (UID: \"22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3\") " Feb 02 21:39:45 crc kubenswrapper[4789]: I0202 21:39:45.946649 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3-combined-ca-bundle\") pod \"22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3\" (UID: \"22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3\") " Feb 02 21:39:45 crc kubenswrapper[4789]: I0202 21:39:45.946735 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3-logs\") pod \"22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3\" (UID: \"22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3\") " Feb 02 21:39:45 crc kubenswrapper[4789]: I0202 21:39:45.947712 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3-logs" (OuterVolumeSpecName: "logs") pod "22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3" (UID: "22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:39:45 crc kubenswrapper[4789]: I0202 21:39:45.952800 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3-kube-api-access-6v482" (OuterVolumeSpecName: "kube-api-access-6v482") pod "22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3" (UID: "22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3"). InnerVolumeSpecName "kube-api-access-6v482". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:39:45 crc kubenswrapper[4789]: I0202 21:39:45.954061 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3-scripts" (OuterVolumeSpecName: "scripts") pod "22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3" (UID: "22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:39:45 crc kubenswrapper[4789]: I0202 21:39:45.988437 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3" (UID: "22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:39:45 crc kubenswrapper[4789]: I0202 21:39:45.991151 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3-config-data" (OuterVolumeSpecName: "config-data") pod "22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3" (UID: "22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.050617 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v482\" (UniqueName: \"kubernetes.io/projected/22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3-kube-api-access-6v482\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.050667 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.050739 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.050753 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.050788 4789 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3-logs\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.377153 4789 generic.go:334] "Generic (PLEG): container finished" podID="ddd49c35-2c4d-4fad-a207-ca8d0be92036" containerID="d79719fe99392b86ef4b5ecada444e79deb1ac6615f195af2e8788f2390a175f" exitCode=0 Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.377283 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bvbsm" event={"ID":"ddd49c35-2c4d-4fad-a207-ca8d0be92036","Type":"ContainerDied","Data":"d79719fe99392b86ef4b5ecada444e79deb1ac6615f195af2e8788f2390a175f"} Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.380780 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f5c98b5b4-lj8fk" event={"ID":"44bf258b-7d3e-4f0f-8a92-c71ba94c022e","Type":"ContainerStarted","Data":"a2d524b577d11f277bdb94e2bccad39399ecec6cda872e4edc1e618b0c1296b3"} Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.380835 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f5c98b5b4-lj8fk" event={"ID":"44bf258b-7d3e-4f0f-8a92-c71ba94c022e","Type":"ContainerStarted","Data":"8f773f2a9b77d6911f8067507b3caf8237f65b491b720c9baee3adbfa16f899e"} Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.380851 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f5c98b5b4-lj8fk" event={"ID":"44bf258b-7d3e-4f0f-8a92-c71ba94c022e","Type":"ContainerStarted","Data":"b9b750bd22ed42942027f3f078336e77d7b0c494168cdf53f62f49381e5d2ac4"} Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.381073 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5f5c98b5b4-lj8fk" Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.382049 4789 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ccdfw" event={"ID":"22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3","Type":"ContainerDied","Data":"ca902b7ce030c627fbbd3a801e09b4aff30999be5dce1951ff2a725b75cbf2e9"} Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.382080 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca902b7ce030c627fbbd3a801e09b4aff30999be5dce1951ff2a725b75cbf2e9" Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.382131 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ccdfw" Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.382949 4789 generic.go:334] "Generic (PLEG): container finished" podID="6feea529-c31a-419e-92cd-46a8500def8d" containerID="f1ea63c8c5ff6da7af70a37c0e61ad67bc39f4d30460e668eaf6dcc015f338e7" exitCode=0 Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.384072 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-7vxpx" event={"ID":"6feea529-c31a-419e-92cd-46a8500def8d","Type":"ContainerDied","Data":"f1ea63c8c5ff6da7af70a37c0e61ad67bc39f4d30460e668eaf6dcc015f338e7"} Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.384120 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-7vxpx" event={"ID":"6feea529-c31a-419e-92cd-46a8500def8d","Type":"ContainerStarted","Data":"34d84377d39237467b8cf31b7b1802425f5ec004d522e2e1a3300bc825405a64"} Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.423283 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.423268731 podStartE2EDuration="4.423268731s" podCreationTimestamp="2026-02-02 21:39:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:39:46.418952989 +0000 UTC m=+1206.713978008" watchObservedRunningTime="2026-02-02 21:39:46.423268731 +0000 UTC m=+1206.718293750" Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.512432 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5f5c98b5b4-lj8fk" podStartSLOduration=2.512416098 podStartE2EDuration="2.512416098s" podCreationTimestamp="2026-02-02 21:39:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:39:46.457140377 +0000 UTC m=+1206.752165416" watchObservedRunningTime="2026-02-02 21:39:46.512416098 +0000 UTC m=+1206.807441117" Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.535106 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-59cf4774f6-v75lt"] Feb 02 21:39:46 crc kubenswrapper[4789]: E0202 21:39:46.536651 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3" containerName="placement-db-sync" Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.536695 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3" containerName="placement-db-sync" Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.537882 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3" containerName="placement-db-sync" Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.540430 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-59cf4774f6-v75lt" Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.548955 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.549235 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.549535 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-vs2lc" Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.550621 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.550784 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.558631 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-59cf4774f6-v75lt"] Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.562854 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-scripts\") pod \"placement-59cf4774f6-v75lt\" (UID: \"d3e262bc-73ec-4c9b-adbb-9c20b7a6286b\") " pod="openstack/placement-59cf4774f6-v75lt" Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.562893 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-combined-ca-bundle\") pod \"placement-59cf4774f6-v75lt\" (UID: \"d3e262bc-73ec-4c9b-adbb-9c20b7a6286b\") " pod="openstack/placement-59cf4774f6-v75lt" Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.562916 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-config-data\") pod \"placement-59cf4774f6-v75lt\" (UID: \"d3e262bc-73ec-4c9b-adbb-9c20b7a6286b\") " pod="openstack/placement-59cf4774f6-v75lt" Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.562933 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-internal-tls-certs\") pod \"placement-59cf4774f6-v75lt\" (UID: \"d3e262bc-73ec-4c9b-adbb-9c20b7a6286b\") " pod="openstack/placement-59cf4774f6-v75lt" Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.562996 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-logs\") pod \"placement-59cf4774f6-v75lt\" (UID: \"d3e262bc-73ec-4c9b-adbb-9c20b7a6286b\") " pod="openstack/placement-59cf4774f6-v75lt" Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.563010 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-public-tls-certs\") pod \"placement-59cf4774f6-v75lt\" (UID: \"d3e262bc-73ec-4c9b-adbb-9c20b7a6286b\") " pod="openstack/placement-59cf4774f6-v75lt" Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.563051 4789 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v42tv\" (UniqueName: \"kubernetes.io/projected/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-kube-api-access-v42tv\") pod \"placement-59cf4774f6-v75lt\" (UID: \"d3e262bc-73ec-4c9b-adbb-9c20b7a6286b\") " pod="openstack/placement-59cf4774f6-v75lt" Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.664203 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-logs\") pod \"placement-59cf4774f6-v75lt\" (UID: \"d3e262bc-73ec-4c9b-adbb-9c20b7a6286b\") " pod="openstack/placement-59cf4774f6-v75lt" Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.664245 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-public-tls-certs\") pod \"placement-59cf4774f6-v75lt\" (UID: \"d3e262bc-73ec-4c9b-adbb-9c20b7a6286b\") " pod="openstack/placement-59cf4774f6-v75lt" Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.664288 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v42tv\" (UniqueName: \"kubernetes.io/projected/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-kube-api-access-v42tv\") pod \"placement-59cf4774f6-v75lt\" (UID: \"d3e262bc-73ec-4c9b-adbb-9c20b7a6286b\") " pod="openstack/placement-59cf4774f6-v75lt" Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.664335 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-scripts\") pod \"placement-59cf4774f6-v75lt\" (UID: \"d3e262bc-73ec-4c9b-adbb-9c20b7a6286b\") " pod="openstack/placement-59cf4774f6-v75lt" Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.664357 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-combined-ca-bundle\") pod \"placement-59cf4774f6-v75lt\" (UID: \"d3e262bc-73ec-4c9b-adbb-9c20b7a6286b\") " pod="openstack/placement-59cf4774f6-v75lt" Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.664376 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-config-data\") pod \"placement-59cf4774f6-v75lt\" (UID: \"d3e262bc-73ec-4c9b-adbb-9c20b7a6286b\") " pod="openstack/placement-59cf4774f6-v75lt" Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.664394 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-internal-tls-certs\") pod \"placement-59cf4774f6-v75lt\" (UID: \"d3e262bc-73ec-4c9b-adbb-9c20b7a6286b\") " pod="openstack/placement-59cf4774f6-v75lt" Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.666405 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-logs\") pod \"placement-59cf4774f6-v75lt\" (UID: \"d3e262bc-73ec-4c9b-adbb-9c20b7a6286b\") " pod="openstack/placement-59cf4774f6-v75lt" Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.673387 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-combined-ca-bundle\") pod \"placement-59cf4774f6-v75lt\" (UID: \"d3e262bc-73ec-4c9b-adbb-9c20b7a6286b\") " pod="openstack/placement-59cf4774f6-v75lt" Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.674411 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-internal-tls-certs\") pod \"placement-59cf4774f6-v75lt\" (UID: \"d3e262bc-73ec-4c9b-adbb-9c20b7a6286b\") " pod="openstack/placement-59cf4774f6-v75lt" Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.676490 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-public-tls-certs\") pod \"placement-59cf4774f6-v75lt\" (UID: \"d3e262bc-73ec-4c9b-adbb-9c20b7a6286b\") " pod="openstack/placement-59cf4774f6-v75lt" Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.682284 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-scripts\") pod \"placement-59cf4774f6-v75lt\" (UID: \"d3e262bc-73ec-4c9b-adbb-9c20b7a6286b\") " pod="openstack/placement-59cf4774f6-v75lt" Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.682993 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-config-data\") pod \"placement-59cf4774f6-v75lt\" (UID: \"d3e262bc-73ec-4c9b-adbb-9c20b7a6286b\") " pod="openstack/placement-59cf4774f6-v75lt" Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.686111 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v42tv\" (UniqueName: \"kubernetes.io/projected/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-kube-api-access-v42tv\") pod \"placement-59cf4774f6-v75lt\" (UID: \"d3e262bc-73ec-4c9b-adbb-9c20b7a6286b\") " pod="openstack/placement-59cf4774f6-v75lt" Feb 02 21:39:46 crc kubenswrapper[4789]: I0202 21:39:46.885912 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-59cf4774f6-v75lt" Feb 02 21:39:47 crc kubenswrapper[4789]: I0202 21:39:47.129140 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7c6f8769f9-9q4zq"] Feb 02 21:39:47 crc kubenswrapper[4789]: I0202 21:39:47.130703 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7c6f8769f9-9q4zq" Feb 02 21:39:47 crc kubenswrapper[4789]: I0202 21:39:47.132812 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 02 21:39:47 crc kubenswrapper[4789]: I0202 21:39:47.133020 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 02 21:39:47 crc kubenswrapper[4789]: I0202 21:39:47.142302 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7c6f8769f9-9q4zq"] Feb 02 21:39:47 crc kubenswrapper[4789]: I0202 21:39:47.178430 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e65e509-47bb-47f7-b129-74222d242dc8-public-tls-certs\") pod \"neutron-7c6f8769f9-9q4zq\" (UID: \"0e65e509-47bb-47f7-b129-74222d242dc8\") " pod="openstack/neutron-7c6f8769f9-9q4zq" Feb 02 21:39:47 crc kubenswrapper[4789]: I0202 21:39:47.178476 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e65e509-47bb-47f7-b129-74222d242dc8-combined-ca-bundle\") pod \"neutron-7c6f8769f9-9q4zq\" (UID: \"0e65e509-47bb-47f7-b129-74222d242dc8\") " pod="openstack/neutron-7c6f8769f9-9q4zq" Feb 02 21:39:47 crc kubenswrapper[4789]: I0202 21:39:47.178553 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pwmq\" (UniqueName: \"kubernetes.io/projected/0e65e509-47bb-47f7-b129-74222d242dc8-kube-api-access-8pwmq\") pod \"neutron-7c6f8769f9-9q4zq\" (UID: \"0e65e509-47bb-47f7-b129-74222d242dc8\") " pod="openstack/neutron-7c6f8769f9-9q4zq" Feb 02 21:39:47 crc kubenswrapper[4789]: I0202 21:39:47.178580 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0e65e509-47bb-47f7-b129-74222d242dc8-httpd-config\") pod \"neutron-7c6f8769f9-9q4zq\" (UID: \"0e65e509-47bb-47f7-b129-74222d242dc8\") " pod="openstack/neutron-7c6f8769f9-9q4zq" Feb 02 21:39:47 crc kubenswrapper[4789]: I0202 21:39:47.178622 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e65e509-47bb-47f7-b129-74222d242dc8-internal-tls-certs\") pod \"neutron-7c6f8769f9-9q4zq\" (UID: \"0e65e509-47bb-47f7-b129-74222d242dc8\") " pod="openstack/neutron-7c6f8769f9-9q4zq" Feb 02 21:39:47 crc kubenswrapper[4789]: I0202 21:39:47.178649 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0e65e509-47bb-47f7-b129-74222d242dc8-config\") pod \"neutron-7c6f8769f9-9q4zq\" (UID: \"0e65e509-47bb-47f7-b129-74222d242dc8\") " pod="openstack/neutron-7c6f8769f9-9q4zq" Feb 02 21:39:47 crc kubenswrapper[4789]: I0202 21:39:47.178772 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e65e509-47bb-47f7-b129-74222d242dc8-ovndb-tls-certs\") pod \"neutron-7c6f8769f9-9q4zq\" (UID: \"0e65e509-47bb-47f7-b129-74222d242dc8\") " pod="openstack/neutron-7c6f8769f9-9q4zq" Feb 02 21:39:47 crc kubenswrapper[4789]: I0202 21:39:47.279969 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pwmq\" (UniqueName: 
\"kubernetes.io/projected/0e65e509-47bb-47f7-b129-74222d242dc8-kube-api-access-8pwmq\") pod \"neutron-7c6f8769f9-9q4zq\" (UID: \"0e65e509-47bb-47f7-b129-74222d242dc8\") " pod="openstack/neutron-7c6f8769f9-9q4zq" Feb 02 21:39:47 crc kubenswrapper[4789]: I0202 21:39:47.280021 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0e65e509-47bb-47f7-b129-74222d242dc8-httpd-config\") pod \"neutron-7c6f8769f9-9q4zq\" (UID: \"0e65e509-47bb-47f7-b129-74222d242dc8\") " pod="openstack/neutron-7c6f8769f9-9q4zq" Feb 02 21:39:47 crc kubenswrapper[4789]: I0202 21:39:47.280049 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e65e509-47bb-47f7-b129-74222d242dc8-internal-tls-certs\") pod \"neutron-7c6f8769f9-9q4zq\" (UID: \"0e65e509-47bb-47f7-b129-74222d242dc8\") " pod="openstack/neutron-7c6f8769f9-9q4zq" Feb 02 21:39:47 crc kubenswrapper[4789]: I0202 21:39:47.280077 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0e65e509-47bb-47f7-b129-74222d242dc8-config\") pod \"neutron-7c6f8769f9-9q4zq\" (UID: \"0e65e509-47bb-47f7-b129-74222d242dc8\") " pod="openstack/neutron-7c6f8769f9-9q4zq" Feb 02 21:39:47 crc kubenswrapper[4789]: I0202 21:39:47.280127 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e65e509-47bb-47f7-b129-74222d242dc8-ovndb-tls-certs\") pod \"neutron-7c6f8769f9-9q4zq\" (UID: \"0e65e509-47bb-47f7-b129-74222d242dc8\") " pod="openstack/neutron-7c6f8769f9-9q4zq" Feb 02 21:39:47 crc kubenswrapper[4789]: I0202 21:39:47.280169 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e65e509-47bb-47f7-b129-74222d242dc8-public-tls-certs\") pod \"neutron-7c6f8769f9-9q4zq\" (UID: \"0e65e509-47bb-47f7-b129-74222d242dc8\") " pod="openstack/neutron-7c6f8769f9-9q4zq" Feb 02 21:39:47 crc kubenswrapper[4789]: I0202 21:39:47.280189 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e65e509-47bb-47f7-b129-74222d242dc8-combined-ca-bundle\") pod \"neutron-7c6f8769f9-9q4zq\" (UID: \"0e65e509-47bb-47f7-b129-74222d242dc8\") " pod="openstack/neutron-7c6f8769f9-9q4zq" Feb 02 21:39:47 crc kubenswrapper[4789]: I0202 21:39:47.286460 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0e65e509-47bb-47f7-b129-74222d242dc8-httpd-config\") pod \"neutron-7c6f8769f9-9q4zq\" (UID: \"0e65e509-47bb-47f7-b129-74222d242dc8\") " pod="openstack/neutron-7c6f8769f9-9q4zq" Feb 02 21:39:47 crc kubenswrapper[4789]: I0202 21:39:47.287183 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e65e509-47bb-47f7-b129-74222d242dc8-internal-tls-certs\") pod \"neutron-7c6f8769f9-9q4zq\" (UID: \"0e65e509-47bb-47f7-b129-74222d242dc8\") " pod="openstack/neutron-7c6f8769f9-9q4zq" Feb 02 21:39:47 crc kubenswrapper[4789]: I0202 21:39:47.287982 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0e65e509-47bb-47f7-b129-74222d242dc8-config\") pod \"neutron-7c6f8769f9-9q4zq\" (UID: \"0e65e509-47bb-47f7-b129-74222d242dc8\") " 
pod="openstack/neutron-7c6f8769f9-9q4zq" Feb 02 21:39:47 crc kubenswrapper[4789]: I0202 21:39:47.288390 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e65e509-47bb-47f7-b129-74222d242dc8-ovndb-tls-certs\") pod \"neutron-7c6f8769f9-9q4zq\" (UID: \"0e65e509-47bb-47f7-b129-74222d242dc8\") " pod="openstack/neutron-7c6f8769f9-9q4zq" Feb 02 21:39:47 crc kubenswrapper[4789]: I0202 21:39:47.288530 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e65e509-47bb-47f7-b129-74222d242dc8-combined-ca-bundle\") pod \"neutron-7c6f8769f9-9q4zq\" (UID: \"0e65e509-47bb-47f7-b129-74222d242dc8\") " pod="openstack/neutron-7c6f8769f9-9q4zq" Feb 02 21:39:47 crc kubenswrapper[4789]: I0202 21:39:47.297480 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e65e509-47bb-47f7-b129-74222d242dc8-public-tls-certs\") pod \"neutron-7c6f8769f9-9q4zq\" (UID: \"0e65e509-47bb-47f7-b129-74222d242dc8\") " pod="openstack/neutron-7c6f8769f9-9q4zq" Feb 02 21:39:47 crc kubenswrapper[4789]: I0202 21:39:47.297852 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pwmq\" (UniqueName: \"kubernetes.io/projected/0e65e509-47bb-47f7-b129-74222d242dc8-kube-api-access-8pwmq\") pod \"neutron-7c6f8769f9-9q4zq\" (UID: \"0e65e509-47bb-47f7-b129-74222d242dc8\") " pod="openstack/neutron-7c6f8769f9-9q4zq" Feb 02 21:39:47 crc kubenswrapper[4789]: I0202 21:39:47.351723 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-59cf4774f6-v75lt"] Feb 02 21:39:47 crc kubenswrapper[4789]: W0202 21:39:47.370009 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3e262bc_73ec_4c9b_adbb_9c20b7a6286b.slice/crio-8b9760bc60b27ffa55098d8ab2300ec22e9f748839b4a0e941465b24425cad89 WatchSource:0}: Error finding container 8b9760bc60b27ffa55098d8ab2300ec22e9f748839b4a0e941465b24425cad89: Status 404 returned error can't find the container with id 8b9760bc60b27ffa55098d8ab2300ec22e9f748839b4a0e941465b24425cad89 Feb 02 21:39:47 crc kubenswrapper[4789]: I0202 21:39:47.437470 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-59cf4774f6-v75lt" event={"ID":"d3e262bc-73ec-4c9b-adbb-9c20b7a6286b","Type":"ContainerStarted","Data":"8b9760bc60b27ffa55098d8ab2300ec22e9f748839b4a0e941465b24425cad89"} Feb 02 21:39:47 crc kubenswrapper[4789]: I0202 21:39:47.440573 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-7vxpx" event={"ID":"6feea529-c31a-419e-92cd-46a8500def8d","Type":"ContainerStarted","Data":"334df413b266b7e23094c3bbc93f06a2583f26ce70a6c07b68919d209fdbde59"} Feb 02 21:39:47 crc kubenswrapper[4789]: I0202 21:39:47.466547 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7c6f8769f9-9q4zq" Feb 02 21:39:48 crc kubenswrapper[4789]: I0202 21:39:48.449667 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-7vxpx" Feb 02 21:39:48 crc kubenswrapper[4789]: I0202 21:39:48.529286 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-7vxpx" podStartSLOduration=4.529264418 podStartE2EDuration="4.529264418s" podCreationTimestamp="2026-02-02 21:39:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:39:48.491049659 +0000 UTC m=+1208.786074688" watchObservedRunningTime="2026-02-02 21:39:48.529264418 +0000 UTC m=+1208.824289427" Feb 02 21:39:48 crc kubenswrapper[4789]: I0202 21:39:48.923538 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-bvbsm" Feb 02 21:39:49 crc kubenswrapper[4789]: I0202 21:39:49.010152 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddd49c35-2c4d-4fad-a207-ca8d0be92036-combined-ca-bundle\") pod \"ddd49c35-2c4d-4fad-a207-ca8d0be92036\" (UID: \"ddd49c35-2c4d-4fad-a207-ca8d0be92036\") " Feb 02 21:39:49 crc kubenswrapper[4789]: I0202 21:39:49.010232 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddd49c35-2c4d-4fad-a207-ca8d0be92036-config-data\") pod \"ddd49c35-2c4d-4fad-a207-ca8d0be92036\" (UID: \"ddd49c35-2c4d-4fad-a207-ca8d0be92036\") " Feb 02 21:39:49 crc kubenswrapper[4789]: I0202 21:39:49.010275 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qpf2\" (UniqueName: \"kubernetes.io/projected/ddd49c35-2c4d-4fad-a207-ca8d0be92036-kube-api-access-2qpf2\") pod \"ddd49c35-2c4d-4fad-a207-ca8d0be92036\" (UID: \"ddd49c35-2c4d-4fad-a207-ca8d0be92036\") " Feb 02 21:39:49 crc kubenswrapper[4789]: I0202 21:39:49.010300 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddd49c35-2c4d-4fad-a207-ca8d0be92036-scripts\") pod \"ddd49c35-2c4d-4fad-a207-ca8d0be92036\" (UID: \"ddd49c35-2c4d-4fad-a207-ca8d0be92036\") " Feb 02 21:39:49 crc kubenswrapper[4789]: I0202 21:39:49.010364 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ddd49c35-2c4d-4fad-a207-ca8d0be92036-credential-keys\") pod \"ddd49c35-2c4d-4fad-a207-ca8d0be92036\" (UID: \"ddd49c35-2c4d-4fad-a207-ca8d0be92036\") " Feb 02 21:39:49 crc kubenswrapper[4789]: I0202 21:39:49.010457 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ddd49c35-2c4d-4fad-a207-ca8d0be92036-fernet-keys\") pod \"ddd49c35-2c4d-4fad-a207-ca8d0be92036\" (UID: \"ddd49c35-2c4d-4fad-a207-ca8d0be92036\") " Feb 02 21:39:49 crc kubenswrapper[4789]: I0202 21:39:49.017132 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddd49c35-2c4d-4fad-a207-ca8d0be92036-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "ddd49c35-2c4d-4fad-a207-ca8d0be92036" (UID: "ddd49c35-2c4d-4fad-a207-ca8d0be92036"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:39:49 crc kubenswrapper[4789]: I0202 21:39:49.017453 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddd49c35-2c4d-4fad-a207-ca8d0be92036-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ddd49c35-2c4d-4fad-a207-ca8d0be92036" (UID: "ddd49c35-2c4d-4fad-a207-ca8d0be92036"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:39:49 crc kubenswrapper[4789]: I0202 21:39:49.018700 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddd49c35-2c4d-4fad-a207-ca8d0be92036-scripts" (OuterVolumeSpecName: "scripts") pod "ddd49c35-2c4d-4fad-a207-ca8d0be92036" (UID: "ddd49c35-2c4d-4fad-a207-ca8d0be92036"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:39:49 crc kubenswrapper[4789]: I0202 21:39:49.024740 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddd49c35-2c4d-4fad-a207-ca8d0be92036-kube-api-access-2qpf2" (OuterVolumeSpecName: "kube-api-access-2qpf2") pod "ddd49c35-2c4d-4fad-a207-ca8d0be92036" (UID: "ddd49c35-2c4d-4fad-a207-ca8d0be92036"). InnerVolumeSpecName "kube-api-access-2qpf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:39:49 crc kubenswrapper[4789]: I0202 21:39:49.046037 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddd49c35-2c4d-4fad-a207-ca8d0be92036-config-data" (OuterVolumeSpecName: "config-data") pod "ddd49c35-2c4d-4fad-a207-ca8d0be92036" (UID: "ddd49c35-2c4d-4fad-a207-ca8d0be92036"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:39:49 crc kubenswrapper[4789]: I0202 21:39:49.049866 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddd49c35-2c4d-4fad-a207-ca8d0be92036-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ddd49c35-2c4d-4fad-a207-ca8d0be92036" (UID: "ddd49c35-2c4d-4fad-a207-ca8d0be92036"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:39:49 crc kubenswrapper[4789]: I0202 21:39:49.112455 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddd49c35-2c4d-4fad-a207-ca8d0be92036-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:49 crc kubenswrapper[4789]: I0202 21:39:49.112561 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qpf2\" (UniqueName: \"kubernetes.io/projected/ddd49c35-2c4d-4fad-a207-ca8d0be92036-kube-api-access-2qpf2\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:49 crc kubenswrapper[4789]: I0202 21:39:49.112652 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddd49c35-2c4d-4fad-a207-ca8d0be92036-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:49 crc kubenswrapper[4789]: I0202 21:39:49.112709 4789 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ddd49c35-2c4d-4fad-a207-ca8d0be92036-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:49 crc kubenswrapper[4789]: I0202 21:39:49.112781 4789 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ddd49c35-2c4d-4fad-a207-ca8d0be92036-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:49 crc kubenswrapper[4789]: I0202 21:39:49.112857 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddd49c35-2c4d-4fad-a207-ca8d0be92036-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:49 crc kubenswrapper[4789]: I0202 21:39:49.441633 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7c6f8769f9-9q4zq"] Feb 02 21:39:49 crc kubenswrapper[4789]: I0202 21:39:49.462406 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bvbsm" event={"ID":"ddd49c35-2c4d-4fad-a207-ca8d0be92036","Type":"ContainerDied","Data":"9436e8c6133005abb800e0a952854974dc13ee44ed6f65b825e70deed0f4fc5e"} Feb 02 21:39:49 crc kubenswrapper[4789]: I0202 21:39:49.462447 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-bvbsm" Feb 02 21:39:49 crc kubenswrapper[4789]: I0202 21:39:49.462457 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9436e8c6133005abb800e0a952854974dc13ee44ed6f65b825e70deed0f4fc5e" Feb 02 21:39:49 crc kubenswrapper[4789]: I0202 21:39:49.464255 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-59cf4774f6-v75lt" event={"ID":"d3e262bc-73ec-4c9b-adbb-9c20b7a6286b","Type":"ContainerStarted","Data":"b05e234fc983691d25347715e5ae6456b8853c259473be0f1c5126f4b8c3898a"} Feb 02 21:39:50 crc kubenswrapper[4789]: I0202 21:39:50.105133 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-595cf58668-hfkcq"] Feb 02 21:39:50 crc kubenswrapper[4789]: E0202 21:39:50.105792 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddd49c35-2c4d-4fad-a207-ca8d0be92036" containerName="keystone-bootstrap" Feb 02 21:39:50 crc kubenswrapper[4789]: I0202 21:39:50.105808 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddd49c35-2c4d-4fad-a207-ca8d0be92036" containerName="keystone-bootstrap" Feb 02 21:39:50 crc kubenswrapper[4789]: I0202 21:39:50.105968 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddd49c35-2c4d-4fad-a207-ca8d0be92036" containerName="keystone-bootstrap" Feb 02 21:39:50 crc kubenswrapper[4789]: I0202 21:39:50.106518 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-595cf58668-hfkcq" Feb 02 21:39:50 crc kubenswrapper[4789]: I0202 21:39:50.109285 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 21:39:50 crc kubenswrapper[4789]: I0202 21:39:50.111092 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 02 21:39:50 crc kubenswrapper[4789]: I0202 21:39:50.111541 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 21:39:50 crc kubenswrapper[4789]: I0202 21:39:50.112061 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2xd6t" Feb 02 21:39:50 crc kubenswrapper[4789]: I0202 21:39:50.112085 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 21:39:50 crc kubenswrapper[4789]: I0202 21:39:50.118427 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 02 21:39:50 crc kubenswrapper[4789]: I0202 21:39:50.127495 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-595cf58668-hfkcq"] Feb 02 21:39:50 crc kubenswrapper[4789]: I0202 21:39:50.235881 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-public-tls-certs\") pod \"keystone-595cf58668-hfkcq\" (UID: \"0f86f59c-9db0-4580-a8f3-2d3fe558c905\") " pod="openstack/keystone-595cf58668-hfkcq" Feb 02 21:39:50 crc kubenswrapper[4789]: I0202 21:39:50.235980 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-credential-keys\") pod \"keystone-595cf58668-hfkcq\" (UID: \"0f86f59c-9db0-4580-a8f3-2d3fe558c905\") " pod="openstack/keystone-595cf58668-hfkcq" Feb 02 21:39:50 crc kubenswrapper[4789]: I0202 
21:39:50.236009 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-internal-tls-certs\") pod \"keystone-595cf58668-hfkcq\" (UID: \"0f86f59c-9db0-4580-a8f3-2d3fe558c905\") " pod="openstack/keystone-595cf58668-hfkcq" Feb 02 21:39:50 crc kubenswrapper[4789]: I0202 21:39:50.236030 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-combined-ca-bundle\") pod \"keystone-595cf58668-hfkcq\" (UID: \"0f86f59c-9db0-4580-a8f3-2d3fe558c905\") " pod="openstack/keystone-595cf58668-hfkcq" Feb 02 21:39:50 crc kubenswrapper[4789]: I0202 21:39:50.236051 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-config-data\") pod \"keystone-595cf58668-hfkcq\" (UID: \"0f86f59c-9db0-4580-a8f3-2d3fe558c905\") " pod="openstack/keystone-595cf58668-hfkcq" Feb 02 21:39:50 crc kubenswrapper[4789]: I0202 21:39:50.236071 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-scripts\") pod \"keystone-595cf58668-hfkcq\" (UID: \"0f86f59c-9db0-4580-a8f3-2d3fe558c905\") " pod="openstack/keystone-595cf58668-hfkcq" Feb 02 21:39:50 crc kubenswrapper[4789]: I0202 21:39:50.236099 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-fernet-keys\") pod \"keystone-595cf58668-hfkcq\" (UID: \"0f86f59c-9db0-4580-a8f3-2d3fe558c905\") " pod="openstack/keystone-595cf58668-hfkcq" Feb 02 21:39:50 crc kubenswrapper[4789]: I0202 21:39:50.236149 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzkpp\" (UniqueName: \"kubernetes.io/projected/0f86f59c-9db0-4580-a8f3-2d3fe558c905-kube-api-access-wzkpp\") pod \"keystone-595cf58668-hfkcq\" (UID: \"0f86f59c-9db0-4580-a8f3-2d3fe558c905\") " pod="openstack/keystone-595cf58668-hfkcq" Feb 02 21:39:50 crc kubenswrapper[4789]: I0202 21:39:50.337983 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-public-tls-certs\") pod \"keystone-595cf58668-hfkcq\" (UID: \"0f86f59c-9db0-4580-a8f3-2d3fe558c905\") " pod="openstack/keystone-595cf58668-hfkcq" Feb 02 21:39:50 crc kubenswrapper[4789]: I0202 21:39:50.338083 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-credential-keys\") pod \"keystone-595cf58668-hfkcq\" (UID: \"0f86f59c-9db0-4580-a8f3-2d3fe558c905\") " pod="openstack/keystone-595cf58668-hfkcq" Feb 02 21:39:50 crc kubenswrapper[4789]: I0202 21:39:50.338114 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-internal-tls-certs\") pod \"keystone-595cf58668-hfkcq\" (UID: \"0f86f59c-9db0-4580-a8f3-2d3fe558c905\") " pod="openstack/keystone-595cf58668-hfkcq" Feb 02 21:39:50 crc kubenswrapper[4789]: I0202 
21:39:50.338131 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-combined-ca-bundle\") pod \"keystone-595cf58668-hfkcq\" (UID: \"0f86f59c-9db0-4580-a8f3-2d3fe558c905\") " pod="openstack/keystone-595cf58668-hfkcq" Feb 02 21:39:50 crc kubenswrapper[4789]: I0202 21:39:50.338151 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-config-data\") pod \"keystone-595cf58668-hfkcq\" (UID: \"0f86f59c-9db0-4580-a8f3-2d3fe558c905\") " pod="openstack/keystone-595cf58668-hfkcq" Feb 02 21:39:50 crc kubenswrapper[4789]: I0202 21:39:50.338173 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-scripts\") pod \"keystone-595cf58668-hfkcq\" (UID: \"0f86f59c-9db0-4580-a8f3-2d3fe558c905\") " pod="openstack/keystone-595cf58668-hfkcq" Feb 02 21:39:50 crc kubenswrapper[4789]: I0202 21:39:50.338199 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-fernet-keys\") pod \"keystone-595cf58668-hfkcq\" (UID: \"0f86f59c-9db0-4580-a8f3-2d3fe558c905\") " pod="openstack/keystone-595cf58668-hfkcq" Feb 02 21:39:50 crc kubenswrapper[4789]: I0202 21:39:50.338249 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzkpp\" (UniqueName: \"kubernetes.io/projected/0f86f59c-9db0-4580-a8f3-2d3fe558c905-kube-api-access-wzkpp\") pod \"keystone-595cf58668-hfkcq\" (UID: \"0f86f59c-9db0-4580-a8f3-2d3fe558c905\") " pod="openstack/keystone-595cf58668-hfkcq" Feb 02 21:39:50 crc kubenswrapper[4789]: I0202 21:39:50.343896 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-scripts\") pod \"keystone-595cf58668-hfkcq\" (UID: \"0f86f59c-9db0-4580-a8f3-2d3fe558c905\") " pod="openstack/keystone-595cf58668-hfkcq" Feb 02 21:39:50 crc kubenswrapper[4789]: I0202 21:39:50.344301 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-internal-tls-certs\") pod \"keystone-595cf58668-hfkcq\" (UID: \"0f86f59c-9db0-4580-a8f3-2d3fe558c905\") " pod="openstack/keystone-595cf58668-hfkcq" Feb 02 21:39:50 crc kubenswrapper[4789]: I0202 21:39:50.344354 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-config-data\") pod \"keystone-595cf58668-hfkcq\" (UID: \"0f86f59c-9db0-4580-a8f3-2d3fe558c905\") " pod="openstack/keystone-595cf58668-hfkcq" Feb 02 21:39:50 crc kubenswrapper[4789]: I0202 21:39:50.344587 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-credential-keys\") pod \"keystone-595cf58668-hfkcq\" (UID: \"0f86f59c-9db0-4580-a8f3-2d3fe558c905\") " pod="openstack/keystone-595cf58668-hfkcq" Feb 02 21:39:50 crc kubenswrapper[4789]: I0202 21:39:50.345313 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-combined-ca-bundle\") pod \"keystone-595cf58668-hfkcq\" (UID: \"0f86f59c-9db0-4580-a8f3-2d3fe558c905\") " pod="openstack/keystone-595cf58668-hfkcq" Feb 02 21:39:50 crc kubenswrapper[4789]: I0202 21:39:50.345607 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-public-tls-certs\") pod \"keystone-595cf58668-hfkcq\" (UID: \"0f86f59c-9db0-4580-a8f3-2d3fe558c905\") " pod="openstack/keystone-595cf58668-hfkcq" Feb 02 21:39:50 crc kubenswrapper[4789]: I0202 21:39:50.352158 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-fernet-keys\") pod \"keystone-595cf58668-hfkcq\" (UID: \"0f86f59c-9db0-4580-a8f3-2d3fe558c905\") " pod="openstack/keystone-595cf58668-hfkcq" Feb 02 21:39:50 crc kubenswrapper[4789]: I0202 21:39:50.358063 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzkpp\" (UniqueName: \"kubernetes.io/projected/0f86f59c-9db0-4580-a8f3-2d3fe558c905-kube-api-access-wzkpp\") pod \"keystone-595cf58668-hfkcq\" (UID: \"0f86f59c-9db0-4580-a8f3-2d3fe558c905\") " pod="openstack/keystone-595cf58668-hfkcq" Feb 02 21:39:50 crc kubenswrapper[4789]: I0202 21:39:50.422825 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-595cf58668-hfkcq" Feb 02 21:39:50 crc kubenswrapper[4789]: I0202 21:39:50.885548 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 02 21:39:50 crc kubenswrapper[4789]: I0202 21:39:50.885628 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 02 21:39:50 crc kubenswrapper[4789]: I0202 21:39:50.921088 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 02 21:39:50 crc kubenswrapper[4789]: I0202 21:39:50.955082 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 02 21:39:51 crc kubenswrapper[4789]: I0202 21:39:51.481349 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 02 21:39:51 crc kubenswrapper[4789]: I0202 21:39:51.481413 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 02 21:39:52 crc kubenswrapper[4789]: I0202 21:39:52.698154 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 02 21:39:52 crc kubenswrapper[4789]: I0202 21:39:52.698194 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 02 21:39:52 crc kubenswrapper[4789]: I0202 21:39:52.729995 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 02 21:39:52 crc kubenswrapper[4789]: W0202 21:39:52.744937 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e65e509_47bb_47f7_b129_74222d242dc8.slice/crio-8c2cc135c482a5b6aea797e2d0eea8ed8179c9289bfb8d0933a032ac52b26e1f WatchSource:0}: Error finding container 
Feb 02 21:39:52 crc kubenswrapper[4789]: W0202 21:39:52.744937 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e65e509_47bb_47f7_b129_74222d242dc8.slice/crio-8c2cc135c482a5b6aea797e2d0eea8ed8179c9289bfb8d0933a032ac52b26e1f WatchSource:0}: Error finding container 8c2cc135c482a5b6aea797e2d0eea8ed8179c9289bfb8d0933a032ac52b26e1f: Status 404 returned error can't find the container with id 8c2cc135c482a5b6aea797e2d0eea8ed8179c9289bfb8d0933a032ac52b26e1f
Feb 02 21:39:52 crc kubenswrapper[4789]: I0202 21:39:52.751352 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 02 21:39:52 crc kubenswrapper[4789]: I0202 21:39:52.842033 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 21:39:52 crc kubenswrapper[4789]: I0202 21:39:52.842116 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 21:39:53 crc kubenswrapper[4789]: I0202 21:39:53.286366 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-595cf58668-hfkcq"]
Feb 02 21:39:53 crc kubenswrapper[4789]: W0202 21:39:53.298249 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f86f59c_9db0_4580_a8f3_2d3fe558c905.slice/crio-7ab85332c67cad5b827eae1955322c4dc6506ff8b855cc9cd54b62c92d431d33 WatchSource:0}: Error finding container 7ab85332c67cad5b827eae1955322c4dc6506ff8b855cc9cd54b62c92d431d33: Status 404 returned error can't find the container with id 7ab85332c67cad5b827eae1955322c4dc6506ff8b855cc9cd54b62c92d431d33
Feb 02 21:39:53 crc kubenswrapper[4789]: I0202 21:39:53.389943 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 02 21:39:53 crc kubenswrapper[4789]: I0202 21:39:53.391857 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 02 21:39:53 crc kubenswrapper[4789]: I0202 21:39:53.506810 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c202a904-ccae-4f90-a284-d7e2a3b5e0f7","Type":"ContainerStarted","Data":"6cae3582ac17969f7c322327aa9fd05ff26ce7194abd35666492ea00cba10c41"}
Feb 02 21:39:53 crc kubenswrapper[4789]: I0202 21:39:53.508998 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-595cf58668-hfkcq" event={"ID":"0f86f59c-9db0-4580-a8f3-2d3fe558c905","Type":"ContainerStarted","Data":"bbeb0176ca8c9142d15e473d306a3fc80a2f4568a8a86ce41b47afe19830a87f"}
Feb 02 21:39:53 crc kubenswrapper[4789]: I0202 21:39:53.509050 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-595cf58668-hfkcq" event={"ID":"0f86f59c-9db0-4580-a8f3-2d3fe558c905","Type":"ContainerStarted","Data":"7ab85332c67cad5b827eae1955322c4dc6506ff8b855cc9cd54b62c92d431d33"}
Feb 02 21:39:53 crc kubenswrapper[4789]: I0202 21:39:53.509119 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-595cf58668-hfkcq"
event={"ID":"d3e262bc-73ec-4c9b-adbb-9c20b7a6286b","Type":"ContainerStarted","Data":"45cdfbba5277d68703ad6c24734a170b3eb079fbf6417f6b1a3641194e032e79"} Feb 02 21:39:53 crc kubenswrapper[4789]: I0202 21:39:53.513758 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-59cf4774f6-v75lt" Feb 02 21:39:53 crc kubenswrapper[4789]: I0202 21:39:53.513809 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-59cf4774f6-v75lt" Feb 02 21:39:53 crc kubenswrapper[4789]: I0202 21:39:53.518752 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c6f8769f9-9q4zq" event={"ID":"0e65e509-47bb-47f7-b129-74222d242dc8","Type":"ContainerStarted","Data":"6b1800b5f6de7d72f18adbb4dbe5b8f41c0f5c63326f6f70efaf329bc7e0debb"} Feb 02 21:39:53 crc kubenswrapper[4789]: I0202 21:39:53.518808 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c6f8769f9-9q4zq" event={"ID":"0e65e509-47bb-47f7-b129-74222d242dc8","Type":"ContainerStarted","Data":"2158305fa050236e159ac1a89f405cae5ff72d7510d4a2c030187f59ccf546ba"} Feb 02 21:39:53 crc kubenswrapper[4789]: I0202 21:39:53.518822 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c6f8769f9-9q4zq" event={"ID":"0e65e509-47bb-47f7-b129-74222d242dc8","Type":"ContainerStarted","Data":"8c2cc135c482a5b6aea797e2d0eea8ed8179c9289bfb8d0933a032ac52b26e1f"} Feb 02 21:39:53 crc kubenswrapper[4789]: I0202 21:39:53.519594 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 02 21:39:53 crc kubenswrapper[4789]: I0202 21:39:53.519621 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 02 21:39:53 crc kubenswrapper[4789]: I0202 21:39:53.574935 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-59cf4774f6-v75lt" podStartSLOduration=7.574914853 podStartE2EDuration="7.574914853s" podCreationTimestamp="2026-02-02 21:39:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:39:53.566511216 +0000 UTC m=+1213.861536235" watchObservedRunningTime="2026-02-02 21:39:53.574914853 +0000 UTC m=+1213.869939872" Feb 02 21:39:53 crc kubenswrapper[4789]: I0202 21:39:53.575539 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-595cf58668-hfkcq" podStartSLOduration=3.575534421 podStartE2EDuration="3.575534421s" podCreationTimestamp="2026-02-02 21:39:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:39:53.54255414 +0000 UTC m=+1213.837579159" watchObservedRunningTime="2026-02-02 21:39:53.575534421 +0000 UTC m=+1213.870559440" Feb 02 21:39:53 crc kubenswrapper[4789]: I0202 21:39:53.591273 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7c6f8769f9-9q4zq" podStartSLOduration=6.591253644 podStartE2EDuration="6.591253644s" podCreationTimestamp="2026-02-02 21:39:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:39:53.589878426 +0000 UTC m=+1213.884903445" watchObservedRunningTime="2026-02-02 21:39:53.591253644 +0000 UTC m=+1213.886278663" Feb 02 21:39:54 crc kubenswrapper[4789]: I0202 21:39:54.531065 
Feb 02 21:39:54 crc kubenswrapper[4789]: I0202 21:39:54.531065 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7c6f8769f9-9q4zq"
Feb 02 21:39:54 crc kubenswrapper[4789]: I0202 21:39:54.875904 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-7vxpx"
Feb 02 21:39:54 crc kubenswrapper[4789]: I0202 21:39:54.946853 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-82gwc"]
Feb 02 21:39:54 crc kubenswrapper[4789]: I0202 21:39:54.947098 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-82gwc" podUID="339cb6ee-98df-41da-81a4-9aaf77f01cc8" containerName="dnsmasq-dns" containerID="cri-o://c592964e1b36e9ddb3659f00af59f06d0458c5e38247a3a727d5a8797b62b739" gracePeriod=10
Feb 02 21:39:55 crc kubenswrapper[4789]: I0202 21:39:55.540612 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5kfhw" event={"ID":"4aaa8d11-6409-415e-836b-b7941b66f6e4","Type":"ContainerStarted","Data":"6c6b69de05330d0c2b4df4009c215f51143b7e6e5f0f8eb55ddce7e689d2f46b"}
Feb 02 21:39:55 crc kubenswrapper[4789]: I0202 21:39:55.543861 4789 generic.go:334] "Generic (PLEG): container finished" podID="339cb6ee-98df-41da-81a4-9aaf77f01cc8" containerID="c592964e1b36e9ddb3659f00af59f06d0458c5e38247a3a727d5a8797b62b739" exitCode=0
Feb 02 21:39:55 crc kubenswrapper[4789]: I0202 21:39:55.543907 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-82gwc" event={"ID":"339cb6ee-98df-41da-81a4-9aaf77f01cc8","Type":"ContainerDied","Data":"c592964e1b36e9ddb3659f00af59f06d0458c5e38247a3a727d5a8797b62b739"}
Feb 02 21:39:55 crc kubenswrapper[4789]: I0202 21:39:55.543923 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-82gwc" event={"ID":"339cb6ee-98df-41da-81a4-9aaf77f01cc8","Type":"ContainerDied","Data":"a16a0210247db362364a05313d2bd2879b0e8ae5978cb30d40ff12212e41f138"}
Feb 02 21:39:55 crc kubenswrapper[4789]: I0202 21:39:55.543933 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a16a0210247db362364a05313d2bd2879b0e8ae5978cb30d40ff12212e41f138"
Feb 02 21:39:55 crc kubenswrapper[4789]: I0202 21:39:55.545460 4789 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 02 21:39:55 crc kubenswrapper[4789]: I0202 21:39:55.545475 4789 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 02 21:39:55 crc kubenswrapper[4789]: I0202 21:39:55.545875 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-mc8z9" event={"ID":"ead77939-6823-47d8-83e8-7dc74b841d49","Type":"ContainerStarted","Data":"1fdc983ab9ac3038cc265e648fd3bb3bee5ec995759e5ffb2049a6a91c398225"}
Feb 02 21:39:55 crc kubenswrapper[4789]: I0202 21:39:55.586757 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-mc8z9" podStartSLOduration=2.7231336219999998 podStartE2EDuration="36.586733181s" podCreationTimestamp="2026-02-02 21:39:19 +0000 UTC" firstStartedPulling="2026-02-02 21:39:20.661412713 +0000 UTC m=+1180.956437732" lastFinishedPulling="2026-02-02 21:39:54.525012272 +0000 UTC m=+1214.820037291" observedRunningTime="2026-02-02 21:39:55.585617989 +0000 UTC m=+1215.880643008" watchObservedRunningTime="2026-02-02 21:39:55.586733181 +0000 UTC m=+1215.881758200"
Feb 02 21:39:55 crc kubenswrapper[4789]: I0202 21:39:55.590570 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-82gwc"
Feb 02 21:39:55 crc kubenswrapper[4789]: I0202 21:39:55.591025 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-5kfhw" podStartSLOduration=2.587340359 podStartE2EDuration="36.591013972s" podCreationTimestamp="2026-02-02 21:39:19 +0000 UTC" firstStartedPulling="2026-02-02 21:39:20.519836817 +0000 UTC m=+1180.814861836" lastFinishedPulling="2026-02-02 21:39:54.52351042 +0000 UTC m=+1214.818535449" observedRunningTime="2026-02-02 21:39:55.568820055 +0000 UTC m=+1215.863845074" watchObservedRunningTime="2026-02-02 21:39:55.591013972 +0000 UTC m=+1215.886038991"
Feb 02 21:39:55 crc kubenswrapper[4789]: I0202 21:39:55.661061 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 02 21:39:55 crc kubenswrapper[4789]: I0202 21:39:55.703135 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 02 21:39:55 crc kubenswrapper[4789]: I0202 21:39:55.750131 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/339cb6ee-98df-41da-81a4-9aaf77f01cc8-ovsdbserver-nb\") pod \"339cb6ee-98df-41da-81a4-9aaf77f01cc8\" (UID: \"339cb6ee-98df-41da-81a4-9aaf77f01cc8\") "
Feb 02 21:39:55 crc kubenswrapper[4789]: I0202 21:39:55.750213 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/339cb6ee-98df-41da-81a4-9aaf77f01cc8-ovsdbserver-sb\") pod \"339cb6ee-98df-41da-81a4-9aaf77f01cc8\" (UID: \"339cb6ee-98df-41da-81a4-9aaf77f01cc8\") "
Feb 02 21:39:55 crc kubenswrapper[4789]: I0202 21:39:55.750259 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/339cb6ee-98df-41da-81a4-9aaf77f01cc8-config\") pod \"339cb6ee-98df-41da-81a4-9aaf77f01cc8\" (UID: \"339cb6ee-98df-41da-81a4-9aaf77f01cc8\") "
Feb 02 21:39:55 crc kubenswrapper[4789]: I0202 21:39:55.750366 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/339cb6ee-98df-41da-81a4-9aaf77f01cc8-dns-swift-storage-0\") pod \"339cb6ee-98df-41da-81a4-9aaf77f01cc8\" (UID: \"339cb6ee-98df-41da-81a4-9aaf77f01cc8\") "
Feb 02 21:39:55 crc kubenswrapper[4789]: I0202 21:39:55.750885 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/339cb6ee-98df-41da-81a4-9aaf77f01cc8-dns-svc\") pod \"339cb6ee-98df-41da-81a4-9aaf77f01cc8\" (UID: \"339cb6ee-98df-41da-81a4-9aaf77f01cc8\") "
Feb 02 21:39:55 crc kubenswrapper[4789]: I0202 21:39:55.750944 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98kbm\" (UniqueName: \"kubernetes.io/projected/339cb6ee-98df-41da-81a4-9aaf77f01cc8-kube-api-access-98kbm\") pod \"339cb6ee-98df-41da-81a4-9aaf77f01cc8\" (UID: \"339cb6ee-98df-41da-81a4-9aaf77f01cc8\") "
InnerVolumeSpecName "kube-api-access-98kbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:39:55 crc kubenswrapper[4789]: I0202 21:39:55.852662 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98kbm\" (UniqueName: \"kubernetes.io/projected/339cb6ee-98df-41da-81a4-9aaf77f01cc8-kube-api-access-98kbm\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:55 crc kubenswrapper[4789]: I0202 21:39:55.855158 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/339cb6ee-98df-41da-81a4-9aaf77f01cc8-config" (OuterVolumeSpecName: "config") pod "339cb6ee-98df-41da-81a4-9aaf77f01cc8" (UID: "339cb6ee-98df-41da-81a4-9aaf77f01cc8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:39:55 crc kubenswrapper[4789]: I0202 21:39:55.855229 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/339cb6ee-98df-41da-81a4-9aaf77f01cc8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "339cb6ee-98df-41da-81a4-9aaf77f01cc8" (UID: "339cb6ee-98df-41da-81a4-9aaf77f01cc8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:39:55 crc kubenswrapper[4789]: I0202 21:39:55.876182 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/339cb6ee-98df-41da-81a4-9aaf77f01cc8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "339cb6ee-98df-41da-81a4-9aaf77f01cc8" (UID: "339cb6ee-98df-41da-81a4-9aaf77f01cc8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:39:55 crc kubenswrapper[4789]: I0202 21:39:55.887145 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/339cb6ee-98df-41da-81a4-9aaf77f01cc8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "339cb6ee-98df-41da-81a4-9aaf77f01cc8" (UID: "339cb6ee-98df-41da-81a4-9aaf77f01cc8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:39:55 crc kubenswrapper[4789]: I0202 21:39:55.891463 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/339cb6ee-98df-41da-81a4-9aaf77f01cc8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "339cb6ee-98df-41da-81a4-9aaf77f01cc8" (UID: "339cb6ee-98df-41da-81a4-9aaf77f01cc8"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:39:55 crc kubenswrapper[4789]: I0202 21:39:55.955736 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/339cb6ee-98df-41da-81a4-9aaf77f01cc8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:55 crc kubenswrapper[4789]: I0202 21:39:55.956107 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/339cb6ee-98df-41da-81a4-9aaf77f01cc8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:55 crc kubenswrapper[4789]: I0202 21:39:55.956121 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/339cb6ee-98df-41da-81a4-9aaf77f01cc8-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:55 crc kubenswrapper[4789]: I0202 21:39:55.956132 4789 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/339cb6ee-98df-41da-81a4-9aaf77f01cc8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:55 crc kubenswrapper[4789]: I0202 21:39:55.956142 4789 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/339cb6ee-98df-41da-81a4-9aaf77f01cc8-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 21:39:56 crc kubenswrapper[4789]: I0202 21:39:56.154292 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-59cf4774f6-v75lt" Feb 02 21:39:56 crc kubenswrapper[4789]: I0202 21:39:56.554529 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-82gwc" Feb 02 21:39:56 crc kubenswrapper[4789]: I0202 21:39:56.585654 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-82gwc"] Feb 02 21:39:56 crc kubenswrapper[4789]: I0202 21:39:56.590930 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-82gwc"] Feb 02 21:39:58 crc kubenswrapper[4789]: I0202 21:39:58.441530 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="339cb6ee-98df-41da-81a4-9aaf77f01cc8" path="/var/lib/kubelet/pods/339cb6ee-98df-41da-81a4-9aaf77f01cc8/volumes" Feb 02 21:39:58 crc kubenswrapper[4789]: I0202 21:39:58.608938 4789 generic.go:334] "Generic (PLEG): container finished" podID="ead77939-6823-47d8-83e8-7dc74b841d49" containerID="1fdc983ab9ac3038cc265e648fd3bb3bee5ec995759e5ffb2049a6a91c398225" exitCode=0 Feb 02 21:39:58 crc kubenswrapper[4789]: I0202 21:39:58.608997 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-mc8z9" event={"ID":"ead77939-6823-47d8-83e8-7dc74b841d49","Type":"ContainerDied","Data":"1fdc983ab9ac3038cc265e648fd3bb3bee5ec995759e5ffb2049a6a91c398225"} Feb 02 21:40:00 crc kubenswrapper[4789]: I0202 21:40:00.627548 4789 generic.go:334] "Generic (PLEG): container finished" podID="4aaa8d11-6409-415e-836b-b7941b66f6e4" containerID="6c6b69de05330d0c2b4df4009c215f51143b7e6e5f0f8eb55ddce7e689d2f46b" exitCode=0 Feb 02 21:40:00 crc kubenswrapper[4789]: I0202 21:40:00.627624 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5kfhw" event={"ID":"4aaa8d11-6409-415e-836b-b7941b66f6e4","Type":"ContainerDied","Data":"6c6b69de05330d0c2b4df4009c215f51143b7e6e5f0f8eb55ddce7e689d2f46b"} Feb 02 21:40:02 crc kubenswrapper[4789]: I0202 21:40:02.958617 4789 util.go:48] "No ready sandbox for pod 
Feb 02 21:40:02 crc kubenswrapper[4789]: I0202 21:40:02.958617 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-mc8z9"
Feb 02 21:40:02 crc kubenswrapper[4789]: I0202 21:40:02.968966 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-5kfhw"
Feb 02 21:40:03 crc kubenswrapper[4789]: I0202 21:40:03.012625 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aaa8d11-6409-415e-836b-b7941b66f6e4-combined-ca-bundle\") pod \"4aaa8d11-6409-415e-836b-b7941b66f6e4\" (UID: \"4aaa8d11-6409-415e-836b-b7941b66f6e4\") "
Feb 02 21:40:03 crc kubenswrapper[4789]: I0202 21:40:03.012710 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwp6d\" (UniqueName: \"kubernetes.io/projected/4aaa8d11-6409-415e-836b-b7941b66f6e4-kube-api-access-fwp6d\") pod \"4aaa8d11-6409-415e-836b-b7941b66f6e4\" (UID: \"4aaa8d11-6409-415e-836b-b7941b66f6e4\") "
Feb 02 21:40:03 crc kubenswrapper[4789]: I0202 21:40:03.012755 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ead77939-6823-47d8-83e8-7dc74b841d49-db-sync-config-data\") pod \"ead77939-6823-47d8-83e8-7dc74b841d49\" (UID: \"ead77939-6823-47d8-83e8-7dc74b841d49\") "
Feb 02 21:40:03 crc kubenswrapper[4789]: I0202 21:40:03.013676 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ss2g\" (UniqueName: \"kubernetes.io/projected/ead77939-6823-47d8-83e8-7dc74b841d49-kube-api-access-7ss2g\") pod \"ead77939-6823-47d8-83e8-7dc74b841d49\" (UID: \"ead77939-6823-47d8-83e8-7dc74b841d49\") "
Feb 02 21:40:03 crc kubenswrapper[4789]: I0202 21:40:03.013740 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4aaa8d11-6409-415e-836b-b7941b66f6e4-etc-machine-id\") pod \"4aaa8d11-6409-415e-836b-b7941b66f6e4\" (UID: \"4aaa8d11-6409-415e-836b-b7941b66f6e4\") "
Feb 02 21:40:03 crc kubenswrapper[4789]: I0202 21:40:03.013793 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4aaa8d11-6409-415e-836b-b7941b66f6e4-config-data\") pod \"4aaa8d11-6409-415e-836b-b7941b66f6e4\" (UID: \"4aaa8d11-6409-415e-836b-b7941b66f6e4\") "
Feb 02 21:40:03 crc kubenswrapper[4789]: I0202 21:40:03.013834 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4aaa8d11-6409-415e-836b-b7941b66f6e4-db-sync-config-data\") pod \"4aaa8d11-6409-415e-836b-b7941b66f6e4\" (UID: \"4aaa8d11-6409-415e-836b-b7941b66f6e4\") "
Feb 02 21:40:03 crc kubenswrapper[4789]: I0202 21:40:03.013878 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ead77939-6823-47d8-83e8-7dc74b841d49-combined-ca-bundle\") pod \"ead77939-6823-47d8-83e8-7dc74b841d49\" (UID: \"ead77939-6823-47d8-83e8-7dc74b841d49\") "
Feb 02 21:40:03 crc kubenswrapper[4789]: I0202 21:40:03.013898 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4aaa8d11-6409-415e-836b-b7941b66f6e4-scripts\") pod \"4aaa8d11-6409-415e-836b-b7941b66f6e4\" (UID: \"4aaa8d11-6409-415e-836b-b7941b66f6e4\") "
Feb 02 21:40:03 crc kubenswrapper[4789]: I0202 21:40:03.017854 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4aaa8d11-6409-415e-836b-b7941b66f6e4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4aaa8d11-6409-415e-836b-b7941b66f6e4" (UID: "4aaa8d11-6409-415e-836b-b7941b66f6e4"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 21:40:03 crc kubenswrapper[4789]: I0202 21:40:03.023203 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ead77939-6823-47d8-83e8-7dc74b841d49-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ead77939-6823-47d8-83e8-7dc74b841d49" (UID: "ead77939-6823-47d8-83e8-7dc74b841d49"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 21:40:03 crc kubenswrapper[4789]: I0202 21:40:03.023238 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ead77939-6823-47d8-83e8-7dc74b841d49-kube-api-access-7ss2g" (OuterVolumeSpecName: "kube-api-access-7ss2g") pod "ead77939-6823-47d8-83e8-7dc74b841d49" (UID: "ead77939-6823-47d8-83e8-7dc74b841d49"). InnerVolumeSpecName "kube-api-access-7ss2g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 21:40:03 crc kubenswrapper[4789]: I0202 21:40:03.023293 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4aaa8d11-6409-415e-836b-b7941b66f6e4-kube-api-access-fwp6d" (OuterVolumeSpecName: "kube-api-access-fwp6d") pod "4aaa8d11-6409-415e-836b-b7941b66f6e4" (UID: "4aaa8d11-6409-415e-836b-b7941b66f6e4"). InnerVolumeSpecName "kube-api-access-fwp6d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 21:40:03 crc kubenswrapper[4789]: I0202 21:40:03.023329 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4aaa8d11-6409-415e-836b-b7941b66f6e4-scripts" (OuterVolumeSpecName: "scripts") pod "4aaa8d11-6409-415e-836b-b7941b66f6e4" (UID: "4aaa8d11-6409-415e-836b-b7941b66f6e4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 21:40:03 crc kubenswrapper[4789]: I0202 21:40:03.028801 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4aaa8d11-6409-415e-836b-b7941b66f6e4-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4aaa8d11-6409-415e-836b-b7941b66f6e4" (UID: "4aaa8d11-6409-415e-836b-b7941b66f6e4"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 21:40:03 crc kubenswrapper[4789]: I0202 21:40:03.052269 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4aaa8d11-6409-415e-836b-b7941b66f6e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4aaa8d11-6409-415e-836b-b7941b66f6e4" (UID: "4aaa8d11-6409-415e-836b-b7941b66f6e4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:40:03 crc kubenswrapper[4789]: I0202 21:40:03.095694 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4aaa8d11-6409-415e-836b-b7941b66f6e4-config-data" (OuterVolumeSpecName: "config-data") pod "4aaa8d11-6409-415e-836b-b7941b66f6e4" (UID: "4aaa8d11-6409-415e-836b-b7941b66f6e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:40:03 crc kubenswrapper[4789]: I0202 21:40:03.115543 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwp6d\" (UniqueName: \"kubernetes.io/projected/4aaa8d11-6409-415e-836b-b7941b66f6e4-kube-api-access-fwp6d\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:03 crc kubenswrapper[4789]: I0202 21:40:03.115598 4789 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ead77939-6823-47d8-83e8-7dc74b841d49-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:03 crc kubenswrapper[4789]: I0202 21:40:03.115611 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ss2g\" (UniqueName: \"kubernetes.io/projected/ead77939-6823-47d8-83e8-7dc74b841d49-kube-api-access-7ss2g\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:03 crc kubenswrapper[4789]: I0202 21:40:03.115622 4789 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4aaa8d11-6409-415e-836b-b7941b66f6e4-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:03 crc kubenswrapper[4789]: I0202 21:40:03.115633 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4aaa8d11-6409-415e-836b-b7941b66f6e4-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:03 crc kubenswrapper[4789]: I0202 21:40:03.115646 4789 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4aaa8d11-6409-415e-836b-b7941b66f6e4-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:03 crc kubenswrapper[4789]: I0202 21:40:03.115656 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ead77939-6823-47d8-83e8-7dc74b841d49-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:03 crc kubenswrapper[4789]: I0202 21:40:03.115666 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4aaa8d11-6409-415e-836b-b7941b66f6e4-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:03 crc kubenswrapper[4789]: I0202 21:40:03.115677 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aaa8d11-6409-415e-836b-b7941b66f6e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:03 crc kubenswrapper[4789]: I0202 21:40:03.653659 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5kfhw" event={"ID":"4aaa8d11-6409-415e-836b-b7941b66f6e4","Type":"ContainerDied","Data":"1026e2a88fba3f933fc4bae38fd4bccbeec97e931acd29f4a33b5a03c4925d7a"} Feb 02 21:40:03 crc kubenswrapper[4789]: I0202 21:40:03.653917 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1026e2a88fba3f933fc4bae38fd4bccbeec97e931acd29f4a33b5a03c4925d7a" Feb 02 21:40:03 crc kubenswrapper[4789]: I0202 21:40:03.653731 4789 util.go:48] "No ready sandbox for pod can be 
Feb 02 21:40:03 crc kubenswrapper[4789]: I0202 21:40:03.653731 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-5kfhw"
Feb 02 21:40:03 crc kubenswrapper[4789]: I0202 21:40:03.655373 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-mc8z9" event={"ID":"ead77939-6823-47d8-83e8-7dc74b841d49","Type":"ContainerDied","Data":"02f89e8e48d36b6232eefe92be04f177426cd8f2102e0cb54d7d6b243f49e935"}
Feb 02 21:40:03 crc kubenswrapper[4789]: I0202 21:40:03.655396 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02f89e8e48d36b6232eefe92be04f177426cd8f2102e0cb54d7d6b243f49e935"
Feb 02 21:40:03 crc kubenswrapper[4789]: I0202 21:40:03.655879 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-mc8z9"
Feb 02 21:40:03 crc kubenswrapper[4789]: I0202 21:40:03.657957 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c202a904-ccae-4f90-a284-d7e2a3b5e0f7","Type":"ContainerStarted","Data":"a76f4d61d5ec4963188ac84468b388f9d8eb7c0cb10646c1ce4381479dc2dcc5"}
Feb 02 21:40:03 crc kubenswrapper[4789]: I0202 21:40:03.658103 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c202a904-ccae-4f90-a284-d7e2a3b5e0f7" containerName="ceilometer-central-agent" containerID="cri-o://fdcc1f90b212df2e933f135304ee045547580c325f7da0217384a6ca1384b603" gracePeriod=30
Feb 02 21:40:03 crc kubenswrapper[4789]: I0202 21:40:03.658136 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c202a904-ccae-4f90-a284-d7e2a3b5e0f7" containerName="sg-core" containerID="cri-o://6cae3582ac17969f7c322327aa9fd05ff26ce7194abd35666492ea00cba10c41" gracePeriod=30
Feb 02 21:40:03 crc kubenswrapper[4789]: I0202 21:40:03.658148 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c202a904-ccae-4f90-a284-d7e2a3b5e0f7" containerName="ceilometer-notification-agent" containerID="cri-o://3bca866ca5fa43c30d0609fc4b416a606fd180cd7401b978314705aae2cec2fb" gracePeriod=30
Feb 02 21:40:03 crc kubenswrapper[4789]: I0202 21:40:03.658149 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c202a904-ccae-4f90-a284-d7e2a3b5e0f7" containerName="proxy-httpd" containerID="cri-o://a76f4d61d5ec4963188ac84468b388f9d8eb7c0cb10646c1ce4381479dc2dcc5" gracePeriod=30
Feb 02 21:40:03 crc kubenswrapper[4789]: I0202 21:40:03.658108 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 02 21:40:03 crc kubenswrapper[4789]: I0202 21:40:03.703067 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.25222312 podStartE2EDuration="44.703052753s" podCreationTimestamp="2026-02-02 21:39:19 +0000 UTC" firstStartedPulling="2026-02-02 21:39:20.51853028 +0000 UTC m=+1180.813555299" lastFinishedPulling="2026-02-02 21:40:02.969359903 +0000 UTC m=+1223.264384932" observedRunningTime="2026-02-02 21:40:03.700870261 +0000 UTC m=+1223.995895280" watchObservedRunningTime="2026-02-02 21:40:03.703052753 +0000 UTC m=+1223.998077772"
Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.278129 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
podUID="4aaa8d11-6409-415e-836b-b7941b66f6e4" containerName="cinder-db-sync" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.278814 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aaa8d11-6409-415e-836b-b7941b66f6e4" containerName="cinder-db-sync" Feb 02 21:40:04 crc kubenswrapper[4789]: E0202 21:40:04.278848 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="339cb6ee-98df-41da-81a4-9aaf77f01cc8" containerName="dnsmasq-dns" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.278854 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="339cb6ee-98df-41da-81a4-9aaf77f01cc8" containerName="dnsmasq-dns" Feb 02 21:40:04 crc kubenswrapper[4789]: E0202 21:40:04.278863 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="339cb6ee-98df-41da-81a4-9aaf77f01cc8" containerName="init" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.278868 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="339cb6ee-98df-41da-81a4-9aaf77f01cc8" containerName="init" Feb 02 21:40:04 crc kubenswrapper[4789]: E0202 21:40:04.278879 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ead77939-6823-47d8-83e8-7dc74b841d49" containerName="barbican-db-sync" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.278885 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="ead77939-6823-47d8-83e8-7dc74b841d49" containerName="barbican-db-sync" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.279039 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="ead77939-6823-47d8-83e8-7dc74b841d49" containerName="barbican-db-sync" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.279073 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="4aaa8d11-6409-415e-836b-b7941b66f6e4" containerName="cinder-db-sync" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.279089 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="339cb6ee-98df-41da-81a4-9aaf77f01cc8" containerName="dnsmasq-dns" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.280190 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.283395 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.285360 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-n4mwh" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.285510 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.285643 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.294768 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-959f7f8c5-pmqjf"] Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.296148 4789 util.go:30] "No sandbox for pod can be found. 
Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.296148 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-959f7f8c5-pmqjf"
Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.300145 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.300469 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-v2ntj"
Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.300677 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.308555 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-959f7f8c5-pmqjf"]
Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.326398 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.343793 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a92498f9-dccf-4af4-b92c-41f6e89be39f-config-data\") pod \"cinder-scheduler-0\" (UID: \"a92498f9-dccf-4af4-b92c-41f6e89be39f\") " pod="openstack/cinder-scheduler-0"
Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.343837 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a92498f9-dccf-4af4-b92c-41f6e89be39f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a92498f9-dccf-4af4-b92c-41f6e89be39f\") " pod="openstack/cinder-scheduler-0"
Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.343866 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1-config-data-custom\") pod \"barbican-worker-959f7f8c5-pmqjf\" (UID: \"5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1\") " pod="openstack/barbican-worker-959f7f8c5-pmqjf"
Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.343906 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcr8p\" (UniqueName: \"kubernetes.io/projected/a92498f9-dccf-4af4-b92c-41f6e89be39f-kube-api-access-gcr8p\") pod \"cinder-scheduler-0\" (UID: \"a92498f9-dccf-4af4-b92c-41f6e89be39f\") " pod="openstack/cinder-scheduler-0"
Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.343949 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1-combined-ca-bundle\") pod \"barbican-worker-959f7f8c5-pmqjf\" (UID: \"5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1\") " pod="openstack/barbican-worker-959f7f8c5-pmqjf"
Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.343970 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1-logs\") pod \"barbican-worker-959f7f8c5-pmqjf\" (UID: \"5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1\") " pod="openstack/barbican-worker-959f7f8c5-pmqjf"
\"kubernetes.io/secret/a92498f9-dccf-4af4-b92c-41f6e89be39f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a92498f9-dccf-4af4-b92c-41f6e89be39f\") " pod="openstack/cinder-scheduler-0" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.344024 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1-config-data\") pod \"barbican-worker-959f7f8c5-pmqjf\" (UID: \"5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1\") " pod="openstack/barbican-worker-959f7f8c5-pmqjf" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.344065 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a92498f9-dccf-4af4-b92c-41f6e89be39f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a92498f9-dccf-4af4-b92c-41f6e89be39f\") " pod="openstack/cinder-scheduler-0" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.344100 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a92498f9-dccf-4af4-b92c-41f6e89be39f-scripts\") pod \"cinder-scheduler-0\" (UID: \"a92498f9-dccf-4af4-b92c-41f6e89be39f\") " pod="openstack/cinder-scheduler-0" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.344119 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzllv\" (UniqueName: \"kubernetes.io/projected/5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1-kube-api-access-mzllv\") pod \"barbican-worker-959f7f8c5-pmqjf\" (UID: \"5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1\") " pod="openstack/barbican-worker-959f7f8c5-pmqjf" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.361653 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b895b5785-jzmvh"] Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.364383 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b895b5785-jzmvh" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.388112 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6d964c7466-fpqld"] Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.389543 4789 util.go:30] "No sandbox for pod can be found. 
Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.389543 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6d964c7466-fpqld"
Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.394146 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.401932 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b895b5785-jzmvh"]
Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.441693 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6d964c7466-fpqld"]
Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.449015 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1-combined-ca-bundle\") pod \"barbican-worker-959f7f8c5-pmqjf\" (UID: \"5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1\") " pod="openstack/barbican-worker-959f7f8c5-pmqjf"
Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.449058 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1-logs\") pod \"barbican-worker-959f7f8c5-pmqjf\" (UID: \"5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1\") " pod="openstack/barbican-worker-959f7f8c5-pmqjf"
Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.449087 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/802bda4f-2363-4ca6-a126-2ccf1448ed71-combined-ca-bundle\") pod \"barbican-keystone-listener-6d964c7466-fpqld\" (UID: \"802bda4f-2363-4ca6-a126-2ccf1448ed71\") " pod="openstack/barbican-keystone-listener-6d964c7466-fpqld"
Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.449105 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a92498f9-dccf-4af4-b92c-41f6e89be39f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a92498f9-dccf-4af4-b92c-41f6e89be39f\") " pod="openstack/cinder-scheduler-0"
Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.449132 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/802bda4f-2363-4ca6-a126-2ccf1448ed71-config-data-custom\") pod \"barbican-keystone-listener-6d964c7466-fpqld\" (UID: \"802bda4f-2363-4ca6-a126-2ccf1448ed71\") " pod="openstack/barbican-keystone-listener-6d964c7466-fpqld"
Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.449152 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1-config-data\") pod \"barbican-worker-959f7f8c5-pmqjf\" (UID: \"5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1\") " pod="openstack/barbican-worker-959f7f8c5-pmqjf"
Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.449177 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvzsv\" (UniqueName: \"kubernetes.io/projected/802bda4f-2363-4ca6-a126-2ccf1448ed71-kube-api-access-bvzsv\") pod \"barbican-keystone-listener-6d964c7466-fpqld\" (UID: \"802bda4f-2363-4ca6-a126-2ccf1448ed71\") " pod="openstack/barbican-keystone-listener-6d964c7466-fpqld"
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a92498f9-dccf-4af4-b92c-41f6e89be39f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a92498f9-dccf-4af4-b92c-41f6e89be39f\") " pod="openstack/cinder-scheduler-0" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.449232 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/802bda4f-2363-4ca6-a126-2ccf1448ed71-config-data\") pod \"barbican-keystone-listener-6d964c7466-fpqld\" (UID: \"802bda4f-2363-4ca6-a126-2ccf1448ed71\") " pod="openstack/barbican-keystone-listener-6d964c7466-fpqld" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.449265 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16836b92-7de6-4e34-b11c-109206f9e76f-dns-svc\") pod \"dnsmasq-dns-b895b5785-jzmvh\" (UID: \"16836b92-7de6-4e34-b11c-109206f9e76f\") " pod="openstack/dnsmasq-dns-b895b5785-jzmvh" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.449287 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a92498f9-dccf-4af4-b92c-41f6e89be39f-scripts\") pod \"cinder-scheduler-0\" (UID: \"a92498f9-dccf-4af4-b92c-41f6e89be39f\") " pod="openstack/cinder-scheduler-0" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.449301 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16836b92-7de6-4e34-b11c-109206f9e76f-ovsdbserver-nb\") pod \"dnsmasq-dns-b895b5785-jzmvh\" (UID: \"16836b92-7de6-4e34-b11c-109206f9e76f\") " pod="openstack/dnsmasq-dns-b895b5785-jzmvh" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.449329 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzllv\" (UniqueName: \"kubernetes.io/projected/5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1-kube-api-access-mzllv\") pod \"barbican-worker-959f7f8c5-pmqjf\" (UID: \"5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1\") " pod="openstack/barbican-worker-959f7f8c5-pmqjf" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.449351 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16836b92-7de6-4e34-b11c-109206f9e76f-config\") pod \"dnsmasq-dns-b895b5785-jzmvh\" (UID: \"16836b92-7de6-4e34-b11c-109206f9e76f\") " pod="openstack/dnsmasq-dns-b895b5785-jzmvh" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.449377 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/802bda4f-2363-4ca6-a126-2ccf1448ed71-logs\") pod \"barbican-keystone-listener-6d964c7466-fpqld\" (UID: \"802bda4f-2363-4ca6-a126-2ccf1448ed71\") " pod="openstack/barbican-keystone-listener-6d964c7466-fpqld" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.449405 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a92498f9-dccf-4af4-b92c-41f6e89be39f-config-data\") pod \"cinder-scheduler-0\" (UID: \"a92498f9-dccf-4af4-b92c-41f6e89be39f\") " pod="openstack/cinder-scheduler-0" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.449424 4789 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a92498f9-dccf-4af4-b92c-41f6e89be39f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a92498f9-dccf-4af4-b92c-41f6e89be39f\") " pod="openstack/cinder-scheduler-0" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.449444 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/16836b92-7de6-4e34-b11c-109206f9e76f-dns-swift-storage-0\") pod \"dnsmasq-dns-b895b5785-jzmvh\" (UID: \"16836b92-7de6-4e34-b11c-109206f9e76f\") " pod="openstack/dnsmasq-dns-b895b5785-jzmvh" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.449465 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1-config-data-custom\") pod \"barbican-worker-959f7f8c5-pmqjf\" (UID: \"5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1\") " pod="openstack/barbican-worker-959f7f8c5-pmqjf" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.449502 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bwz8\" (UniqueName: \"kubernetes.io/projected/16836b92-7de6-4e34-b11c-109206f9e76f-kube-api-access-9bwz8\") pod \"dnsmasq-dns-b895b5785-jzmvh\" (UID: \"16836b92-7de6-4e34-b11c-109206f9e76f\") " pod="openstack/dnsmasq-dns-b895b5785-jzmvh" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.449522 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcr8p\" (UniqueName: \"kubernetes.io/projected/a92498f9-dccf-4af4-b92c-41f6e89be39f-kube-api-access-gcr8p\") pod \"cinder-scheduler-0\" (UID: \"a92498f9-dccf-4af4-b92c-41f6e89be39f\") " pod="openstack/cinder-scheduler-0" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.449547 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16836b92-7de6-4e34-b11c-109206f9e76f-ovsdbserver-sb\") pod \"dnsmasq-dns-b895b5785-jzmvh\" (UID: \"16836b92-7de6-4e34-b11c-109206f9e76f\") " pod="openstack/dnsmasq-dns-b895b5785-jzmvh" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.450934 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1-logs\") pod \"barbican-worker-959f7f8c5-pmqjf\" (UID: \"5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1\") " pod="openstack/barbican-worker-959f7f8c5-pmqjf" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.456222 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a92498f9-dccf-4af4-b92c-41f6e89be39f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a92498f9-dccf-4af4-b92c-41f6e89be39f\") " pod="openstack/cinder-scheduler-0" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.456957 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a92498f9-dccf-4af4-b92c-41f6e89be39f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a92498f9-dccf-4af4-b92c-41f6e89be39f\") " pod="openstack/cinder-scheduler-0" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.457670 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1-combined-ca-bundle\") pod \"barbican-worker-959f7f8c5-pmqjf\" (UID: \"5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1\") " pod="openstack/barbican-worker-959f7f8c5-pmqjf" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.460249 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1-config-data-custom\") pod \"barbican-worker-959f7f8c5-pmqjf\" (UID: \"5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1\") " pod="openstack/barbican-worker-959f7f8c5-pmqjf" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.460248 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a92498f9-dccf-4af4-b92c-41f6e89be39f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a92498f9-dccf-4af4-b92c-41f6e89be39f\") " pod="openstack/cinder-scheduler-0" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.460710 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1-config-data\") pod \"barbican-worker-959f7f8c5-pmqjf\" (UID: \"5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1\") " pod="openstack/barbican-worker-959f7f8c5-pmqjf" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.462046 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a92498f9-dccf-4af4-b92c-41f6e89be39f-scripts\") pod \"cinder-scheduler-0\" (UID: \"a92498f9-dccf-4af4-b92c-41f6e89be39f\") " pod="openstack/cinder-scheduler-0" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.475265 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a92498f9-dccf-4af4-b92c-41f6e89be39f-config-data\") pod \"cinder-scheduler-0\" (UID: \"a92498f9-dccf-4af4-b92c-41f6e89be39f\") " pod="openstack/cinder-scheduler-0" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.483758 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b895b5785-jzmvh"] Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.484319 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcr8p\" (UniqueName: \"kubernetes.io/projected/a92498f9-dccf-4af4-b92c-41f6e89be39f-kube-api-access-gcr8p\") pod \"cinder-scheduler-0\" (UID: \"a92498f9-dccf-4af4-b92c-41f6e89be39f\") " pod="openstack/cinder-scheduler-0" Feb 02 21:40:04 crc kubenswrapper[4789]: E0202 21:40:04.485101 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc dns-swift-storage-0 kube-api-access-9bwz8 ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-b895b5785-jzmvh" podUID="16836b92-7de6-4e34-b11c-109206f9e76f" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.496177 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzllv\" (UniqueName: \"kubernetes.io/projected/5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1-kube-api-access-mzllv\") pod \"barbican-worker-959f7f8c5-pmqjf\" (UID: \"5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1\") " pod="openstack/barbican-worker-959f7f8c5-pmqjf" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.524810 4789 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-5c9776ccc5-lg4td"] Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.526166 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-lg4td" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.552265 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/16836b92-7de6-4e34-b11c-109206f9e76f-dns-swift-storage-0\") pod \"dnsmasq-dns-b895b5785-jzmvh\" (UID: \"16836b92-7de6-4e34-b11c-109206f9e76f\") " pod="openstack/dnsmasq-dns-b895b5785-jzmvh" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.552324 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bwz8\" (UniqueName: \"kubernetes.io/projected/16836b92-7de6-4e34-b11c-109206f9e76f-kube-api-access-9bwz8\") pod \"dnsmasq-dns-b895b5785-jzmvh\" (UID: \"16836b92-7de6-4e34-b11c-109206f9e76f\") " pod="openstack/dnsmasq-dns-b895b5785-jzmvh" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.552346 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16836b92-7de6-4e34-b11c-109206f9e76f-ovsdbserver-sb\") pod \"dnsmasq-dns-b895b5785-jzmvh\" (UID: \"16836b92-7de6-4e34-b11c-109206f9e76f\") " pod="openstack/dnsmasq-dns-b895b5785-jzmvh" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.552391 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/802bda4f-2363-4ca6-a126-2ccf1448ed71-combined-ca-bundle\") pod \"barbican-keystone-listener-6d964c7466-fpqld\" (UID: \"802bda4f-2363-4ca6-a126-2ccf1448ed71\") " pod="openstack/barbican-keystone-listener-6d964c7466-fpqld" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.552417 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/802bda4f-2363-4ca6-a126-2ccf1448ed71-config-data-custom\") pod \"barbican-keystone-listener-6d964c7466-fpqld\" (UID: \"802bda4f-2363-4ca6-a126-2ccf1448ed71\") " pod="openstack/barbican-keystone-listener-6d964c7466-fpqld" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.552443 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvzsv\" (UniqueName: \"kubernetes.io/projected/802bda4f-2363-4ca6-a126-2ccf1448ed71-kube-api-access-bvzsv\") pod \"barbican-keystone-listener-6d964c7466-fpqld\" (UID: \"802bda4f-2363-4ca6-a126-2ccf1448ed71\") " pod="openstack/barbican-keystone-listener-6d964c7466-fpqld" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.552469 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/802bda4f-2363-4ca6-a126-2ccf1448ed71-config-data\") pod \"barbican-keystone-listener-6d964c7466-fpqld\" (UID: \"802bda4f-2363-4ca6-a126-2ccf1448ed71\") " pod="openstack/barbican-keystone-listener-6d964c7466-fpqld" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.552492 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16836b92-7de6-4e34-b11c-109206f9e76f-dns-svc\") pod \"dnsmasq-dns-b895b5785-jzmvh\" (UID: \"16836b92-7de6-4e34-b11c-109206f9e76f\") " pod="openstack/dnsmasq-dns-b895b5785-jzmvh" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.552509 4789 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16836b92-7de6-4e34-b11c-109206f9e76f-ovsdbserver-nb\") pod \"dnsmasq-dns-b895b5785-jzmvh\" (UID: \"16836b92-7de6-4e34-b11c-109206f9e76f\") " pod="openstack/dnsmasq-dns-b895b5785-jzmvh" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.552529 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16836b92-7de6-4e34-b11c-109206f9e76f-config\") pod \"dnsmasq-dns-b895b5785-jzmvh\" (UID: \"16836b92-7de6-4e34-b11c-109206f9e76f\") " pod="openstack/dnsmasq-dns-b895b5785-jzmvh" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.552566 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/802bda4f-2363-4ca6-a126-2ccf1448ed71-logs\") pod \"barbican-keystone-listener-6d964c7466-fpqld\" (UID: \"802bda4f-2363-4ca6-a126-2ccf1448ed71\") " pod="openstack/barbican-keystone-listener-6d964c7466-fpqld" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.553059 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/802bda4f-2363-4ca6-a126-2ccf1448ed71-logs\") pod \"barbican-keystone-listener-6d964c7466-fpqld\" (UID: \"802bda4f-2363-4ca6-a126-2ccf1448ed71\") " pod="openstack/barbican-keystone-listener-6d964c7466-fpqld" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.554346 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/16836b92-7de6-4e34-b11c-109206f9e76f-dns-swift-storage-0\") pod \"dnsmasq-dns-b895b5785-jzmvh\" (UID: \"16836b92-7de6-4e34-b11c-109206f9e76f\") " pod="openstack/dnsmasq-dns-b895b5785-jzmvh" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.555068 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16836b92-7de6-4e34-b11c-109206f9e76f-ovsdbserver-sb\") pod \"dnsmasq-dns-b895b5785-jzmvh\" (UID: \"16836b92-7de6-4e34-b11c-109206f9e76f\") " pod="openstack/dnsmasq-dns-b895b5785-jzmvh" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.558309 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/802bda4f-2363-4ca6-a126-2ccf1448ed71-combined-ca-bundle\") pod \"barbican-keystone-listener-6d964c7466-fpqld\" (UID: \"802bda4f-2363-4ca6-a126-2ccf1448ed71\") " pod="openstack/barbican-keystone-listener-6d964c7466-fpqld" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.558873 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16836b92-7de6-4e34-b11c-109206f9e76f-dns-svc\") pod \"dnsmasq-dns-b895b5785-jzmvh\" (UID: \"16836b92-7de6-4e34-b11c-109206f9e76f\") " pod="openstack/dnsmasq-dns-b895b5785-jzmvh" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.559424 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16836b92-7de6-4e34-b11c-109206f9e76f-ovsdbserver-nb\") pod \"dnsmasq-dns-b895b5785-jzmvh\" (UID: \"16836b92-7de6-4e34-b11c-109206f9e76f\") " pod="openstack/dnsmasq-dns-b895b5785-jzmvh" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.565310 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-5c9776ccc5-lg4td"] Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.565357 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.565873 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16836b92-7de6-4e34-b11c-109206f9e76f-config\") pod \"dnsmasq-dns-b895b5785-jzmvh\" (UID: \"16836b92-7de6-4e34-b11c-109206f9e76f\") " pod="openstack/dnsmasq-dns-b895b5785-jzmvh" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.579101 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/802bda4f-2363-4ca6-a126-2ccf1448ed71-config-data\") pod \"barbican-keystone-listener-6d964c7466-fpqld\" (UID: \"802bda4f-2363-4ca6-a126-2ccf1448ed71\") " pod="openstack/barbican-keystone-listener-6d964c7466-fpqld" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.581325 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bwz8\" (UniqueName: \"kubernetes.io/projected/16836b92-7de6-4e34-b11c-109206f9e76f-kube-api-access-9bwz8\") pod \"dnsmasq-dns-b895b5785-jzmvh\" (UID: \"16836b92-7de6-4e34-b11c-109206f9e76f\") " pod="openstack/dnsmasq-dns-b895b5785-jzmvh" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.582628 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.585288 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/802bda4f-2363-4ca6-a126-2ccf1448ed71-config-data-custom\") pod \"barbican-keystone-listener-6d964c7466-fpqld\" (UID: \"802bda4f-2363-4ca6-a126-2ccf1448ed71\") " pod="openstack/barbican-keystone-listener-6d964c7466-fpqld" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.588489 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.596042 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.607836 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvzsv\" (UniqueName: \"kubernetes.io/projected/802bda4f-2363-4ca6-a126-2ccf1448ed71-kube-api-access-bvzsv\") pod \"barbican-keystone-listener-6d964c7466-fpqld\" (UID: \"802bda4f-2363-4ca6-a126-2ccf1448ed71\") " pod="openstack/barbican-keystone-listener-6d964c7466-fpqld" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.610059 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.620793 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-959f7f8c5-pmqjf" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.621254 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-76b9c6fd6-4jjdd"] Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.623008 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-76b9c6fd6-4jjdd" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.630024 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.634143 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-76b9c6fd6-4jjdd"] Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.671551 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f3d2a87-d6d0-4d7f-8686-90e1731ff80e-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-lg4td\" (UID: \"1f3d2a87-d6d0-4d7f-8686-90e1731ff80e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lg4td" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.671614 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f3d2a87-d6d0-4d7f-8686-90e1731ff80e-config\") pod \"dnsmasq-dns-5c9776ccc5-lg4td\" (UID: \"1f3d2a87-d6d0-4d7f-8686-90e1731ff80e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lg4td" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.671668 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1f3d2a87-d6d0-4d7f-8686-90e1731ff80e-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-lg4td\" (UID: \"1f3d2a87-d6d0-4d7f-8686-90e1731ff80e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lg4td" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.671750 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6gp5\" (UniqueName: \"kubernetes.io/projected/1f3d2a87-d6d0-4d7f-8686-90e1731ff80e-kube-api-access-b6gp5\") pod \"dnsmasq-dns-5c9776ccc5-lg4td\" (UID: \"1f3d2a87-d6d0-4d7f-8686-90e1731ff80e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lg4td" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.671779 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f3d2a87-d6d0-4d7f-8686-90e1731ff80e-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-lg4td\" (UID: \"1f3d2a87-d6d0-4d7f-8686-90e1731ff80e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lg4td" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.671849 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f3d2a87-d6d0-4d7f-8686-90e1731ff80e-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-lg4td\" (UID: \"1f3d2a87-d6d0-4d7f-8686-90e1731ff80e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lg4td" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.707559 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-6d964c7466-fpqld" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.722854 4789 generic.go:334] "Generic (PLEG): container finished" podID="c202a904-ccae-4f90-a284-d7e2a3b5e0f7" containerID="a76f4d61d5ec4963188ac84468b388f9d8eb7c0cb10646c1ce4381479dc2dcc5" exitCode=0 Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.722890 4789 generic.go:334] "Generic (PLEG): container finished" podID="c202a904-ccae-4f90-a284-d7e2a3b5e0f7" containerID="6cae3582ac17969f7c322327aa9fd05ff26ce7194abd35666492ea00cba10c41" exitCode=2 Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.722897 4789 generic.go:334] "Generic (PLEG): container finished" podID="c202a904-ccae-4f90-a284-d7e2a3b5e0f7" containerID="fdcc1f90b212df2e933f135304ee045547580c325f7da0217384a6ca1384b603" exitCode=0 Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.722968 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b895b5785-jzmvh" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.724685 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c202a904-ccae-4f90-a284-d7e2a3b5e0f7","Type":"ContainerDied","Data":"a76f4d61d5ec4963188ac84468b388f9d8eb7c0cb10646c1ce4381479dc2dcc5"} Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.724776 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c202a904-ccae-4f90-a284-d7e2a3b5e0f7","Type":"ContainerDied","Data":"6cae3582ac17969f7c322327aa9fd05ff26ce7194abd35666492ea00cba10c41"} Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.724837 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c202a904-ccae-4f90-a284-d7e2a3b5e0f7","Type":"ContainerDied","Data":"fdcc1f90b212df2e933f135304ee045547580c325f7da0217384a6ca1384b603"} Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.750830 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b895b5785-jzmvh" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.773865 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwg84\" (UniqueName: \"kubernetes.io/projected/f72393b9-cc1a-4feb-9089-259fd674fb19-kube-api-access-kwg84\") pod \"barbican-api-76b9c6fd6-4jjdd\" (UID: \"f72393b9-cc1a-4feb-9089-259fd674fb19\") " pod="openstack/barbican-api-76b9c6fd6-4jjdd" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.773938 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f3d2a87-d6d0-4d7f-8686-90e1731ff80e-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-lg4td\" (UID: \"1f3d2a87-d6d0-4d7f-8686-90e1731ff80e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lg4td" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.774031 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f3d2a87-d6d0-4d7f-8686-90e1731ff80e-config\") pod \"dnsmasq-dns-5c9776ccc5-lg4td\" (UID: \"1f3d2a87-d6d0-4d7f-8686-90e1731ff80e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lg4td" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.774136 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f72393b9-cc1a-4feb-9089-259fd674fb19-config-data\") pod \"barbican-api-76b9c6fd6-4jjdd\" (UID: \"f72393b9-cc1a-4feb-9089-259fd674fb19\") " pod="openstack/barbican-api-76b9c6fd6-4jjdd" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.774161 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f72393b9-cc1a-4feb-9089-259fd674fb19-logs\") pod \"barbican-api-76b9c6fd6-4jjdd\" (UID: \"f72393b9-cc1a-4feb-9089-259fd674fb19\") " pod="openstack/barbican-api-76b9c6fd6-4jjdd" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.774185 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f72393b9-cc1a-4feb-9089-259fd674fb19-config-data-custom\") pod \"barbican-api-76b9c6fd6-4jjdd\" (UID: \"f72393b9-cc1a-4feb-9089-259fd674fb19\") " pod="openstack/barbican-api-76b9c6fd6-4jjdd" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.774202 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f72393b9-cc1a-4feb-9089-259fd674fb19-combined-ca-bundle\") pod \"barbican-api-76b9c6fd6-4jjdd\" (UID: \"f72393b9-cc1a-4feb-9089-259fd674fb19\") " pod="openstack/barbican-api-76b9c6fd6-4jjdd" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.774248 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1f3d2a87-d6d0-4d7f-8686-90e1731ff80e-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-lg4td\" (UID: \"1f3d2a87-d6d0-4d7f-8686-90e1731ff80e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lg4td" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.774276 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zncxh\" (UniqueName: \"kubernetes.io/projected/3e0c700a-39af-4c13-806d-4c049fe7bb85-kube-api-access-zncxh\") 
pod \"cinder-api-0\" (UID: \"3e0c700a-39af-4c13-806d-4c049fe7bb85\") " pod="openstack/cinder-api-0" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.774301 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e0c700a-39af-4c13-806d-4c049fe7bb85-scripts\") pod \"cinder-api-0\" (UID: \"3e0c700a-39af-4c13-806d-4c049fe7bb85\") " pod="openstack/cinder-api-0" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.774356 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6gp5\" (UniqueName: \"kubernetes.io/projected/1f3d2a87-d6d0-4d7f-8686-90e1731ff80e-kube-api-access-b6gp5\") pod \"dnsmasq-dns-5c9776ccc5-lg4td\" (UID: \"1f3d2a87-d6d0-4d7f-8686-90e1731ff80e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lg4td" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.774388 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f3d2a87-d6d0-4d7f-8686-90e1731ff80e-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-lg4td\" (UID: \"1f3d2a87-d6d0-4d7f-8686-90e1731ff80e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lg4td" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.774436 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3e0c700a-39af-4c13-806d-4c049fe7bb85-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3e0c700a-39af-4c13-806d-4c049fe7bb85\") " pod="openstack/cinder-api-0" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.774473 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e0c700a-39af-4c13-806d-4c049fe7bb85-logs\") pod \"cinder-api-0\" (UID: \"3e0c700a-39af-4c13-806d-4c049fe7bb85\") " pod="openstack/cinder-api-0" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.774543 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3e0c700a-39af-4c13-806d-4c049fe7bb85-config-data-custom\") pod \"cinder-api-0\" (UID: \"3e0c700a-39af-4c13-806d-4c049fe7bb85\") " pod="openstack/cinder-api-0" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.774568 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e0c700a-39af-4c13-806d-4c049fe7bb85-config-data\") pod \"cinder-api-0\" (UID: \"3e0c700a-39af-4c13-806d-4c049fe7bb85\") " pod="openstack/cinder-api-0" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.774610 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e0c700a-39af-4c13-806d-4c049fe7bb85-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3e0c700a-39af-4c13-806d-4c049fe7bb85\") " pod="openstack/cinder-api-0" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.774647 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f3d2a87-d6d0-4d7f-8686-90e1731ff80e-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-lg4td\" (UID: \"1f3d2a87-d6d0-4d7f-8686-90e1731ff80e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lg4td" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.774673 4789 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f3d2a87-d6d0-4d7f-8686-90e1731ff80e-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-lg4td\" (UID: \"1f3d2a87-d6d0-4d7f-8686-90e1731ff80e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lg4td" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.775442 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1f3d2a87-d6d0-4d7f-8686-90e1731ff80e-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-lg4td\" (UID: \"1f3d2a87-d6d0-4d7f-8686-90e1731ff80e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lg4td" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.776673 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f3d2a87-d6d0-4d7f-8686-90e1731ff80e-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-lg4td\" (UID: \"1f3d2a87-d6d0-4d7f-8686-90e1731ff80e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lg4td" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.777268 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f3d2a87-d6d0-4d7f-8686-90e1731ff80e-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-lg4td\" (UID: \"1f3d2a87-d6d0-4d7f-8686-90e1731ff80e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lg4td" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.777666 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f3d2a87-d6d0-4d7f-8686-90e1731ff80e-config\") pod \"dnsmasq-dns-5c9776ccc5-lg4td\" (UID: \"1f3d2a87-d6d0-4d7f-8686-90e1731ff80e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lg4td" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.791143 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6gp5\" (UniqueName: \"kubernetes.io/projected/1f3d2a87-d6d0-4d7f-8686-90e1731ff80e-kube-api-access-b6gp5\") pod \"dnsmasq-dns-5c9776ccc5-lg4td\" (UID: \"1f3d2a87-d6d0-4d7f-8686-90e1731ff80e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lg4td" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.880389 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16836b92-7de6-4e34-b11c-109206f9e76f-ovsdbserver-nb\") pod \"16836b92-7de6-4e34-b11c-109206f9e76f\" (UID: \"16836b92-7de6-4e34-b11c-109206f9e76f\") " Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.880479 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16836b92-7de6-4e34-b11c-109206f9e76f-ovsdbserver-sb\") pod \"16836b92-7de6-4e34-b11c-109206f9e76f\" (UID: \"16836b92-7de6-4e34-b11c-109206f9e76f\") " Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.880520 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/16836b92-7de6-4e34-b11c-109206f9e76f-dns-swift-storage-0\") pod \"16836b92-7de6-4e34-b11c-109206f9e76f\" (UID: \"16836b92-7de6-4e34-b11c-109206f9e76f\") " Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.880541 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bwz8\" (UniqueName: 
\"kubernetes.io/projected/16836b92-7de6-4e34-b11c-109206f9e76f-kube-api-access-9bwz8\") pod \"16836b92-7de6-4e34-b11c-109206f9e76f\" (UID: \"16836b92-7de6-4e34-b11c-109206f9e76f\") " Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.880660 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16836b92-7de6-4e34-b11c-109206f9e76f-config\") pod \"16836b92-7de6-4e34-b11c-109206f9e76f\" (UID: \"16836b92-7de6-4e34-b11c-109206f9e76f\") " Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.880714 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16836b92-7de6-4e34-b11c-109206f9e76f-dns-svc\") pod \"16836b92-7de6-4e34-b11c-109206f9e76f\" (UID: \"16836b92-7de6-4e34-b11c-109206f9e76f\") " Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.880940 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e0c700a-39af-4c13-806d-4c049fe7bb85-scripts\") pod \"cinder-api-0\" (UID: \"3e0c700a-39af-4c13-806d-4c049fe7bb85\") " pod="openstack/cinder-api-0" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.880986 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3e0c700a-39af-4c13-806d-4c049fe7bb85-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3e0c700a-39af-4c13-806d-4c049fe7bb85\") " pod="openstack/cinder-api-0" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.881008 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e0c700a-39af-4c13-806d-4c049fe7bb85-logs\") pod \"cinder-api-0\" (UID: \"3e0c700a-39af-4c13-806d-4c049fe7bb85\") " pod="openstack/cinder-api-0" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.881036 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3e0c700a-39af-4c13-806d-4c049fe7bb85-config-data-custom\") pod \"cinder-api-0\" (UID: \"3e0c700a-39af-4c13-806d-4c049fe7bb85\") " pod="openstack/cinder-api-0" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.881053 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e0c700a-39af-4c13-806d-4c049fe7bb85-config-data\") pod \"cinder-api-0\" (UID: \"3e0c700a-39af-4c13-806d-4c049fe7bb85\") " pod="openstack/cinder-api-0" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.881076 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e0c700a-39af-4c13-806d-4c049fe7bb85-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3e0c700a-39af-4c13-806d-4c049fe7bb85\") " pod="openstack/cinder-api-0" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.881112 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwg84\" (UniqueName: \"kubernetes.io/projected/f72393b9-cc1a-4feb-9089-259fd674fb19-kube-api-access-kwg84\") pod \"barbican-api-76b9c6fd6-4jjdd\" (UID: \"f72393b9-cc1a-4feb-9089-259fd674fb19\") " pod="openstack/barbican-api-76b9c6fd6-4jjdd" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.881165 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f72393b9-cc1a-4feb-9089-259fd674fb19-config-data\") pod \"barbican-api-76b9c6fd6-4jjdd\" (UID: \"f72393b9-cc1a-4feb-9089-259fd674fb19\") " pod="openstack/barbican-api-76b9c6fd6-4jjdd" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.881181 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f72393b9-cc1a-4feb-9089-259fd674fb19-logs\") pod \"barbican-api-76b9c6fd6-4jjdd\" (UID: \"f72393b9-cc1a-4feb-9089-259fd674fb19\") " pod="openstack/barbican-api-76b9c6fd6-4jjdd" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.881199 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f72393b9-cc1a-4feb-9089-259fd674fb19-config-data-custom\") pod \"barbican-api-76b9c6fd6-4jjdd\" (UID: \"f72393b9-cc1a-4feb-9089-259fd674fb19\") " pod="openstack/barbican-api-76b9c6fd6-4jjdd" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.881215 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f72393b9-cc1a-4feb-9089-259fd674fb19-combined-ca-bundle\") pod \"barbican-api-76b9c6fd6-4jjdd\" (UID: \"f72393b9-cc1a-4feb-9089-259fd674fb19\") " pod="openstack/barbican-api-76b9c6fd6-4jjdd" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.881237 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zncxh\" (UniqueName: \"kubernetes.io/projected/3e0c700a-39af-4c13-806d-4c049fe7bb85-kube-api-access-zncxh\") pod \"cinder-api-0\" (UID: \"3e0c700a-39af-4c13-806d-4c049fe7bb85\") " pod="openstack/cinder-api-0" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.881900 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16836b92-7de6-4e34-b11c-109206f9e76f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "16836b92-7de6-4e34-b11c-109206f9e76f" (UID: "16836b92-7de6-4e34-b11c-109206f9e76f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.882758 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16836b92-7de6-4e34-b11c-109206f9e76f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "16836b92-7de6-4e34-b11c-109206f9e76f" (UID: "16836b92-7de6-4e34-b11c-109206f9e76f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.882974 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16836b92-7de6-4e34-b11c-109206f9e76f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "16836b92-7de6-4e34-b11c-109206f9e76f" (UID: "16836b92-7de6-4e34-b11c-109206f9e76f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.883131 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f72393b9-cc1a-4feb-9089-259fd674fb19-logs\") pod \"barbican-api-76b9c6fd6-4jjdd\" (UID: \"f72393b9-cc1a-4feb-9089-259fd674fb19\") " pod="openstack/barbican-api-76b9c6fd6-4jjdd" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.883553 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3e0c700a-39af-4c13-806d-4c049fe7bb85-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3e0c700a-39af-4c13-806d-4c049fe7bb85\") " pod="openstack/cinder-api-0" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.883758 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16836b92-7de6-4e34-b11c-109206f9e76f-config" (OuterVolumeSpecName: "config") pod "16836b92-7de6-4e34-b11c-109206f9e76f" (UID: "16836b92-7de6-4e34-b11c-109206f9e76f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.883866 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e0c700a-39af-4c13-806d-4c049fe7bb85-logs\") pod \"cinder-api-0\" (UID: \"3e0c700a-39af-4c13-806d-4c049fe7bb85\") " pod="openstack/cinder-api-0" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.884097 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16836b92-7de6-4e34-b11c-109206f9e76f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "16836b92-7de6-4e34-b11c-109206f9e76f" (UID: "16836b92-7de6-4e34-b11c-109206f9e76f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.887478 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16836b92-7de6-4e34-b11c-109206f9e76f-kube-api-access-9bwz8" (OuterVolumeSpecName: "kube-api-access-9bwz8") pod "16836b92-7de6-4e34-b11c-109206f9e76f" (UID: "16836b92-7de6-4e34-b11c-109206f9e76f"). InnerVolumeSpecName "kube-api-access-9bwz8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.887981 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e0c700a-39af-4c13-806d-4c049fe7bb85-scripts\") pod \"cinder-api-0\" (UID: \"3e0c700a-39af-4c13-806d-4c049fe7bb85\") " pod="openstack/cinder-api-0" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.889602 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3e0c700a-39af-4c13-806d-4c049fe7bb85-config-data-custom\") pod \"cinder-api-0\" (UID: \"3e0c700a-39af-4c13-806d-4c049fe7bb85\") " pod="openstack/cinder-api-0" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.892108 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e0c700a-39af-4c13-806d-4c049fe7bb85-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3e0c700a-39af-4c13-806d-4c049fe7bb85\") " pod="openstack/cinder-api-0" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.893380 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e0c700a-39af-4c13-806d-4c049fe7bb85-config-data\") pod \"cinder-api-0\" (UID: \"3e0c700a-39af-4c13-806d-4c049fe7bb85\") " pod="openstack/cinder-api-0" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.899465 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f72393b9-cc1a-4feb-9089-259fd674fb19-config-data-custom\") pod \"barbican-api-76b9c6fd6-4jjdd\" (UID: \"f72393b9-cc1a-4feb-9089-259fd674fb19\") " pod="openstack/barbican-api-76b9c6fd6-4jjdd" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.900004 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f72393b9-cc1a-4feb-9089-259fd674fb19-combined-ca-bundle\") pod \"barbican-api-76b9c6fd6-4jjdd\" (UID: \"f72393b9-cc1a-4feb-9089-259fd674fb19\") " pod="openstack/barbican-api-76b9c6fd6-4jjdd" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.901927 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f72393b9-cc1a-4feb-9089-259fd674fb19-config-data\") pod \"barbican-api-76b9c6fd6-4jjdd\" (UID: \"f72393b9-cc1a-4feb-9089-259fd674fb19\") " pod="openstack/barbican-api-76b9c6fd6-4jjdd" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.903115 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwg84\" (UniqueName: \"kubernetes.io/projected/f72393b9-cc1a-4feb-9089-259fd674fb19-kube-api-access-kwg84\") pod \"barbican-api-76b9c6fd6-4jjdd\" (UID: \"f72393b9-cc1a-4feb-9089-259fd674fb19\") " pod="openstack/barbican-api-76b9c6fd6-4jjdd" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.903706 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zncxh\" (UniqueName: \"kubernetes.io/projected/3e0c700a-39af-4c13-806d-4c049fe7bb85-kube-api-access-zncxh\") pod \"cinder-api-0\" (UID: \"3e0c700a-39af-4c13-806d-4c049fe7bb85\") " pod="openstack/cinder-api-0" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.923942 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-lg4td" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.939076 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.972352 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-76b9c6fd6-4jjdd" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.983336 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16836b92-7de6-4e34-b11c-109206f9e76f-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.983366 4789 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16836b92-7de6-4e34-b11c-109206f9e76f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.983379 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16836b92-7de6-4e34-b11c-109206f9e76f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.983390 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16836b92-7de6-4e34-b11c-109206f9e76f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.983401 4789 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/16836b92-7de6-4e34-b11c-109206f9e76f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:04 crc kubenswrapper[4789]: I0202 21:40:04.983414 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bwz8\" (UniqueName: \"kubernetes.io/projected/16836b92-7de6-4e34-b11c-109206f9e76f-kube-api-access-9bwz8\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:05 crc kubenswrapper[4789]: I0202 21:40:05.144625 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 21:40:05 crc kubenswrapper[4789]: I0202 21:40:05.211856 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-959f7f8c5-pmqjf"] Feb 02 21:40:05 crc kubenswrapper[4789]: W0202 21:40:05.221747 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c3c85c8_d5b9_48f3_9408_0d9693d0cbf1.slice/crio-7f48a67ac57235127ac9a38de9ba7ffb00f02956d68819cd8606b5387e065667 WatchSource:0}: Error finding container 7f48a67ac57235127ac9a38de9ba7ffb00f02956d68819cd8606b5387e065667: Status 404 returned error can't find the container with id 7f48a67ac57235127ac9a38de9ba7ffb00f02956d68819cd8606b5387e065667 Feb 02 21:40:05 crc kubenswrapper[4789]: I0202 21:40:05.326993 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6d964c7466-fpqld"] Feb 02 21:40:05 crc kubenswrapper[4789]: I0202 21:40:05.534664 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-lg4td"] Feb 02 21:40:05 crc kubenswrapper[4789]: I0202 21:40:05.548052 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 21:40:05 crc kubenswrapper[4789]: I0202 21:40:05.606410 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-76b9c6fd6-4jjdd"] Feb 02 
21:40:05 crc kubenswrapper[4789]: W0202 21:40:05.620198 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf72393b9_cc1a_4feb_9089_259fd674fb19.slice/crio-649b76ad00cf6b85c009544c2004dbbd2e85763bd5dcf8993c42b2685a6ee53d WatchSource:0}: Error finding container 649b76ad00cf6b85c009544c2004dbbd2e85763bd5dcf8993c42b2685a6ee53d: Status 404 returned error can't find the container with id 649b76ad00cf6b85c009544c2004dbbd2e85763bd5dcf8993c42b2685a6ee53d Feb 02 21:40:05 crc kubenswrapper[4789]: I0202 21:40:05.736368 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-959f7f8c5-pmqjf" event={"ID":"5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1","Type":"ContainerStarted","Data":"7f48a67ac57235127ac9a38de9ba7ffb00f02956d68819cd8606b5387e065667"} Feb 02 21:40:05 crc kubenswrapper[4789]: I0202 21:40:05.737439 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-lg4td" event={"ID":"1f3d2a87-d6d0-4d7f-8686-90e1731ff80e","Type":"ContainerStarted","Data":"f75dec15fb3d61d94dbdefaeca2dae47c26f933e298782cc28ebff9d650dafdc"} Feb 02 21:40:05 crc kubenswrapper[4789]: I0202 21:40:05.738114 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3e0c700a-39af-4c13-806d-4c049fe7bb85","Type":"ContainerStarted","Data":"05dba8d4e9be229ea0a40b38229f599a32e766e30f263e25582f5562937c4898"} Feb 02 21:40:05 crc kubenswrapper[4789]: I0202 21:40:05.743103 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76b9c6fd6-4jjdd" event={"ID":"f72393b9-cc1a-4feb-9089-259fd674fb19","Type":"ContainerStarted","Data":"649b76ad00cf6b85c009544c2004dbbd2e85763bd5dcf8993c42b2685a6ee53d"} Feb 02 21:40:05 crc kubenswrapper[4789]: I0202 21:40:05.744864 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6d964c7466-fpqld" event={"ID":"802bda4f-2363-4ca6-a126-2ccf1448ed71","Type":"ContainerStarted","Data":"46aa30d8fda521d7d43ed37e4ab7c1f2d26f48e3a6b0587028c5fd3ccb7aec9e"} Feb 02 21:40:05 crc kubenswrapper[4789]: I0202 21:40:05.753572 4789 util.go:30] "No sandbox for pod can be found. 
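The two W0202 manager.go:1169 warnings come from the kubelet's embedded cAdvisor: it sees a new crio-<id> cgroup appear and queries the runtime for that container before CRI-O has finished registering it, so the lookup returns a 404. During pod startup this is normally a benign race; the same IDs (7f48a67a... for barbican-worker, 649b76ad... for barbican-api) show up moments later as the sandbox IDs in the PLEG ContainerStarted events above. A small stdlib-only sketch for extracting those IDs from a log so they can be cross-checked against the PLEG events (illustrative tooling, not part of kubelet or cAdvisor):

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    	"strings"
    )

    // The cgroup path in the warning embeds the container ID as crio-<64 hex chars>.
    var crioID = regexp.MustCompile(`crio-([0-9a-f]{64})`)

    func main() {
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
    	for sc.Scan() {
    		line := sc.Text()
    		if !strings.Contains(line, "Failed to process watch event") {
    			continue
    		}
    		if m := crioID.FindStringSubmatch(line); m != nil {
    			fmt.Println(m[1]) // compare with the Data field of ContainerStarted events
    		}
    	}
    }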
Need to start a new one" pod="openstack/dnsmasq-dns-b895b5785-jzmvh" Feb 02 21:40:05 crc kubenswrapper[4789]: I0202 21:40:05.753648 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a92498f9-dccf-4af4-b92c-41f6e89be39f","Type":"ContainerStarted","Data":"de06af86bfa1f2e0a47fbe28d2a53c372610635437d020f2870f1effba7d2eb5"} Feb 02 21:40:05 crc kubenswrapper[4789]: I0202 21:40:05.804632 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b895b5785-jzmvh"] Feb 02 21:40:05 crc kubenswrapper[4789]: I0202 21:40:05.827652 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b895b5785-jzmvh"] Feb 02 21:40:06 crc kubenswrapper[4789]: I0202 21:40:06.435138 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16836b92-7de6-4e34-b11c-109206f9e76f" path="/var/lib/kubelet/pods/16836b92-7de6-4e34-b11c-109206f9e76f/volumes" Feb 02 21:40:06 crc kubenswrapper[4789]: I0202 21:40:06.728566 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 02 21:40:06 crc kubenswrapper[4789]: I0202 21:40:06.766040 4789 generic.go:334] "Generic (PLEG): container finished" podID="c202a904-ccae-4f90-a284-d7e2a3b5e0f7" containerID="3bca866ca5fa43c30d0609fc4b416a606fd180cd7401b978314705aae2cec2fb" exitCode=0 Feb 02 21:40:06 crc kubenswrapper[4789]: I0202 21:40:06.766108 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c202a904-ccae-4f90-a284-d7e2a3b5e0f7","Type":"ContainerDied","Data":"3bca866ca5fa43c30d0609fc4b416a606fd180cd7401b978314705aae2cec2fb"} Feb 02 21:40:06 crc kubenswrapper[4789]: I0202 21:40:06.768649 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76b9c6fd6-4jjdd" event={"ID":"f72393b9-cc1a-4feb-9089-259fd674fb19","Type":"ContainerStarted","Data":"f33d6cab0e4515f9fc25b935375bb01cac91eedda8fde6dad6fe745f2af17953"} Feb 02 21:40:06 crc kubenswrapper[4789]: I0202 21:40:06.768700 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76b9c6fd6-4jjdd" event={"ID":"f72393b9-cc1a-4feb-9089-259fd674fb19","Type":"ContainerStarted","Data":"09e3ebecf8199c9c2015b9af6285917464358df228585d459b775ab95c46f735"} Feb 02 21:40:06 crc kubenswrapper[4789]: I0202 21:40:06.769739 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-76b9c6fd6-4jjdd" Feb 02 21:40:06 crc kubenswrapper[4789]: I0202 21:40:06.769785 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-76b9c6fd6-4jjdd" Feb 02 21:40:06 crc kubenswrapper[4789]: I0202 21:40:06.777156 4789 generic.go:334] "Generic (PLEG): container finished" podID="1f3d2a87-d6d0-4d7f-8686-90e1731ff80e" containerID="8c409f67673a79ef50f62a1df7708d4692141395f8dba4a50218b03da7b468fd" exitCode=0 Feb 02 21:40:06 crc kubenswrapper[4789]: I0202 21:40:06.777230 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-lg4td" event={"ID":"1f3d2a87-d6d0-4d7f-8686-90e1731ff80e","Type":"ContainerDied","Data":"8c409f67673a79ef50f62a1df7708d4692141395f8dba4a50218b03da7b468fd"} Feb 02 21:40:06 crc kubenswrapper[4789]: I0202 21:40:06.783536 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3e0c700a-39af-4c13-806d-4c049fe7bb85","Type":"ContainerStarted","Data":"fc89442da9c302f532838e2ef8a3b0d94646c34b1ef04de31a587f543a63c699"} Feb 02 21:40:06 crc kubenswrapper[4789]: I0202 
21:40:06.791924 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-76b9c6fd6-4jjdd" podStartSLOduration=2.791911253 podStartE2EDuration="2.791911253s" podCreationTimestamp="2026-02-02 21:40:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:40:06.784063941 +0000 UTC m=+1227.079088960" watchObservedRunningTime="2026-02-02 21:40:06.791911253 +0000 UTC m=+1227.086936272" Feb 02 21:40:06 crc kubenswrapper[4789]: I0202 21:40:06.831010 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 21:40:06 crc kubenswrapper[4789]: I0202 21:40:06.941355 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-log-httpd\") pod \"c202a904-ccae-4f90-a284-d7e2a3b5e0f7\" (UID: \"c202a904-ccae-4f90-a284-d7e2a3b5e0f7\") " Feb 02 21:40:06 crc kubenswrapper[4789]: I0202 21:40:06.941407 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-combined-ca-bundle\") pod \"c202a904-ccae-4f90-a284-d7e2a3b5e0f7\" (UID: \"c202a904-ccae-4f90-a284-d7e2a3b5e0f7\") " Feb 02 21:40:06 crc kubenswrapper[4789]: I0202 21:40:06.941491 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-sg-core-conf-yaml\") pod \"c202a904-ccae-4f90-a284-d7e2a3b5e0f7\" (UID: \"c202a904-ccae-4f90-a284-d7e2a3b5e0f7\") " Feb 02 21:40:06 crc kubenswrapper[4789]: I0202 21:40:06.941632 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-run-httpd\") pod \"c202a904-ccae-4f90-a284-d7e2a3b5e0f7\" (UID: \"c202a904-ccae-4f90-a284-d7e2a3b5e0f7\") " Feb 02 21:40:06 crc kubenswrapper[4789]: I0202 21:40:06.941674 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-config-data\") pod \"c202a904-ccae-4f90-a284-d7e2a3b5e0f7\" (UID: \"c202a904-ccae-4f90-a284-d7e2a3b5e0f7\") " Feb 02 21:40:06 crc kubenswrapper[4789]: I0202 21:40:06.941699 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-scripts\") pod \"c202a904-ccae-4f90-a284-d7e2a3b5e0f7\" (UID: \"c202a904-ccae-4f90-a284-d7e2a3b5e0f7\") " Feb 02 21:40:06 crc kubenswrapper[4789]: I0202 21:40:06.941745 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptb7f\" (UniqueName: \"kubernetes.io/projected/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-kube-api-access-ptb7f\") pod \"c202a904-ccae-4f90-a284-d7e2a3b5e0f7\" (UID: \"c202a904-ccae-4f90-a284-d7e2a3b5e0f7\") " Feb 02 21:40:06 crc kubenswrapper[4789]: I0202 21:40:06.941799 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c202a904-ccae-4f90-a284-d7e2a3b5e0f7" (UID: "c202a904-ccae-4f90-a284-d7e2a3b5e0f7"). InnerVolumeSpecName "log-httpd". 
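The pod_startup_latency_tracker.go:104 entry above records barbican-api-76b9c6fd6-4jjdd reaching Running 2.791911253s after its creation timestamp; since its pulling timestamps are zero values (no image pull), podStartSLOduration equals podStartE2EDuration here. For pods that did pull images (see the barbican-worker entry later in this log), the logged SLO duration is consistent with the e2e duration minus the pull window. The e2e arithmetic can be checked directly from the logged timestamps, which use Go's default time.String() format:

    package main

    import (
    	"fmt"
    	"log"
    	"time"
    )

    func main() {
    	// Reference layout for timestamps like "2026-02-02 21:40:06.791911253 +0000 UTC".
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    	created, err := time.Parse(layout, "2026-02-02 21:40:04 +0000 UTC")
    	if err != nil {
    		log.Fatal(err)
    	}
    	observed, err := time.Parse(layout, "2026-02-02 21:40:06.791911253 +0000 UTC")
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Prints 2.791911253s, matching podStartE2EDuration in the log entry.
    	fmt.Println(observed.Sub(created))
    }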
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:40:06 crc kubenswrapper[4789]: I0202 21:40:06.942397 4789 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:06 crc kubenswrapper[4789]: I0202 21:40:06.942994 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c202a904-ccae-4f90-a284-d7e2a3b5e0f7" (UID: "c202a904-ccae-4f90-a284-d7e2a3b5e0f7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:40:06 crc kubenswrapper[4789]: I0202 21:40:06.945014 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-scripts" (OuterVolumeSpecName: "scripts") pod "c202a904-ccae-4f90-a284-d7e2a3b5e0f7" (UID: "c202a904-ccae-4f90-a284-d7e2a3b5e0f7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:40:06 crc kubenswrapper[4789]: I0202 21:40:06.945289 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-kube-api-access-ptb7f" (OuterVolumeSpecName: "kube-api-access-ptb7f") pod "c202a904-ccae-4f90-a284-d7e2a3b5e0f7" (UID: "c202a904-ccae-4f90-a284-d7e2a3b5e0f7"). InnerVolumeSpecName "kube-api-access-ptb7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:40:06 crc kubenswrapper[4789]: I0202 21:40:06.963741 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c202a904-ccae-4f90-a284-d7e2a3b5e0f7" (UID: "c202a904-ccae-4f90-a284-d7e2a3b5e0f7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:40:07 crc kubenswrapper[4789]: I0202 21:40:07.011613 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c202a904-ccae-4f90-a284-d7e2a3b5e0f7" (UID: "c202a904-ccae-4f90-a284-d7e2a3b5e0f7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:40:07 crc kubenswrapper[4789]: I0202 21:40:07.038184 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-config-data" (OuterVolumeSpecName: "config-data") pod "c202a904-ccae-4f90-a284-d7e2a3b5e0f7" (UID: "c202a904-ccae-4f90-a284-d7e2a3b5e0f7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:40:07 crc kubenswrapper[4789]: I0202 21:40:07.044327 4789 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:07 crc kubenswrapper[4789]: I0202 21:40:07.044364 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:07 crc kubenswrapper[4789]: I0202 21:40:07.044375 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:07 crc kubenswrapper[4789]: I0202 21:40:07.044385 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptb7f\" (UniqueName: \"kubernetes.io/projected/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-kube-api-access-ptb7f\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:07 crc kubenswrapper[4789]: I0202 21:40:07.044398 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:07 crc kubenswrapper[4789]: I0202 21:40:07.044407 4789 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c202a904-ccae-4f90-a284-d7e2a3b5e0f7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:07 crc kubenswrapper[4789]: I0202 21:40:07.796601 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a92498f9-dccf-4af4-b92c-41f6e89be39f","Type":"ContainerStarted","Data":"7f749f84676267d30d6db302074fd1dfd49ec9175dc0d6b18b63c496efa8f4f9"} Feb 02 21:40:07 crc kubenswrapper[4789]: I0202 21:40:07.799056 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-959f7f8c5-pmqjf" event={"ID":"5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1","Type":"ContainerStarted","Data":"7529e703a7ba79a3c7d9ce9adbb48f6652641d0b42790d00cab813d47b85c9b6"} Feb 02 21:40:07 crc kubenswrapper[4789]: I0202 21:40:07.799104 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-959f7f8c5-pmqjf" event={"ID":"5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1","Type":"ContainerStarted","Data":"515297fe8dbc3fc649d583e30d1f7a1830bea72e21b40dc9d104ef3455ab5cb1"} Feb 02 21:40:07 crc kubenswrapper[4789]: I0202 21:40:07.805792 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-lg4td" event={"ID":"1f3d2a87-d6d0-4d7f-8686-90e1731ff80e","Type":"ContainerStarted","Data":"f03611fc3b4f64a18d8ad989aa70fe1f61fd98739fb82be5104c8aa9c9ec863c"} Feb 02 21:40:07 crc kubenswrapper[4789]: I0202 21:40:07.806417 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-lg4td" Feb 02 21:40:07 crc kubenswrapper[4789]: I0202 21:40:07.809901 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 21:40:07 crc kubenswrapper[4789]: I0202 21:40:07.812931 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c202a904-ccae-4f90-a284-d7e2a3b5e0f7","Type":"ContainerDied","Data":"faf0413633a571a2aa2142ad2d16436da04925ec40c209f88a1ab83546f91c60"} Feb 02 21:40:07 crc kubenswrapper[4789]: I0202 21:40:07.812968 4789 scope.go:117] "RemoveContainer" containerID="a76f4d61d5ec4963188ac84468b388f9d8eb7c0cb10646c1ce4381479dc2dcc5" Feb 02 21:40:07 crc kubenswrapper[4789]: I0202 21:40:07.835082 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-959f7f8c5-pmqjf" podStartSLOduration=1.775662997 podStartE2EDuration="3.835067178s" podCreationTimestamp="2026-02-02 21:40:04 +0000 UTC" firstStartedPulling="2026-02-02 21:40:05.22594595 +0000 UTC m=+1225.520970969" lastFinishedPulling="2026-02-02 21:40:07.285350141 +0000 UTC m=+1227.580375150" observedRunningTime="2026-02-02 21:40:07.832752973 +0000 UTC m=+1228.127778012" watchObservedRunningTime="2026-02-02 21:40:07.835067178 +0000 UTC m=+1228.130092197" Feb 02 21:40:07 crc kubenswrapper[4789]: I0202 21:40:07.837975 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6d964c7466-fpqld" event={"ID":"802bda4f-2363-4ca6-a126-2ccf1448ed71","Type":"ContainerStarted","Data":"0a62728aedd4480cfd181d88be8ac00afa4f69cd9f3b44bd97a2e8305a5f31af"} Feb 02 21:40:07 crc kubenswrapper[4789]: I0202 21:40:07.838007 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6d964c7466-fpqld" event={"ID":"802bda4f-2363-4ca6-a126-2ccf1448ed71","Type":"ContainerStarted","Data":"0fe697a1f2000589c5ab93c3c47f9c76ebfb685c854fd86b08766edfb2d1a375"} Feb 02 21:40:07 crc kubenswrapper[4789]: I0202 21:40:07.862869 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-lg4td" podStartSLOduration=3.862851453 podStartE2EDuration="3.862851453s" podCreationTimestamp="2026-02-02 21:40:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:40:07.852297805 +0000 UTC m=+1228.147322824" watchObservedRunningTime="2026-02-02 21:40:07.862851453 +0000 UTC m=+1228.157876472" Feb 02 21:40:07 crc kubenswrapper[4789]: I0202 21:40:07.889828 4789 scope.go:117] "RemoveContainer" containerID="6cae3582ac17969f7c322327aa9fd05ff26ce7194abd35666492ea00cba10c41" Feb 02 21:40:07 crc kubenswrapper[4789]: I0202 21:40:07.892517 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 21:40:07 crc kubenswrapper[4789]: I0202 21:40:07.910160 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 21:40:07 crc kubenswrapper[4789]: I0202 21:40:07.925634 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 21:40:07 crc kubenswrapper[4789]: E0202 21:40:07.926050 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c202a904-ccae-4f90-a284-d7e2a3b5e0f7" containerName="ceilometer-notification-agent" Feb 02 21:40:07 crc kubenswrapper[4789]: I0202 21:40:07.926068 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="c202a904-ccae-4f90-a284-d7e2a3b5e0f7" containerName="ceilometer-notification-agent" Feb 02 21:40:07 crc kubenswrapper[4789]: E0202 21:40:07.926084 4789 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="c202a904-ccae-4f90-a284-d7e2a3b5e0f7" containerName="proxy-httpd" Feb 02 21:40:07 crc kubenswrapper[4789]: I0202 21:40:07.926091 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="c202a904-ccae-4f90-a284-d7e2a3b5e0f7" containerName="proxy-httpd" Feb 02 21:40:07 crc kubenswrapper[4789]: E0202 21:40:07.926100 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c202a904-ccae-4f90-a284-d7e2a3b5e0f7" containerName="sg-core" Feb 02 21:40:07 crc kubenswrapper[4789]: I0202 21:40:07.926107 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="c202a904-ccae-4f90-a284-d7e2a3b5e0f7" containerName="sg-core" Feb 02 21:40:07 crc kubenswrapper[4789]: E0202 21:40:07.926128 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c202a904-ccae-4f90-a284-d7e2a3b5e0f7" containerName="ceilometer-central-agent" Feb 02 21:40:07 crc kubenswrapper[4789]: I0202 21:40:07.926134 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="c202a904-ccae-4f90-a284-d7e2a3b5e0f7" containerName="ceilometer-central-agent" Feb 02 21:40:07 crc kubenswrapper[4789]: I0202 21:40:07.926320 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="c202a904-ccae-4f90-a284-d7e2a3b5e0f7" containerName="ceilometer-central-agent" Feb 02 21:40:07 crc kubenswrapper[4789]: I0202 21:40:07.926336 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="c202a904-ccae-4f90-a284-d7e2a3b5e0f7" containerName="ceilometer-notification-agent" Feb 02 21:40:07 crc kubenswrapper[4789]: I0202 21:40:07.926352 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="c202a904-ccae-4f90-a284-d7e2a3b5e0f7" containerName="sg-core" Feb 02 21:40:07 crc kubenswrapper[4789]: I0202 21:40:07.926363 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="c202a904-ccae-4f90-a284-d7e2a3b5e0f7" containerName="proxy-httpd" Feb 02 21:40:07 crc kubenswrapper[4789]: I0202 21:40:07.927919 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 21:40:07 crc kubenswrapper[4789]: I0202 21:40:07.932159 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6d964c7466-fpqld" podStartSLOduration=2.00457201 podStartE2EDuration="3.932143409s" podCreationTimestamp="2026-02-02 21:40:04 +0000 UTC" firstStartedPulling="2026-02-02 21:40:05.3297238 +0000 UTC m=+1225.624748819" lastFinishedPulling="2026-02-02 21:40:07.257295199 +0000 UTC m=+1227.552320218" observedRunningTime="2026-02-02 21:40:07.902200743 +0000 UTC m=+1228.197225752" watchObservedRunningTime="2026-02-02 21:40:07.932143409 +0000 UTC m=+1228.227168428" Feb 02 21:40:07 crc kubenswrapper[4789]: I0202 21:40:07.937267 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 21:40:07 crc kubenswrapper[4789]: I0202 21:40:07.937925 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 21:40:07 crc kubenswrapper[4789]: I0202 21:40:07.943305 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 21:40:07 crc kubenswrapper[4789]: I0202 21:40:07.958966 4789 scope.go:117] "RemoveContainer" containerID="3bca866ca5fa43c30d0609fc4b416a606fd180cd7401b978314705aae2cec2fb" Feb 02 21:40:07 crc kubenswrapper[4789]: I0202 21:40:07.995744 4789 scope.go:117] "RemoveContainer" containerID="fdcc1f90b212df2e933f135304ee045547580c325f7da0217384a6ca1384b603" Feb 02 21:40:08 crc kubenswrapper[4789]: I0202 21:40:08.068963 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aa4ba1a-609b-483a-9754-8edf78aa7005-config-data\") pod \"ceilometer-0\" (UID: \"5aa4ba1a-609b-483a-9754-8edf78aa7005\") " pod="openstack/ceilometer-0" Feb 02 21:40:08 crc kubenswrapper[4789]: I0202 21:40:08.069202 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5aa4ba1a-609b-483a-9754-8edf78aa7005-scripts\") pod \"ceilometer-0\" (UID: \"5aa4ba1a-609b-483a-9754-8edf78aa7005\") " pod="openstack/ceilometer-0" Feb 02 21:40:08 crc kubenswrapper[4789]: I0202 21:40:08.069309 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxwbt\" (UniqueName: \"kubernetes.io/projected/5aa4ba1a-609b-483a-9754-8edf78aa7005-kube-api-access-kxwbt\") pod \"ceilometer-0\" (UID: \"5aa4ba1a-609b-483a-9754-8edf78aa7005\") " pod="openstack/ceilometer-0" Feb 02 21:40:08 crc kubenswrapper[4789]: I0202 21:40:08.069519 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5aa4ba1a-609b-483a-9754-8edf78aa7005-log-httpd\") pod \"ceilometer-0\" (UID: \"5aa4ba1a-609b-483a-9754-8edf78aa7005\") " pod="openstack/ceilometer-0" Feb 02 21:40:08 crc kubenswrapper[4789]: I0202 21:40:08.069680 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5aa4ba1a-609b-483a-9754-8edf78aa7005-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5aa4ba1a-609b-483a-9754-8edf78aa7005\") " pod="openstack/ceilometer-0" Feb 02 21:40:08 crc kubenswrapper[4789]: I0202 21:40:08.069744 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5aa4ba1a-609b-483a-9754-8edf78aa7005-run-httpd\") pod \"ceilometer-0\" (UID: \"5aa4ba1a-609b-483a-9754-8edf78aa7005\") " pod="openstack/ceilometer-0" Feb 02 21:40:08 crc kubenswrapper[4789]: I0202 21:40:08.069826 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aa4ba1a-609b-483a-9754-8edf78aa7005-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5aa4ba1a-609b-483a-9754-8edf78aa7005\") " pod="openstack/ceilometer-0" Feb 02 21:40:08 crc kubenswrapper[4789]: I0202 21:40:08.171632 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5aa4ba1a-609b-483a-9754-8edf78aa7005-log-httpd\") pod \"ceilometer-0\" (UID: \"5aa4ba1a-609b-483a-9754-8edf78aa7005\") " pod="openstack/ceilometer-0" Feb 02 21:40:08 crc kubenswrapper[4789]: I0202 21:40:08.172028 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5aa4ba1a-609b-483a-9754-8edf78aa7005-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5aa4ba1a-609b-483a-9754-8edf78aa7005\") " pod="openstack/ceilometer-0" Feb 02 21:40:08 crc kubenswrapper[4789]: I0202 21:40:08.172845 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5aa4ba1a-609b-483a-9754-8edf78aa7005-run-httpd\") pod \"ceilometer-0\" (UID: \"5aa4ba1a-609b-483a-9754-8edf78aa7005\") " pod="openstack/ceilometer-0" Feb 02 21:40:08 crc kubenswrapper[4789]: I0202 21:40:08.172996 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aa4ba1a-609b-483a-9754-8edf78aa7005-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5aa4ba1a-609b-483a-9754-8edf78aa7005\") " pod="openstack/ceilometer-0" Feb 02 21:40:08 crc kubenswrapper[4789]: I0202 21:40:08.173127 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5aa4ba1a-609b-483a-9754-8edf78aa7005-run-httpd\") pod \"ceilometer-0\" (UID: \"5aa4ba1a-609b-483a-9754-8edf78aa7005\") " pod="openstack/ceilometer-0" Feb 02 21:40:08 crc kubenswrapper[4789]: I0202 21:40:08.172123 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5aa4ba1a-609b-483a-9754-8edf78aa7005-log-httpd\") pod \"ceilometer-0\" (UID: \"5aa4ba1a-609b-483a-9754-8edf78aa7005\") " pod="openstack/ceilometer-0" Feb 02 21:40:08 crc kubenswrapper[4789]: I0202 21:40:08.173499 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aa4ba1a-609b-483a-9754-8edf78aa7005-config-data\") pod \"ceilometer-0\" (UID: \"5aa4ba1a-609b-483a-9754-8edf78aa7005\") " pod="openstack/ceilometer-0" Feb 02 21:40:08 crc kubenswrapper[4789]: I0202 21:40:08.173667 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5aa4ba1a-609b-483a-9754-8edf78aa7005-scripts\") pod \"ceilometer-0\" (UID: \"5aa4ba1a-609b-483a-9754-8edf78aa7005\") " pod="openstack/ceilometer-0" Feb 02 21:40:08 crc kubenswrapper[4789]: I0202 21:40:08.173768 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxwbt\" (UniqueName: 
\"kubernetes.io/projected/5aa4ba1a-609b-483a-9754-8edf78aa7005-kube-api-access-kxwbt\") pod \"ceilometer-0\" (UID: \"5aa4ba1a-609b-483a-9754-8edf78aa7005\") " pod="openstack/ceilometer-0" Feb 02 21:40:08 crc kubenswrapper[4789]: I0202 21:40:08.176415 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5aa4ba1a-609b-483a-9754-8edf78aa7005-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5aa4ba1a-609b-483a-9754-8edf78aa7005\") " pod="openstack/ceilometer-0" Feb 02 21:40:08 crc kubenswrapper[4789]: I0202 21:40:08.179887 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5aa4ba1a-609b-483a-9754-8edf78aa7005-scripts\") pod \"ceilometer-0\" (UID: \"5aa4ba1a-609b-483a-9754-8edf78aa7005\") " pod="openstack/ceilometer-0" Feb 02 21:40:08 crc kubenswrapper[4789]: I0202 21:40:08.180118 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aa4ba1a-609b-483a-9754-8edf78aa7005-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5aa4ba1a-609b-483a-9754-8edf78aa7005\") " pod="openstack/ceilometer-0" Feb 02 21:40:08 crc kubenswrapper[4789]: I0202 21:40:08.180278 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aa4ba1a-609b-483a-9754-8edf78aa7005-config-data\") pod \"ceilometer-0\" (UID: \"5aa4ba1a-609b-483a-9754-8edf78aa7005\") " pod="openstack/ceilometer-0" Feb 02 21:40:08 crc kubenswrapper[4789]: I0202 21:40:08.199557 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxwbt\" (UniqueName: \"kubernetes.io/projected/5aa4ba1a-609b-483a-9754-8edf78aa7005-kube-api-access-kxwbt\") pod \"ceilometer-0\" (UID: \"5aa4ba1a-609b-483a-9754-8edf78aa7005\") " pod="openstack/ceilometer-0" Feb 02 21:40:08 crc kubenswrapper[4789]: I0202 21:40:08.266205 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 21:40:08 crc kubenswrapper[4789]: I0202 21:40:08.440508 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c202a904-ccae-4f90-a284-d7e2a3b5e0f7" path="/var/lib/kubelet/pods/c202a904-ccae-4f90-a284-d7e2a3b5e0f7/volumes" Feb 02 21:40:08 crc kubenswrapper[4789]: I0202 21:40:08.762419 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 21:40:08 crc kubenswrapper[4789]: W0202 21:40:08.772446 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5aa4ba1a_609b_483a_9754_8edf78aa7005.slice/crio-70f6f1e5ef75bb62ec00183491fc73f4856d00ad44462794e8992028c2521221 WatchSource:0}: Error finding container 70f6f1e5ef75bb62ec00183491fc73f4856d00ad44462794e8992028c2521221: Status 404 returned error can't find the container with id 70f6f1e5ef75bb62ec00183491fc73f4856d00ad44462794e8992028c2521221 Feb 02 21:40:08 crc kubenswrapper[4789]: I0202 21:40:08.846687 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a92498f9-dccf-4af4-b92c-41f6e89be39f","Type":"ContainerStarted","Data":"c11d939ff2d5fb61221ce2e270fc2f6b3b1046d00d31fecd8f7b5af21a0e45fa"} Feb 02 21:40:08 crc kubenswrapper[4789]: I0202 21:40:08.848268 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5aa4ba1a-609b-483a-9754-8edf78aa7005","Type":"ContainerStarted","Data":"70f6f1e5ef75bb62ec00183491fc73f4856d00ad44462794e8992028c2521221"} Feb 02 21:40:08 crc kubenswrapper[4789]: I0202 21:40:08.850653 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3e0c700a-39af-4c13-806d-4c049fe7bb85","Type":"ContainerStarted","Data":"956d3f1d07f0104b3a25469c0eb7299c4a5a0290b88633c7743892128eef2ade"} Feb 02 21:40:08 crc kubenswrapper[4789]: I0202 21:40:08.850892 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="3e0c700a-39af-4c13-806d-4c049fe7bb85" containerName="cinder-api-log" containerID="cri-o://fc89442da9c302f532838e2ef8a3b0d94646c34b1ef04de31a587f543a63c699" gracePeriod=30 Feb 02 21:40:08 crc kubenswrapper[4789]: I0202 21:40:08.851008 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="3e0c700a-39af-4c13-806d-4c049fe7bb85" containerName="cinder-api" containerID="cri-o://956d3f1d07f0104b3a25469c0eb7299c4a5a0290b88633c7743892128eef2ade" gracePeriod=30 Feb 02 21:40:08 crc kubenswrapper[4789]: I0202 21:40:08.851124 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 02 21:40:08 crc kubenswrapper[4789]: I0202 21:40:08.878950 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.873817253 podStartE2EDuration="4.878929684s" podCreationTimestamp="2026-02-02 21:40:04 +0000 UTC" firstStartedPulling="2026-02-02 21:40:05.142815634 +0000 UTC m=+1225.437840653" lastFinishedPulling="2026-02-02 21:40:06.147928065 +0000 UTC m=+1226.442953084" observedRunningTime="2026-02-02 21:40:08.866824563 +0000 UTC m=+1229.161849592" watchObservedRunningTime="2026-02-02 21:40:08.878929684 +0000 UTC m=+1229.173954713" Feb 02 21:40:08 crc kubenswrapper[4789]: I0202 21:40:08.906562 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" 
podStartSLOduration=4.906540444 podStartE2EDuration="4.906540444s" podCreationTimestamp="2026-02-02 21:40:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:40:08.898162327 +0000 UTC m=+1229.193187346" watchObservedRunningTime="2026-02-02 21:40:08.906540444 +0000 UTC m=+1229.201565473" Feb 02 21:40:09 crc kubenswrapper[4789]: I0202 21:40:09.417619 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 21:40:09 crc kubenswrapper[4789]: I0202 21:40:09.596740 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 02 21:40:09 crc kubenswrapper[4789]: I0202 21:40:09.598641 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3e0c700a-39af-4c13-806d-4c049fe7bb85-etc-machine-id\") pod \"3e0c700a-39af-4c13-806d-4c049fe7bb85\" (UID: \"3e0c700a-39af-4c13-806d-4c049fe7bb85\") " Feb 02 21:40:09 crc kubenswrapper[4789]: I0202 21:40:09.598791 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e0c700a-39af-4c13-806d-4c049fe7bb85-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3e0c700a-39af-4c13-806d-4c049fe7bb85" (UID: "3e0c700a-39af-4c13-806d-4c049fe7bb85"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:40:09 crc kubenswrapper[4789]: I0202 21:40:09.599000 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e0c700a-39af-4c13-806d-4c049fe7bb85-config-data\") pod \"3e0c700a-39af-4c13-806d-4c049fe7bb85\" (UID: \"3e0c700a-39af-4c13-806d-4c049fe7bb85\") " Feb 02 21:40:09 crc kubenswrapper[4789]: I0202 21:40:09.600111 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e0c700a-39af-4c13-806d-4c049fe7bb85-logs\") pod \"3e0c700a-39af-4c13-806d-4c049fe7bb85\" (UID: \"3e0c700a-39af-4c13-806d-4c049fe7bb85\") " Feb 02 21:40:09 crc kubenswrapper[4789]: I0202 21:40:09.600350 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zncxh\" (UniqueName: \"kubernetes.io/projected/3e0c700a-39af-4c13-806d-4c049fe7bb85-kube-api-access-zncxh\") pod \"3e0c700a-39af-4c13-806d-4c049fe7bb85\" (UID: \"3e0c700a-39af-4c13-806d-4c049fe7bb85\") " Feb 02 21:40:09 crc kubenswrapper[4789]: I0202 21:40:09.600445 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e0c700a-39af-4c13-806d-4c049fe7bb85-scripts\") pod \"3e0c700a-39af-4c13-806d-4c049fe7bb85\" (UID: \"3e0c700a-39af-4c13-806d-4c049fe7bb85\") " Feb 02 21:40:09 crc kubenswrapper[4789]: I0202 21:40:09.600562 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e0c700a-39af-4c13-806d-4c049fe7bb85-logs" (OuterVolumeSpecName: "logs") pod "3e0c700a-39af-4c13-806d-4c049fe7bb85" (UID: "3e0c700a-39af-4c13-806d-4c049fe7bb85"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:40:09 crc kubenswrapper[4789]: I0202 21:40:09.600535 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3e0c700a-39af-4c13-806d-4c049fe7bb85-config-data-custom\") pod \"3e0c700a-39af-4c13-806d-4c049fe7bb85\" (UID: \"3e0c700a-39af-4c13-806d-4c049fe7bb85\") " Feb 02 21:40:09 crc kubenswrapper[4789]: I0202 21:40:09.600762 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e0c700a-39af-4c13-806d-4c049fe7bb85-combined-ca-bundle\") pod \"3e0c700a-39af-4c13-806d-4c049fe7bb85\" (UID: \"3e0c700a-39af-4c13-806d-4c049fe7bb85\") " Feb 02 21:40:09 crc kubenswrapper[4789]: I0202 21:40:09.601729 4789 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3e0c700a-39af-4c13-806d-4c049fe7bb85-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:09 crc kubenswrapper[4789]: I0202 21:40:09.601787 4789 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e0c700a-39af-4c13-806d-4c049fe7bb85-logs\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:09 crc kubenswrapper[4789]: I0202 21:40:09.603005 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e0c700a-39af-4c13-806d-4c049fe7bb85-scripts" (OuterVolumeSpecName: "scripts") pod "3e0c700a-39af-4c13-806d-4c049fe7bb85" (UID: "3e0c700a-39af-4c13-806d-4c049fe7bb85"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:40:09 crc kubenswrapper[4789]: I0202 21:40:09.604360 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e0c700a-39af-4c13-806d-4c049fe7bb85-kube-api-access-zncxh" (OuterVolumeSpecName: "kube-api-access-zncxh") pod "3e0c700a-39af-4c13-806d-4c049fe7bb85" (UID: "3e0c700a-39af-4c13-806d-4c049fe7bb85"). InnerVolumeSpecName "kube-api-access-zncxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:40:09 crc kubenswrapper[4789]: I0202 21:40:09.607238 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e0c700a-39af-4c13-806d-4c049fe7bb85-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3e0c700a-39af-4c13-806d-4c049fe7bb85" (UID: "3e0c700a-39af-4c13-806d-4c049fe7bb85"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:40:09 crc kubenswrapper[4789]: I0202 21:40:09.647652 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e0c700a-39af-4c13-806d-4c049fe7bb85-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e0c700a-39af-4c13-806d-4c049fe7bb85" (UID: "3e0c700a-39af-4c13-806d-4c049fe7bb85"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:40:09 crc kubenswrapper[4789]: I0202 21:40:09.648140 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e0c700a-39af-4c13-806d-4c049fe7bb85-config-data" (OuterVolumeSpecName: "config-data") pod "3e0c700a-39af-4c13-806d-4c049fe7bb85" (UID: "3e0c700a-39af-4c13-806d-4c049fe7bb85"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:40:09 crc kubenswrapper[4789]: I0202 21:40:09.703453 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e0c700a-39af-4c13-806d-4c049fe7bb85-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:09 crc kubenswrapper[4789]: I0202 21:40:09.703484 4789 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3e0c700a-39af-4c13-806d-4c049fe7bb85-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:09 crc kubenswrapper[4789]: I0202 21:40:09.703496 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e0c700a-39af-4c13-806d-4c049fe7bb85-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:09 crc kubenswrapper[4789]: I0202 21:40:09.703506 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e0c700a-39af-4c13-806d-4c049fe7bb85-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:09 crc kubenswrapper[4789]: I0202 21:40:09.703514 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zncxh\" (UniqueName: \"kubernetes.io/projected/3e0c700a-39af-4c13-806d-4c049fe7bb85-kube-api-access-zncxh\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:09 crc kubenswrapper[4789]: I0202 21:40:09.866086 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5aa4ba1a-609b-483a-9754-8edf78aa7005","Type":"ContainerStarted","Data":"3f71d60c1d0c4efb65058a7c28d385073be727dfe87a5b819bf554bb70893395"} Feb 02 21:40:09 crc kubenswrapper[4789]: I0202 21:40:09.867941 4789 generic.go:334] "Generic (PLEG): container finished" podID="3e0c700a-39af-4c13-806d-4c049fe7bb85" containerID="956d3f1d07f0104b3a25469c0eb7299c4a5a0290b88633c7743892128eef2ade" exitCode=0 Feb 02 21:40:09 crc kubenswrapper[4789]: I0202 21:40:09.867985 4789 generic.go:334] "Generic (PLEG): container finished" podID="3e0c700a-39af-4c13-806d-4c049fe7bb85" containerID="fc89442da9c302f532838e2ef8a3b0d94646c34b1ef04de31a587f543a63c699" exitCode=143 Feb 02 21:40:09 crc kubenswrapper[4789]: I0202 21:40:09.868010 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3e0c700a-39af-4c13-806d-4c049fe7bb85","Type":"ContainerDied","Data":"956d3f1d07f0104b3a25469c0eb7299c4a5a0290b88633c7743892128eef2ade"} Feb 02 21:40:09 crc kubenswrapper[4789]: I0202 21:40:09.868097 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3e0c700a-39af-4c13-806d-4c049fe7bb85","Type":"ContainerDied","Data":"fc89442da9c302f532838e2ef8a3b0d94646c34b1ef04de31a587f543a63c699"} Feb 02 21:40:09 crc kubenswrapper[4789]: I0202 21:40:09.868120 4789 scope.go:117] "RemoveContainer" containerID="956d3f1d07f0104b3a25469c0eb7299c4a5a0290b88633c7743892128eef2ade" Feb 02 21:40:09 crc kubenswrapper[4789]: I0202 21:40:09.868203 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 02 21:40:09 crc kubenswrapper[4789]: I0202 21:40:09.868121 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3e0c700a-39af-4c13-806d-4c049fe7bb85","Type":"ContainerDied","Data":"05dba8d4e9be229ea0a40b38229f599a32e766e30f263e25582f5562937c4898"} Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.048560 4789 scope.go:117] "RemoveContainer" containerID="fc89442da9c302f532838e2ef8a3b0d94646c34b1ef04de31a587f543a63c699" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.058884 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.081431 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.088840 4789 scope.go:117] "RemoveContainer" containerID="956d3f1d07f0104b3a25469c0eb7299c4a5a0290b88633c7743892128eef2ade" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.089680 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 02 21:40:10 crc kubenswrapper[4789]: E0202 21:40:10.090111 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e0c700a-39af-4c13-806d-4c049fe7bb85" containerName="cinder-api-log" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.090123 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e0c700a-39af-4c13-806d-4c049fe7bb85" containerName="cinder-api-log" Feb 02 21:40:10 crc kubenswrapper[4789]: E0202 21:40:10.090132 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e0c700a-39af-4c13-806d-4c049fe7bb85" containerName="cinder-api" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.090138 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e0c700a-39af-4c13-806d-4c049fe7bb85" containerName="cinder-api" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.090287 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e0c700a-39af-4c13-806d-4c049fe7bb85" containerName="cinder-api" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.090306 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e0c700a-39af-4c13-806d-4c049fe7bb85" containerName="cinder-api-log" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.091251 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 02 21:40:10 crc kubenswrapper[4789]: E0202 21:40:10.091703 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"956d3f1d07f0104b3a25469c0eb7299c4a5a0290b88633c7743892128eef2ade\": container with ID starting with 956d3f1d07f0104b3a25469c0eb7299c4a5a0290b88633c7743892128eef2ade not found: ID does not exist" containerID="956d3f1d07f0104b3a25469c0eb7299c4a5a0290b88633c7743892128eef2ade" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.091834 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"956d3f1d07f0104b3a25469c0eb7299c4a5a0290b88633c7743892128eef2ade"} err="failed to get container status \"956d3f1d07f0104b3a25469c0eb7299c4a5a0290b88633c7743892128eef2ade\": rpc error: code = NotFound desc = could not find container \"956d3f1d07f0104b3a25469c0eb7299c4a5a0290b88633c7743892128eef2ade\": container with ID starting with 956d3f1d07f0104b3a25469c0eb7299c4a5a0290b88633c7743892128eef2ade not found: ID does not exist" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.091940 4789 scope.go:117] "RemoveContainer" containerID="fc89442da9c302f532838e2ef8a3b0d94646c34b1ef04de31a587f543a63c699" Feb 02 21:40:10 crc kubenswrapper[4789]: E0202 21:40:10.095743 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc89442da9c302f532838e2ef8a3b0d94646c34b1ef04de31a587f543a63c699\": container with ID starting with fc89442da9c302f532838e2ef8a3b0d94646c34b1ef04de31a587f543a63c699 not found: ID does not exist" containerID="fc89442da9c302f532838e2ef8a3b0d94646c34b1ef04de31a587f543a63c699" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.096066 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc89442da9c302f532838e2ef8a3b0d94646c34b1ef04de31a587f543a63c699"} err="failed to get container status \"fc89442da9c302f532838e2ef8a3b0d94646c34b1ef04de31a587f543a63c699\": rpc error: code = NotFound desc = could not find container \"fc89442da9c302f532838e2ef8a3b0d94646c34b1ef04de31a587f543a63c699\": container with ID starting with fc89442da9c302f532838e2ef8a3b0d94646c34b1ef04de31a587f543a63c699 not found: ID does not exist" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.096094 4789 scope.go:117] "RemoveContainer" containerID="956d3f1d07f0104b3a25469c0eb7299c4a5a0290b88633c7743892128eef2ade" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.095979 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.096063 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.096299 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.099715 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"956d3f1d07f0104b3a25469c0eb7299c4a5a0290b88633c7743892128eef2ade"} err="failed to get container status \"956d3f1d07f0104b3a25469c0eb7299c4a5a0290b88633c7743892128eef2ade\": rpc error: code = NotFound desc = could not find container \"956d3f1d07f0104b3a25469c0eb7299c4a5a0290b88633c7743892128eef2ade\": container with ID starting with 
956d3f1d07f0104b3a25469c0eb7299c4a5a0290b88633c7743892128eef2ade not found: ID does not exist" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.099749 4789 scope.go:117] "RemoveContainer" containerID="fc89442da9c302f532838e2ef8a3b0d94646c34b1ef04de31a587f543a63c699" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.103954 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc89442da9c302f532838e2ef8a3b0d94646c34b1ef04de31a587f543a63c699"} err="failed to get container status \"fc89442da9c302f532838e2ef8a3b0d94646c34b1ef04de31a587f543a63c699\": rpc error: code = NotFound desc = could not find container \"fc89442da9c302f532838e2ef8a3b0d94646c34b1ef04de31a587f543a63c699\": container with ID starting with fc89442da9c302f532838e2ef8a3b0d94646c34b1ef04de31a587f543a63c699 not found: ID does not exist" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.107473 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.211666 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7acbb536-0a08-4132-a84a-848735b0e7f4-config-data\") pod \"cinder-api-0\" (UID: \"7acbb536-0a08-4132-a84a-848735b0e7f4\") " pod="openstack/cinder-api-0" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.211958 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7acbb536-0a08-4132-a84a-848735b0e7f4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7acbb536-0a08-4132-a84a-848735b0e7f4\") " pod="openstack/cinder-api-0" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.212068 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7acbb536-0a08-4132-a84a-848735b0e7f4-config-data-custom\") pod \"cinder-api-0\" (UID: \"7acbb536-0a08-4132-a84a-848735b0e7f4\") " pod="openstack/cinder-api-0" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.212151 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7acbb536-0a08-4132-a84a-848735b0e7f4-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"7acbb536-0a08-4132-a84a-848735b0e7f4\") " pod="openstack/cinder-api-0" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.212223 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7acbb536-0a08-4132-a84a-848735b0e7f4-scripts\") pod \"cinder-api-0\" (UID: \"7acbb536-0a08-4132-a84a-848735b0e7f4\") " pod="openstack/cinder-api-0" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.212302 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7acbb536-0a08-4132-a84a-848735b0e7f4-public-tls-certs\") pod \"cinder-api-0\" (UID: \"7acbb536-0a08-4132-a84a-848735b0e7f4\") " pod="openstack/cinder-api-0" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.212370 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7acbb536-0a08-4132-a84a-848735b0e7f4-combined-ca-bundle\") pod 
\"cinder-api-0\" (UID: \"7acbb536-0a08-4132-a84a-848735b0e7f4\") " pod="openstack/cinder-api-0" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.212477 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7acbb536-0a08-4132-a84a-848735b0e7f4-logs\") pod \"cinder-api-0\" (UID: \"7acbb536-0a08-4132-a84a-848735b0e7f4\") " pod="openstack/cinder-api-0" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.212599 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mbtn\" (UniqueName: \"kubernetes.io/projected/7acbb536-0a08-4132-a84a-848735b0e7f4-kube-api-access-2mbtn\") pod \"cinder-api-0\" (UID: \"7acbb536-0a08-4132-a84a-848735b0e7f4\") " pod="openstack/cinder-api-0" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.313985 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7acbb536-0a08-4132-a84a-848735b0e7f4-logs\") pod \"cinder-api-0\" (UID: \"7acbb536-0a08-4132-a84a-848735b0e7f4\") " pod="openstack/cinder-api-0" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.314030 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mbtn\" (UniqueName: \"kubernetes.io/projected/7acbb536-0a08-4132-a84a-848735b0e7f4-kube-api-access-2mbtn\") pod \"cinder-api-0\" (UID: \"7acbb536-0a08-4132-a84a-848735b0e7f4\") " pod="openstack/cinder-api-0" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.314084 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7acbb536-0a08-4132-a84a-848735b0e7f4-config-data\") pod \"cinder-api-0\" (UID: \"7acbb536-0a08-4132-a84a-848735b0e7f4\") " pod="openstack/cinder-api-0" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.314155 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7acbb536-0a08-4132-a84a-848735b0e7f4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7acbb536-0a08-4132-a84a-848735b0e7f4\") " pod="openstack/cinder-api-0" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.314170 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7acbb536-0a08-4132-a84a-848735b0e7f4-config-data-custom\") pod \"cinder-api-0\" (UID: \"7acbb536-0a08-4132-a84a-848735b0e7f4\") " pod="openstack/cinder-api-0" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.314193 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7acbb536-0a08-4132-a84a-848735b0e7f4-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"7acbb536-0a08-4132-a84a-848735b0e7f4\") " pod="openstack/cinder-api-0" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.314216 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7acbb536-0a08-4132-a84a-848735b0e7f4-scripts\") pod \"cinder-api-0\" (UID: \"7acbb536-0a08-4132-a84a-848735b0e7f4\") " pod="openstack/cinder-api-0" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.314240 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7acbb536-0a08-4132-a84a-848735b0e7f4-public-tls-certs\") pod \"cinder-api-0\" (UID: \"7acbb536-0a08-4132-a84a-848735b0e7f4\") " pod="openstack/cinder-api-0" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.314260 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7acbb536-0a08-4132-a84a-848735b0e7f4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7acbb536-0a08-4132-a84a-848735b0e7f4\") " pod="openstack/cinder-api-0" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.314532 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7acbb536-0a08-4132-a84a-848735b0e7f4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7acbb536-0a08-4132-a84a-848735b0e7f4\") " pod="openstack/cinder-api-0" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.314640 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7acbb536-0a08-4132-a84a-848735b0e7f4-logs\") pod \"cinder-api-0\" (UID: \"7acbb536-0a08-4132-a84a-848735b0e7f4\") " pod="openstack/cinder-api-0" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.320952 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7acbb536-0a08-4132-a84a-848735b0e7f4-scripts\") pod \"cinder-api-0\" (UID: \"7acbb536-0a08-4132-a84a-848735b0e7f4\") " pod="openstack/cinder-api-0" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.321363 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7acbb536-0a08-4132-a84a-848735b0e7f4-config-data\") pod \"cinder-api-0\" (UID: \"7acbb536-0a08-4132-a84a-848735b0e7f4\") " pod="openstack/cinder-api-0" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.325118 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7acbb536-0a08-4132-a84a-848735b0e7f4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7acbb536-0a08-4132-a84a-848735b0e7f4\") " pod="openstack/cinder-api-0" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.326564 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7acbb536-0a08-4132-a84a-848735b0e7f4-public-tls-certs\") pod \"cinder-api-0\" (UID: \"7acbb536-0a08-4132-a84a-848735b0e7f4\") " pod="openstack/cinder-api-0" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.337313 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7acbb536-0a08-4132-a84a-848735b0e7f4-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"7acbb536-0a08-4132-a84a-848735b0e7f4\") " pod="openstack/cinder-api-0" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.344458 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7acbb536-0a08-4132-a84a-848735b0e7f4-config-data-custom\") pod \"cinder-api-0\" (UID: \"7acbb536-0a08-4132-a84a-848735b0e7f4\") " pod="openstack/cinder-api-0" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.348266 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mbtn\" (UniqueName: 
\"kubernetes.io/projected/7acbb536-0a08-4132-a84a-848735b0e7f4-kube-api-access-2mbtn\") pod \"cinder-api-0\" (UID: \"7acbb536-0a08-4132-a84a-848735b0e7f4\") " pod="openstack/cinder-api-0" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.422191 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.432162 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e0c700a-39af-4c13-806d-4c049fe7bb85" path="/var/lib/kubelet/pods/3e0c700a-39af-4c13-806d-4c049fe7bb85/volumes" Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.900484 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5aa4ba1a-609b-483a-9754-8edf78aa7005","Type":"ContainerStarted","Data":"8462926ba635cfdccaca393080803e41831db8362d17e2c80b17df9d05bdc58a"} Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.900888 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5aa4ba1a-609b-483a-9754-8edf78aa7005","Type":"ContainerStarted","Data":"a2a34b67ac44800a4e7113e77632478a598a725c39031ea40fca40f80ce116f7"} Feb 02 21:40:10 crc kubenswrapper[4789]: W0202 21:40:10.936425 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7acbb536_0a08_4132_a84a_848735b0e7f4.slice/crio-786426b72abb5ee16b53d1263b8a0ac435b1b567312952f1f1931e7409c1d80f WatchSource:0}: Error finding container 786426b72abb5ee16b53d1263b8a0ac435b1b567312952f1f1931e7409c1d80f: Status 404 returned error can't find the container with id 786426b72abb5ee16b53d1263b8a0ac435b1b567312952f1f1931e7409c1d80f Feb 02 21:40:10 crc kubenswrapper[4789]: I0202 21:40:10.952660 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 21:40:11 crc kubenswrapper[4789]: I0202 21:40:11.273673 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6b8b9b54f6-jfnqs"] Feb 02 21:40:11 crc kubenswrapper[4789]: I0202 21:40:11.278244 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6b8b9b54f6-jfnqs" Feb 02 21:40:11 crc kubenswrapper[4789]: I0202 21:40:11.280156 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 02 21:40:11 crc kubenswrapper[4789]: I0202 21:40:11.281790 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 02 21:40:11 crc kubenswrapper[4789]: I0202 21:40:11.294106 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6b8b9b54f6-jfnqs"] Feb 02 21:40:11 crc kubenswrapper[4789]: I0202 21:40:11.368311 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-config-data-custom\") pod \"barbican-api-6b8b9b54f6-jfnqs\" (UID: \"7d53e4c0-add2-4cfd-bbea-e0a1d3196091\") " pod="openstack/barbican-api-6b8b9b54f6-jfnqs" Feb 02 21:40:11 crc kubenswrapper[4789]: I0202 21:40:11.368352 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-logs\") pod \"barbican-api-6b8b9b54f6-jfnqs\" (UID: \"7d53e4c0-add2-4cfd-bbea-e0a1d3196091\") " pod="openstack/barbican-api-6b8b9b54f6-jfnqs" Feb 02 21:40:11 crc kubenswrapper[4789]: I0202 21:40:11.368406 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-config-data\") pod \"barbican-api-6b8b9b54f6-jfnqs\" (UID: \"7d53e4c0-add2-4cfd-bbea-e0a1d3196091\") " pod="openstack/barbican-api-6b8b9b54f6-jfnqs" Feb 02 21:40:11 crc kubenswrapper[4789]: I0202 21:40:11.368436 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-internal-tls-certs\") pod \"barbican-api-6b8b9b54f6-jfnqs\" (UID: \"7d53e4c0-add2-4cfd-bbea-e0a1d3196091\") " pod="openstack/barbican-api-6b8b9b54f6-jfnqs" Feb 02 21:40:11 crc kubenswrapper[4789]: I0202 21:40:11.368554 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-combined-ca-bundle\") pod \"barbican-api-6b8b9b54f6-jfnqs\" (UID: \"7d53e4c0-add2-4cfd-bbea-e0a1d3196091\") " pod="openstack/barbican-api-6b8b9b54f6-jfnqs" Feb 02 21:40:11 crc kubenswrapper[4789]: I0202 21:40:11.368610 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jhtx\" (UniqueName: \"kubernetes.io/projected/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-kube-api-access-9jhtx\") pod \"barbican-api-6b8b9b54f6-jfnqs\" (UID: \"7d53e4c0-add2-4cfd-bbea-e0a1d3196091\") " pod="openstack/barbican-api-6b8b9b54f6-jfnqs" Feb 02 21:40:11 crc kubenswrapper[4789]: I0202 21:40:11.368631 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-public-tls-certs\") pod \"barbican-api-6b8b9b54f6-jfnqs\" (UID: \"7d53e4c0-add2-4cfd-bbea-e0a1d3196091\") " pod="openstack/barbican-api-6b8b9b54f6-jfnqs" Feb 02 21:40:11 crc kubenswrapper[4789]: I0202 21:40:11.471109 4789 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-config-data\") pod \"barbican-api-6b8b9b54f6-jfnqs\" (UID: \"7d53e4c0-add2-4cfd-bbea-e0a1d3196091\") " pod="openstack/barbican-api-6b8b9b54f6-jfnqs" Feb 02 21:40:11 crc kubenswrapper[4789]: I0202 21:40:11.471168 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-internal-tls-certs\") pod \"barbican-api-6b8b9b54f6-jfnqs\" (UID: \"7d53e4c0-add2-4cfd-bbea-e0a1d3196091\") " pod="openstack/barbican-api-6b8b9b54f6-jfnqs" Feb 02 21:40:11 crc kubenswrapper[4789]: I0202 21:40:11.471254 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-combined-ca-bundle\") pod \"barbican-api-6b8b9b54f6-jfnqs\" (UID: \"7d53e4c0-add2-4cfd-bbea-e0a1d3196091\") " pod="openstack/barbican-api-6b8b9b54f6-jfnqs" Feb 02 21:40:11 crc kubenswrapper[4789]: I0202 21:40:11.471330 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jhtx\" (UniqueName: \"kubernetes.io/projected/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-kube-api-access-9jhtx\") pod \"barbican-api-6b8b9b54f6-jfnqs\" (UID: \"7d53e4c0-add2-4cfd-bbea-e0a1d3196091\") " pod="openstack/barbican-api-6b8b9b54f6-jfnqs" Feb 02 21:40:11 crc kubenswrapper[4789]: I0202 21:40:11.471360 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-public-tls-certs\") pod \"barbican-api-6b8b9b54f6-jfnqs\" (UID: \"7d53e4c0-add2-4cfd-bbea-e0a1d3196091\") " pod="openstack/barbican-api-6b8b9b54f6-jfnqs" Feb 02 21:40:11 crc kubenswrapper[4789]: I0202 21:40:11.471392 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-config-data-custom\") pod \"barbican-api-6b8b9b54f6-jfnqs\" (UID: \"7d53e4c0-add2-4cfd-bbea-e0a1d3196091\") " pod="openstack/barbican-api-6b8b9b54f6-jfnqs" Feb 02 21:40:11 crc kubenswrapper[4789]: I0202 21:40:11.471410 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-logs\") pod \"barbican-api-6b8b9b54f6-jfnqs\" (UID: \"7d53e4c0-add2-4cfd-bbea-e0a1d3196091\") " pod="openstack/barbican-api-6b8b9b54f6-jfnqs" Feb 02 21:40:11 crc kubenswrapper[4789]: I0202 21:40:11.471926 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-logs\") pod \"barbican-api-6b8b9b54f6-jfnqs\" (UID: \"7d53e4c0-add2-4cfd-bbea-e0a1d3196091\") " pod="openstack/barbican-api-6b8b9b54f6-jfnqs" Feb 02 21:40:11 crc kubenswrapper[4789]: I0202 21:40:11.478330 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-config-data-custom\") pod \"barbican-api-6b8b9b54f6-jfnqs\" (UID: \"7d53e4c0-add2-4cfd-bbea-e0a1d3196091\") " pod="openstack/barbican-api-6b8b9b54f6-jfnqs" Feb 02 21:40:11 crc kubenswrapper[4789]: I0202 21:40:11.478691 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-public-tls-certs\") pod \"barbican-api-6b8b9b54f6-jfnqs\" (UID: \"7d53e4c0-add2-4cfd-bbea-e0a1d3196091\") " pod="openstack/barbican-api-6b8b9b54f6-jfnqs" Feb 02 21:40:11 crc kubenswrapper[4789]: I0202 21:40:11.479291 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-config-data\") pod \"barbican-api-6b8b9b54f6-jfnqs\" (UID: \"7d53e4c0-add2-4cfd-bbea-e0a1d3196091\") " pod="openstack/barbican-api-6b8b9b54f6-jfnqs" Feb 02 21:40:11 crc kubenswrapper[4789]: I0202 21:40:11.479455 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-combined-ca-bundle\") pod \"barbican-api-6b8b9b54f6-jfnqs\" (UID: \"7d53e4c0-add2-4cfd-bbea-e0a1d3196091\") " pod="openstack/barbican-api-6b8b9b54f6-jfnqs" Feb 02 21:40:11 crc kubenswrapper[4789]: I0202 21:40:11.484220 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-internal-tls-certs\") pod \"barbican-api-6b8b9b54f6-jfnqs\" (UID: \"7d53e4c0-add2-4cfd-bbea-e0a1d3196091\") " pod="openstack/barbican-api-6b8b9b54f6-jfnqs" Feb 02 21:40:11 crc kubenswrapper[4789]: I0202 21:40:11.488377 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jhtx\" (UniqueName: \"kubernetes.io/projected/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-kube-api-access-9jhtx\") pod \"barbican-api-6b8b9b54f6-jfnqs\" (UID: \"7d53e4c0-add2-4cfd-bbea-e0a1d3196091\") " pod="openstack/barbican-api-6b8b9b54f6-jfnqs" Feb 02 21:40:11 crc kubenswrapper[4789]: I0202 21:40:11.613073 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6b8b9b54f6-jfnqs" Feb 02 21:40:11 crc kubenswrapper[4789]: I0202 21:40:11.947302 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7acbb536-0a08-4132-a84a-848735b0e7f4","Type":"ContainerStarted","Data":"c6597dc6aaeaebf47e22acb882e2ae643e5ed20e86abaacc9a1e3bf64ebb15a3"} Feb 02 21:40:11 crc kubenswrapper[4789]: I0202 21:40:11.947666 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7acbb536-0a08-4132-a84a-848735b0e7f4","Type":"ContainerStarted","Data":"786426b72abb5ee16b53d1263b8a0ac435b1b567312952f1f1931e7409c1d80f"} Feb 02 21:40:12 crc kubenswrapper[4789]: I0202 21:40:12.203902 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6b8b9b54f6-jfnqs"] Feb 02 21:40:12 crc kubenswrapper[4789]: I0202 21:40:12.973229 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b8b9b54f6-jfnqs" event={"ID":"7d53e4c0-add2-4cfd-bbea-e0a1d3196091","Type":"ContainerStarted","Data":"25969b57d6ee15da22b2fd6fac46c116130225ea93ef2013003c96e7fe1d6cca"} Feb 02 21:40:12 crc kubenswrapper[4789]: I0202 21:40:12.973765 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b8b9b54f6-jfnqs" event={"ID":"7d53e4c0-add2-4cfd-bbea-e0a1d3196091","Type":"ContainerStarted","Data":"4d137886e123097c6077816303161de8f1beb2278c8b0ec65bb058b0d9f03c90"} Feb 02 21:40:12 crc kubenswrapper[4789]: I0202 21:40:12.973781 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b8b9b54f6-jfnqs" event={"ID":"7d53e4c0-add2-4cfd-bbea-e0a1d3196091","Type":"ContainerStarted","Data":"7a775f6f6d427969f7331fcfc27a064e0d64b044f52aba9cb29e1ee6c0b0084f"} Feb 02 21:40:12 crc kubenswrapper[4789]: I0202 21:40:12.974960 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6b8b9b54f6-jfnqs" Feb 02 21:40:12 crc kubenswrapper[4789]: I0202 21:40:12.974991 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6b8b9b54f6-jfnqs" Feb 02 21:40:12 crc kubenswrapper[4789]: I0202 21:40:12.984929 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7acbb536-0a08-4132-a84a-848735b0e7f4","Type":"ContainerStarted","Data":"25ed4343b75caa0616ab66903bb372442dbf22b4a29f2c30b9fcf20df97021f7"} Feb 02 21:40:12 crc kubenswrapper[4789]: I0202 21:40:12.985710 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 02 21:40:13 crc kubenswrapper[4789]: I0202 21:40:13.008330 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6b8b9b54f6-jfnqs" podStartSLOduration=2.008309066 podStartE2EDuration="2.008309066s" podCreationTimestamp="2026-02-02 21:40:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:40:12.991654855 +0000 UTC m=+1233.286679874" watchObservedRunningTime="2026-02-02 21:40:13.008309066 +0000 UTC m=+1233.303334095" Feb 02 21:40:13 crc kubenswrapper[4789]: I0202 21:40:13.022690 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.022669582 podStartE2EDuration="3.022669582s" podCreationTimestamp="2026-02-02 21:40:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-02-02 21:40:13.021456658 +0000 UTC m=+1233.316481697" watchObservedRunningTime="2026-02-02 21:40:13.022669582 +0000 UTC m=+1233.317694601" Feb 02 21:40:14 crc kubenswrapper[4789]: I0202 21:40:14.000170 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5aa4ba1a-609b-483a-9754-8edf78aa7005","Type":"ContainerStarted","Data":"3828f3d41f48ee6fecf554ae9ad3599ba717793245db0423f1febe380ff11d25"} Feb 02 21:40:14 crc kubenswrapper[4789]: I0202 21:40:14.032974 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.92160214 podStartE2EDuration="7.032944624s" podCreationTimestamp="2026-02-02 21:40:07 +0000 UTC" firstStartedPulling="2026-02-02 21:40:08.775370081 +0000 UTC m=+1229.070395100" lastFinishedPulling="2026-02-02 21:40:12.886712575 +0000 UTC m=+1233.181737584" observedRunningTime="2026-02-02 21:40:14.02818313 +0000 UTC m=+1234.323208159" watchObservedRunningTime="2026-02-02 21:40:14.032944624 +0000 UTC m=+1234.327969673" Feb 02 21:40:14 crc kubenswrapper[4789]: I0202 21:40:14.818670 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 02 21:40:14 crc kubenswrapper[4789]: I0202 21:40:14.894294 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 21:40:14 crc kubenswrapper[4789]: I0202 21:40:14.926015 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-lg4td" Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.006925 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7vxpx"] Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.007160 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-7vxpx" podUID="6feea529-c31a-419e-92cd-46a8500def8d" containerName="dnsmasq-dns" containerID="cri-o://334df413b266b7e23094c3bbc93f06a2583f26ce70a6c07b68919d209fdbde59" gracePeriod=10 Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.027116 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a92498f9-dccf-4af4-b92c-41f6e89be39f" containerName="cinder-scheduler" containerID="cri-o://7f749f84676267d30d6db302074fd1dfd49ec9175dc0d6b18b63c496efa8f4f9" gracePeriod=30 Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.027678 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.028228 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a92498f9-dccf-4af4-b92c-41f6e89be39f" containerName="probe" containerID="cri-o://c11d939ff2d5fb61221ce2e270fc2f6b3b1046d00d31fecd8f7b5af21a0e45fa" gracePeriod=30 Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.053171 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5f5c98b5b4-lj8fk" Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.319520 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7c6f8769f9-9q4zq"] Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.319762 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7c6f8769f9-9q4zq" podUID="0e65e509-47bb-47f7-b129-74222d242dc8" 
containerName="neutron-api" containerID="cri-o://2158305fa050236e159ac1a89f405cae5ff72d7510d4a2c030187f59ccf546ba" gracePeriod=30 Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.320117 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7c6f8769f9-9q4zq" podUID="0e65e509-47bb-47f7-b129-74222d242dc8" containerName="neutron-httpd" containerID="cri-o://6b1800b5f6de7d72f18adbb4dbe5b8f41c0f5c63326f6f70efaf329bc7e0debb" gracePeriod=30 Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.362193 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7c4994f5f-462kb"] Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.363622 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7c4994f5f-462kb" Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.401790 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7c4994f5f-462kb"] Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.456310 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/78b23a1f-cc85-4767-b19c-6069adfc735a-config\") pod \"neutron-7c4994f5f-462kb\" (UID: \"78b23a1f-cc85-4767-b19c-6069adfc735a\") " pod="openstack/neutron-7c4994f5f-462kb" Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.456381 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/78b23a1f-cc85-4767-b19c-6069adfc735a-ovndb-tls-certs\") pod \"neutron-7c4994f5f-462kb\" (UID: \"78b23a1f-cc85-4767-b19c-6069adfc735a\") " pod="openstack/neutron-7c4994f5f-462kb" Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.456404 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78b23a1f-cc85-4767-b19c-6069adfc735a-public-tls-certs\") pod \"neutron-7c4994f5f-462kb\" (UID: \"78b23a1f-cc85-4767-b19c-6069adfc735a\") " pod="openstack/neutron-7c4994f5f-462kb" Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.456418 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2bgm\" (UniqueName: \"kubernetes.io/projected/78b23a1f-cc85-4767-b19c-6069adfc735a-kube-api-access-s2bgm\") pod \"neutron-7c4994f5f-462kb\" (UID: \"78b23a1f-cc85-4767-b19c-6069adfc735a\") " pod="openstack/neutron-7c4994f5f-462kb" Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.456438 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78b23a1f-cc85-4767-b19c-6069adfc735a-combined-ca-bundle\") pod \"neutron-7c4994f5f-462kb\" (UID: \"78b23a1f-cc85-4767-b19c-6069adfc735a\") " pod="openstack/neutron-7c4994f5f-462kb" Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.456471 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78b23a1f-cc85-4767-b19c-6069adfc735a-internal-tls-certs\") pod \"neutron-7c4994f5f-462kb\" (UID: \"78b23a1f-cc85-4767-b19c-6069adfc735a\") " pod="openstack/neutron-7c4994f5f-462kb" Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.456500 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" 
(UniqueName: \"kubernetes.io/secret/78b23a1f-cc85-4767-b19c-6069adfc735a-httpd-config\") pod \"neutron-7c4994f5f-462kb\" (UID: \"78b23a1f-cc85-4767-b19c-6069adfc735a\") " pod="openstack/neutron-7c4994f5f-462kb" Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.557985 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/78b23a1f-cc85-4767-b19c-6069adfc735a-ovndb-tls-certs\") pod \"neutron-7c4994f5f-462kb\" (UID: \"78b23a1f-cc85-4767-b19c-6069adfc735a\") " pod="openstack/neutron-7c4994f5f-462kb" Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.558022 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78b23a1f-cc85-4767-b19c-6069adfc735a-public-tls-certs\") pod \"neutron-7c4994f5f-462kb\" (UID: \"78b23a1f-cc85-4767-b19c-6069adfc735a\") " pod="openstack/neutron-7c4994f5f-462kb" Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.558038 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2bgm\" (UniqueName: \"kubernetes.io/projected/78b23a1f-cc85-4767-b19c-6069adfc735a-kube-api-access-s2bgm\") pod \"neutron-7c4994f5f-462kb\" (UID: \"78b23a1f-cc85-4767-b19c-6069adfc735a\") " pod="openstack/neutron-7c4994f5f-462kb" Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.558056 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78b23a1f-cc85-4767-b19c-6069adfc735a-combined-ca-bundle\") pod \"neutron-7c4994f5f-462kb\" (UID: \"78b23a1f-cc85-4767-b19c-6069adfc735a\") " pod="openstack/neutron-7c4994f5f-462kb" Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.558120 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78b23a1f-cc85-4767-b19c-6069adfc735a-internal-tls-certs\") pod \"neutron-7c4994f5f-462kb\" (UID: \"78b23a1f-cc85-4767-b19c-6069adfc735a\") " pod="openstack/neutron-7c4994f5f-462kb" Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.558149 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/78b23a1f-cc85-4767-b19c-6069adfc735a-httpd-config\") pod \"neutron-7c4994f5f-462kb\" (UID: \"78b23a1f-cc85-4767-b19c-6069adfc735a\") " pod="openstack/neutron-7c4994f5f-462kb" Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.558237 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/78b23a1f-cc85-4767-b19c-6069adfc735a-config\") pod \"neutron-7c4994f5f-462kb\" (UID: \"78b23a1f-cc85-4767-b19c-6069adfc735a\") " pod="openstack/neutron-7c4994f5f-462kb" Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.569051 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/78b23a1f-cc85-4767-b19c-6069adfc735a-config\") pod \"neutron-7c4994f5f-462kb\" (UID: \"78b23a1f-cc85-4767-b19c-6069adfc735a\") " pod="openstack/neutron-7c4994f5f-462kb" Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.570866 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78b23a1f-cc85-4767-b19c-6069adfc735a-internal-tls-certs\") pod \"neutron-7c4994f5f-462kb\" (UID: \"78b23a1f-cc85-4767-b19c-6069adfc735a\") " 
pod="openstack/neutron-7c4994f5f-462kb" Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.578298 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/78b23a1f-cc85-4767-b19c-6069adfc735a-ovndb-tls-certs\") pod \"neutron-7c4994f5f-462kb\" (UID: \"78b23a1f-cc85-4767-b19c-6069adfc735a\") " pod="openstack/neutron-7c4994f5f-462kb" Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.581249 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78b23a1f-cc85-4767-b19c-6069adfc735a-public-tls-certs\") pod \"neutron-7c4994f5f-462kb\" (UID: \"78b23a1f-cc85-4767-b19c-6069adfc735a\") " pod="openstack/neutron-7c4994f5f-462kb" Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.599367 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/78b23a1f-cc85-4767-b19c-6069adfc735a-httpd-config\") pod \"neutron-7c4994f5f-462kb\" (UID: \"78b23a1f-cc85-4767-b19c-6069adfc735a\") " pod="openstack/neutron-7c4994f5f-462kb" Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.600990 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78b23a1f-cc85-4767-b19c-6069adfc735a-combined-ca-bundle\") pod \"neutron-7c4994f5f-462kb\" (UID: \"78b23a1f-cc85-4767-b19c-6069adfc735a\") " pod="openstack/neutron-7c4994f5f-462kb" Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.613466 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2bgm\" (UniqueName: \"kubernetes.io/projected/78b23a1f-cc85-4767-b19c-6069adfc735a-kube-api-access-s2bgm\") pod \"neutron-7c4994f5f-462kb\" (UID: \"78b23a1f-cc85-4767-b19c-6069adfc735a\") " pod="openstack/neutron-7c4994f5f-462kb" Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.690906 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-7c6f8769f9-9q4zq" podUID="0e65e509-47bb-47f7-b129-74222d242dc8" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.154:9696/\": read tcp 10.217.0.2:41634->10.217.0.154:9696: read: connection reset by peer" Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.741764 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7c4994f5f-462kb" Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.834969 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-7vxpx" Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.964743 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6feea529-c31a-419e-92cd-46a8500def8d-ovsdbserver-nb\") pod \"6feea529-c31a-419e-92cd-46a8500def8d\" (UID: \"6feea529-c31a-419e-92cd-46a8500def8d\") " Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.965060 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6feea529-c31a-419e-92cd-46a8500def8d-ovsdbserver-sb\") pod \"6feea529-c31a-419e-92cd-46a8500def8d\" (UID: \"6feea529-c31a-419e-92cd-46a8500def8d\") " Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.965086 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6feea529-c31a-419e-92cd-46a8500def8d-dns-svc\") pod \"6feea529-c31a-419e-92cd-46a8500def8d\" (UID: \"6feea529-c31a-419e-92cd-46a8500def8d\") " Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.965125 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6feea529-c31a-419e-92cd-46a8500def8d-config\") pod \"6feea529-c31a-419e-92cd-46a8500def8d\" (UID: \"6feea529-c31a-419e-92cd-46a8500def8d\") " Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.965156 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6feea529-c31a-419e-92cd-46a8500def8d-dns-swift-storage-0\") pod \"6feea529-c31a-419e-92cd-46a8500def8d\" (UID: \"6feea529-c31a-419e-92cd-46a8500def8d\") " Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.965597 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shdxl\" (UniqueName: \"kubernetes.io/projected/6feea529-c31a-419e-92cd-46a8500def8d-kube-api-access-shdxl\") pod \"6feea529-c31a-419e-92cd-46a8500def8d\" (UID: \"6feea529-c31a-419e-92cd-46a8500def8d\") " Feb 02 21:40:15 crc kubenswrapper[4789]: I0202 21:40:15.974298 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6feea529-c31a-419e-92cd-46a8500def8d-kube-api-access-shdxl" (OuterVolumeSpecName: "kube-api-access-shdxl") pod "6feea529-c31a-419e-92cd-46a8500def8d" (UID: "6feea529-c31a-419e-92cd-46a8500def8d"). InnerVolumeSpecName "kube-api-access-shdxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:40:16 crc kubenswrapper[4789]: I0202 21:40:16.031370 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6feea529-c31a-419e-92cd-46a8500def8d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6feea529-c31a-419e-92cd-46a8500def8d" (UID: "6feea529-c31a-419e-92cd-46a8500def8d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:40:16 crc kubenswrapper[4789]: I0202 21:40:16.042183 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6feea529-c31a-419e-92cd-46a8500def8d-config" (OuterVolumeSpecName: "config") pod "6feea529-c31a-419e-92cd-46a8500def8d" (UID: "6feea529-c31a-419e-92cd-46a8500def8d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:40:16 crc kubenswrapper[4789]: I0202 21:40:16.043874 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6feea529-c31a-419e-92cd-46a8500def8d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6feea529-c31a-419e-92cd-46a8500def8d" (UID: "6feea529-c31a-419e-92cd-46a8500def8d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:40:16 crc kubenswrapper[4789]: I0202 21:40:16.045425 4789 generic.go:334] "Generic (PLEG): container finished" podID="6feea529-c31a-419e-92cd-46a8500def8d" containerID="334df413b266b7e23094c3bbc93f06a2583f26ce70a6c07b68919d209fdbde59" exitCode=0 Feb 02 21:40:16 crc kubenswrapper[4789]: I0202 21:40:16.045526 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-7vxpx" event={"ID":"6feea529-c31a-419e-92cd-46a8500def8d","Type":"ContainerDied","Data":"334df413b266b7e23094c3bbc93f06a2583f26ce70a6c07b68919d209fdbde59"} Feb 02 21:40:16 crc kubenswrapper[4789]: I0202 21:40:16.045558 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-7vxpx" event={"ID":"6feea529-c31a-419e-92cd-46a8500def8d","Type":"ContainerDied","Data":"34d84377d39237467b8cf31b7b1802425f5ec004d522e2e1a3300bc825405a64"} Feb 02 21:40:16 crc kubenswrapper[4789]: I0202 21:40:16.045579 4789 scope.go:117] "RemoveContainer" containerID="334df413b266b7e23094c3bbc93f06a2583f26ce70a6c07b68919d209fdbde59" Feb 02 21:40:16 crc kubenswrapper[4789]: I0202 21:40:16.045718 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-7vxpx" Feb 02 21:40:16 crc kubenswrapper[4789]: I0202 21:40:16.049240 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6feea529-c31a-419e-92cd-46a8500def8d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6feea529-c31a-419e-92cd-46a8500def8d" (UID: "6feea529-c31a-419e-92cd-46a8500def8d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:40:16 crc kubenswrapper[4789]: I0202 21:40:16.049786 4789 generic.go:334] "Generic (PLEG): container finished" podID="0e65e509-47bb-47f7-b129-74222d242dc8" containerID="6b1800b5f6de7d72f18adbb4dbe5b8f41c0f5c63326f6f70efaf329bc7e0debb" exitCode=0 Feb 02 21:40:16 crc kubenswrapper[4789]: I0202 21:40:16.049853 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c6f8769f9-9q4zq" event={"ID":"0e65e509-47bb-47f7-b129-74222d242dc8","Type":"ContainerDied","Data":"6b1800b5f6de7d72f18adbb4dbe5b8f41c0f5c63326f6f70efaf329bc7e0debb"} Feb 02 21:40:16 crc kubenswrapper[4789]: I0202 21:40:16.053206 4789 generic.go:334] "Generic (PLEG): container finished" podID="a92498f9-dccf-4af4-b92c-41f6e89be39f" containerID="c11d939ff2d5fb61221ce2e270fc2f6b3b1046d00d31fecd8f7b5af21a0e45fa" exitCode=0 Feb 02 21:40:16 crc kubenswrapper[4789]: I0202 21:40:16.054702 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a92498f9-dccf-4af4-b92c-41f6e89be39f","Type":"ContainerDied","Data":"c11d939ff2d5fb61221ce2e270fc2f6b3b1046d00d31fecd8f7b5af21a0e45fa"} Feb 02 21:40:16 crc kubenswrapper[4789]: I0202 21:40:16.066188 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6feea529-c31a-419e-92cd-46a8500def8d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6feea529-c31a-419e-92cd-46a8500def8d" (UID: "6feea529-c31a-419e-92cd-46a8500def8d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:40:16 crc kubenswrapper[4789]: I0202 21:40:16.068047 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6feea529-c31a-419e-92cd-46a8500def8d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:16 crc kubenswrapper[4789]: I0202 21:40:16.068070 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6feea529-c31a-419e-92cd-46a8500def8d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:16 crc kubenswrapper[4789]: I0202 21:40:16.068079 4789 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6feea529-c31a-419e-92cd-46a8500def8d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:16 crc kubenswrapper[4789]: I0202 21:40:16.068088 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6feea529-c31a-419e-92cd-46a8500def8d-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:16 crc kubenswrapper[4789]: I0202 21:40:16.068096 4789 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6feea529-c31a-419e-92cd-46a8500def8d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:16 crc kubenswrapper[4789]: I0202 21:40:16.068106 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shdxl\" (UniqueName: \"kubernetes.io/projected/6feea529-c31a-419e-92cd-46a8500def8d-kube-api-access-shdxl\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:16 crc kubenswrapper[4789]: I0202 21:40:16.088001 4789 scope.go:117] "RemoveContainer" containerID="f1ea63c8c5ff6da7af70a37c0e61ad67bc39f4d30460e668eaf6dcc015f338e7" Feb 02 21:40:16 crc kubenswrapper[4789]: I0202 21:40:16.120157 4789 scope.go:117] "RemoveContainer" 
containerID="334df413b266b7e23094c3bbc93f06a2583f26ce70a6c07b68919d209fdbde59" Feb 02 21:40:16 crc kubenswrapper[4789]: E0202 21:40:16.121141 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"334df413b266b7e23094c3bbc93f06a2583f26ce70a6c07b68919d209fdbde59\": container with ID starting with 334df413b266b7e23094c3bbc93f06a2583f26ce70a6c07b68919d209fdbde59 not found: ID does not exist" containerID="334df413b266b7e23094c3bbc93f06a2583f26ce70a6c07b68919d209fdbde59" Feb 02 21:40:16 crc kubenswrapper[4789]: I0202 21:40:16.121200 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"334df413b266b7e23094c3bbc93f06a2583f26ce70a6c07b68919d209fdbde59"} err="failed to get container status \"334df413b266b7e23094c3bbc93f06a2583f26ce70a6c07b68919d209fdbde59\": rpc error: code = NotFound desc = could not find container \"334df413b266b7e23094c3bbc93f06a2583f26ce70a6c07b68919d209fdbde59\": container with ID starting with 334df413b266b7e23094c3bbc93f06a2583f26ce70a6c07b68919d209fdbde59 not found: ID does not exist" Feb 02 21:40:16 crc kubenswrapper[4789]: I0202 21:40:16.121225 4789 scope.go:117] "RemoveContainer" containerID="f1ea63c8c5ff6da7af70a37c0e61ad67bc39f4d30460e668eaf6dcc015f338e7" Feb 02 21:40:16 crc kubenswrapper[4789]: E0202 21:40:16.121631 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1ea63c8c5ff6da7af70a37c0e61ad67bc39f4d30460e668eaf6dcc015f338e7\": container with ID starting with f1ea63c8c5ff6da7af70a37c0e61ad67bc39f4d30460e668eaf6dcc015f338e7 not found: ID does not exist" containerID="f1ea63c8c5ff6da7af70a37c0e61ad67bc39f4d30460e668eaf6dcc015f338e7" Feb 02 21:40:16 crc kubenswrapper[4789]: I0202 21:40:16.121648 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1ea63c8c5ff6da7af70a37c0e61ad67bc39f4d30460e668eaf6dcc015f338e7"} err="failed to get container status \"f1ea63c8c5ff6da7af70a37c0e61ad67bc39f4d30460e668eaf6dcc015f338e7\": rpc error: code = NotFound desc = could not find container \"f1ea63c8c5ff6da7af70a37c0e61ad67bc39f4d30460e668eaf6dcc015f338e7\": container with ID starting with f1ea63c8c5ff6da7af70a37c0e61ad67bc39f4d30460e668eaf6dcc015f338e7 not found: ID does not exist" Feb 02 21:40:16 crc kubenswrapper[4789]: I0202 21:40:16.379714 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7c4994f5f-462kb"] Feb 02 21:40:16 crc kubenswrapper[4789]: I0202 21:40:16.389732 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7vxpx"] Feb 02 21:40:16 crc kubenswrapper[4789]: W0202 21:40:16.392492 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78b23a1f_cc85_4767_b19c_6069adfc735a.slice/crio-737db95ddfbcb8d98a2987ef6db8aae192a0aad144cbba6515c03c59773b5e1c WatchSource:0}: Error finding container 737db95ddfbcb8d98a2987ef6db8aae192a0aad144cbba6515c03c59773b5e1c: Status 404 returned error can't find the container with id 737db95ddfbcb8d98a2987ef6db8aae192a0aad144cbba6515c03c59773b5e1c Feb 02 21:40:16 crc kubenswrapper[4789]: I0202 21:40:16.397206 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7vxpx"] Feb 02 21:40:16 crc kubenswrapper[4789]: I0202 21:40:16.442735 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="6feea529-c31a-419e-92cd-46a8500def8d" path="/var/lib/kubelet/pods/6feea529-c31a-419e-92cd-46a8500def8d/volumes" Feb 02 21:40:16 crc kubenswrapper[4789]: I0202 21:40:16.878185 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-76b9c6fd6-4jjdd" Feb 02 21:40:16 crc kubenswrapper[4789]: I0202 21:40:16.913881 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-59cf4774f6-v75lt" Feb 02 21:40:17 crc kubenswrapper[4789]: I0202 21:40:17.058267 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-76b9c6fd6-4jjdd" Feb 02 21:40:17 crc kubenswrapper[4789]: I0202 21:40:17.108678 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c4994f5f-462kb" event={"ID":"78b23a1f-cc85-4767-b19c-6069adfc735a","Type":"ContainerStarted","Data":"299b4734565096b1be6400a79e47dcc680e20c6351889626bc796a381f662a16"} Feb 02 21:40:17 crc kubenswrapper[4789]: I0202 21:40:17.108712 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c4994f5f-462kb" event={"ID":"78b23a1f-cc85-4767-b19c-6069adfc735a","Type":"ContainerStarted","Data":"553d373b31d254acbe2370697ade07f36e41177b6244fed11902fec65d96f129"} Feb 02 21:40:17 crc kubenswrapper[4789]: I0202 21:40:17.108721 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c4994f5f-462kb" event={"ID":"78b23a1f-cc85-4767-b19c-6069adfc735a","Type":"ContainerStarted","Data":"737db95ddfbcb8d98a2987ef6db8aae192a0aad144cbba6515c03c59773b5e1c"} Feb 02 21:40:17 crc kubenswrapper[4789]: I0202 21:40:17.108743 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7c4994f5f-462kb" Feb 02 21:40:17 crc kubenswrapper[4789]: I0202 21:40:17.175328 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7c4994f5f-462kb" podStartSLOduration=2.175308259 podStartE2EDuration="2.175308259s" podCreationTimestamp="2026-02-02 21:40:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:40:17.151106235 +0000 UTC m=+1237.446131264" watchObservedRunningTime="2026-02-02 21:40:17.175308259 +0000 UTC m=+1237.470333278" Feb 02 21:40:17 crc kubenswrapper[4789]: I0202 21:40:17.304636 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-68d9498c68-84jcz"] Feb 02 21:40:17 crc kubenswrapper[4789]: E0202 21:40:17.305030 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6feea529-c31a-419e-92cd-46a8500def8d" containerName="init" Feb 02 21:40:17 crc kubenswrapper[4789]: I0202 21:40:17.305040 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="6feea529-c31a-419e-92cd-46a8500def8d" containerName="init" Feb 02 21:40:17 crc kubenswrapper[4789]: E0202 21:40:17.305060 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6feea529-c31a-419e-92cd-46a8500def8d" containerName="dnsmasq-dns" Feb 02 21:40:17 crc kubenswrapper[4789]: I0202 21:40:17.305067 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="6feea529-c31a-419e-92cd-46a8500def8d" containerName="dnsmasq-dns" Feb 02 21:40:17 crc kubenswrapper[4789]: I0202 21:40:17.305246 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="6feea529-c31a-419e-92cd-46a8500def8d" containerName="dnsmasq-dns" Feb 02 21:40:17 crc kubenswrapper[4789]: I0202 21:40:17.306133 4789 util.go:30] "No sandbox for pod 
Feb 02 21:40:17 crc kubenswrapper[4789]: I0202 21:40:17.306133 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-68d9498c68-84jcz"
Feb 02 21:40:17 crc kubenswrapper[4789]: I0202 21:40:17.318539 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-68d9498c68-84jcz"]
Feb 02 21:40:17 crc kubenswrapper[4789]: I0202 21:40:17.409540 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5d8r\" (UniqueName: \"kubernetes.io/projected/349cede5-331c-4454-8c9c-fda8fe886f07-kube-api-access-c5d8r\") pod \"placement-68d9498c68-84jcz\" (UID: \"349cede5-331c-4454-8c9c-fda8fe886f07\") " pod="openstack/placement-68d9498c68-84jcz"
Feb 02 21:40:17 crc kubenswrapper[4789]: I0202 21:40:17.409618 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/349cede5-331c-4454-8c9c-fda8fe886f07-scripts\") pod \"placement-68d9498c68-84jcz\" (UID: \"349cede5-331c-4454-8c9c-fda8fe886f07\") " pod="openstack/placement-68d9498c68-84jcz"
Feb 02 21:40:17 crc kubenswrapper[4789]: I0202 21:40:17.409688 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/349cede5-331c-4454-8c9c-fda8fe886f07-config-data\") pod \"placement-68d9498c68-84jcz\" (UID: \"349cede5-331c-4454-8c9c-fda8fe886f07\") " pod="openstack/placement-68d9498c68-84jcz"
Feb 02 21:40:17 crc kubenswrapper[4789]: I0202 21:40:17.409703 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/349cede5-331c-4454-8c9c-fda8fe886f07-public-tls-certs\") pod \"placement-68d9498c68-84jcz\" (UID: \"349cede5-331c-4454-8c9c-fda8fe886f07\") " pod="openstack/placement-68d9498c68-84jcz"
Feb 02 21:40:17 crc kubenswrapper[4789]: I0202 21:40:17.409719 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/349cede5-331c-4454-8c9c-fda8fe886f07-logs\") pod \"placement-68d9498c68-84jcz\" (UID: \"349cede5-331c-4454-8c9c-fda8fe886f07\") " pod="openstack/placement-68d9498c68-84jcz"
Feb 02 21:40:17 crc kubenswrapper[4789]: I0202 21:40:17.409749 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/349cede5-331c-4454-8c9c-fda8fe886f07-internal-tls-certs\") pod \"placement-68d9498c68-84jcz\" (UID: \"349cede5-331c-4454-8c9c-fda8fe886f07\") " pod="openstack/placement-68d9498c68-84jcz"
Feb 02 21:40:17 crc kubenswrapper[4789]: I0202 21:40:17.409778 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/349cede5-331c-4454-8c9c-fda8fe886f07-combined-ca-bundle\") pod \"placement-68d9498c68-84jcz\" (UID: \"349cede5-331c-4454-8c9c-fda8fe886f07\") " pod="openstack/placement-68d9498c68-84jcz"
Feb 02 21:40:17 crc kubenswrapper[4789]: I0202 21:40:17.459636 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-7c6f8769f9-9q4zq" podUID="0e65e509-47bb-47f7-b129-74222d242dc8" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.154:9696/\": dial tcp 10.217.0.154:9696: connect: connection refused"
\"kube-api-access-c5d8r\" (UniqueName: \"kubernetes.io/projected/349cede5-331c-4454-8c9c-fda8fe886f07-kube-api-access-c5d8r\") pod \"placement-68d9498c68-84jcz\" (UID: \"349cede5-331c-4454-8c9c-fda8fe886f07\") " pod="openstack/placement-68d9498c68-84jcz" Feb 02 21:40:17 crc kubenswrapper[4789]: I0202 21:40:17.511296 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/349cede5-331c-4454-8c9c-fda8fe886f07-scripts\") pod \"placement-68d9498c68-84jcz\" (UID: \"349cede5-331c-4454-8c9c-fda8fe886f07\") " pod="openstack/placement-68d9498c68-84jcz" Feb 02 21:40:17 crc kubenswrapper[4789]: I0202 21:40:17.511408 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/349cede5-331c-4454-8c9c-fda8fe886f07-config-data\") pod \"placement-68d9498c68-84jcz\" (UID: \"349cede5-331c-4454-8c9c-fda8fe886f07\") " pod="openstack/placement-68d9498c68-84jcz" Feb 02 21:40:17 crc kubenswrapper[4789]: I0202 21:40:17.511436 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/349cede5-331c-4454-8c9c-fda8fe886f07-public-tls-certs\") pod \"placement-68d9498c68-84jcz\" (UID: \"349cede5-331c-4454-8c9c-fda8fe886f07\") " pod="openstack/placement-68d9498c68-84jcz" Feb 02 21:40:17 crc kubenswrapper[4789]: I0202 21:40:17.511457 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/349cede5-331c-4454-8c9c-fda8fe886f07-logs\") pod \"placement-68d9498c68-84jcz\" (UID: \"349cede5-331c-4454-8c9c-fda8fe886f07\") " pod="openstack/placement-68d9498c68-84jcz" Feb 02 21:40:17 crc kubenswrapper[4789]: I0202 21:40:17.511501 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/349cede5-331c-4454-8c9c-fda8fe886f07-internal-tls-certs\") pod \"placement-68d9498c68-84jcz\" (UID: \"349cede5-331c-4454-8c9c-fda8fe886f07\") " pod="openstack/placement-68d9498c68-84jcz" Feb 02 21:40:17 crc kubenswrapper[4789]: I0202 21:40:17.511556 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/349cede5-331c-4454-8c9c-fda8fe886f07-combined-ca-bundle\") pod \"placement-68d9498c68-84jcz\" (UID: \"349cede5-331c-4454-8c9c-fda8fe886f07\") " pod="openstack/placement-68d9498c68-84jcz" Feb 02 21:40:17 crc kubenswrapper[4789]: I0202 21:40:17.512181 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/349cede5-331c-4454-8c9c-fda8fe886f07-logs\") pod \"placement-68d9498c68-84jcz\" (UID: \"349cede5-331c-4454-8c9c-fda8fe886f07\") " pod="openstack/placement-68d9498c68-84jcz" Feb 02 21:40:17 crc kubenswrapper[4789]: I0202 21:40:17.515943 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/349cede5-331c-4454-8c9c-fda8fe886f07-scripts\") pod \"placement-68d9498c68-84jcz\" (UID: \"349cede5-331c-4454-8c9c-fda8fe886f07\") " pod="openstack/placement-68d9498c68-84jcz" Feb 02 21:40:17 crc kubenswrapper[4789]: I0202 21:40:17.517493 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/349cede5-331c-4454-8c9c-fda8fe886f07-public-tls-certs\") pod \"placement-68d9498c68-84jcz\" (UID: 
\"349cede5-331c-4454-8c9c-fda8fe886f07\") " pod="openstack/placement-68d9498c68-84jcz" Feb 02 21:40:17 crc kubenswrapper[4789]: I0202 21:40:17.521363 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/349cede5-331c-4454-8c9c-fda8fe886f07-combined-ca-bundle\") pod \"placement-68d9498c68-84jcz\" (UID: \"349cede5-331c-4454-8c9c-fda8fe886f07\") " pod="openstack/placement-68d9498c68-84jcz" Feb 02 21:40:17 crc kubenswrapper[4789]: I0202 21:40:17.521792 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/349cede5-331c-4454-8c9c-fda8fe886f07-config-data\") pod \"placement-68d9498c68-84jcz\" (UID: \"349cede5-331c-4454-8c9c-fda8fe886f07\") " pod="openstack/placement-68d9498c68-84jcz" Feb 02 21:40:17 crc kubenswrapper[4789]: I0202 21:40:17.530706 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/349cede5-331c-4454-8c9c-fda8fe886f07-internal-tls-certs\") pod \"placement-68d9498c68-84jcz\" (UID: \"349cede5-331c-4454-8c9c-fda8fe886f07\") " pod="openstack/placement-68d9498c68-84jcz" Feb 02 21:40:17 crc kubenswrapper[4789]: I0202 21:40:17.533262 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5d8r\" (UniqueName: \"kubernetes.io/projected/349cede5-331c-4454-8c9c-fda8fe886f07-kube-api-access-c5d8r\") pod \"placement-68d9498c68-84jcz\" (UID: \"349cede5-331c-4454-8c9c-fda8fe886f07\") " pod="openstack/placement-68d9498c68-84jcz" Feb 02 21:40:17 crc kubenswrapper[4789]: I0202 21:40:17.632262 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-68d9498c68-84jcz" Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.135947 4789 generic.go:334] "Generic (PLEG): container finished" podID="a92498f9-dccf-4af4-b92c-41f6e89be39f" containerID="7f749f84676267d30d6db302074fd1dfd49ec9175dc0d6b18b63c496efa8f4f9" exitCode=0 Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.136031 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a92498f9-dccf-4af4-b92c-41f6e89be39f","Type":"ContainerDied","Data":"7f749f84676267d30d6db302074fd1dfd49ec9175dc0d6b18b63c496efa8f4f9"} Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.156980 4789 generic.go:334] "Generic (PLEG): container finished" podID="0e65e509-47bb-47f7-b129-74222d242dc8" containerID="2158305fa050236e159ac1a89f405cae5ff72d7510d4a2c030187f59ccf546ba" exitCode=0 Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.157349 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c6f8769f9-9q4zq" event={"ID":"0e65e509-47bb-47f7-b129-74222d242dc8","Type":"ContainerDied","Data":"2158305fa050236e159ac1a89f405cae5ff72d7510d4a2c030187f59ccf546ba"} Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.180154 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-68d9498c68-84jcz"] Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.415189 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.554121 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a92498f9-dccf-4af4-b92c-41f6e89be39f-combined-ca-bundle\") pod \"a92498f9-dccf-4af4-b92c-41f6e89be39f\" (UID: \"a92498f9-dccf-4af4-b92c-41f6e89be39f\") " Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.554161 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a92498f9-dccf-4af4-b92c-41f6e89be39f-scripts\") pod \"a92498f9-dccf-4af4-b92c-41f6e89be39f\" (UID: \"a92498f9-dccf-4af4-b92c-41f6e89be39f\") " Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.554311 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a92498f9-dccf-4af4-b92c-41f6e89be39f-etc-machine-id\") pod \"a92498f9-dccf-4af4-b92c-41f6e89be39f\" (UID: \"a92498f9-dccf-4af4-b92c-41f6e89be39f\") " Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.554368 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcr8p\" (UniqueName: \"kubernetes.io/projected/a92498f9-dccf-4af4-b92c-41f6e89be39f-kube-api-access-gcr8p\") pod \"a92498f9-dccf-4af4-b92c-41f6e89be39f\" (UID: \"a92498f9-dccf-4af4-b92c-41f6e89be39f\") " Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.554439 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a92498f9-dccf-4af4-b92c-41f6e89be39f-config-data-custom\") pod \"a92498f9-dccf-4af4-b92c-41f6e89be39f\" (UID: \"a92498f9-dccf-4af4-b92c-41f6e89be39f\") " Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.554456 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a92498f9-dccf-4af4-b92c-41f6e89be39f-config-data\") pod \"a92498f9-dccf-4af4-b92c-41f6e89be39f\" (UID: \"a92498f9-dccf-4af4-b92c-41f6e89be39f\") " Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.558110 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a92498f9-dccf-4af4-b92c-41f6e89be39f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a92498f9-dccf-4af4-b92c-41f6e89be39f" (UID: "a92498f9-dccf-4af4-b92c-41f6e89be39f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.568072 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a92498f9-dccf-4af4-b92c-41f6e89be39f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a92498f9-dccf-4af4-b92c-41f6e89be39f" (UID: "a92498f9-dccf-4af4-b92c-41f6e89be39f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.577736 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a92498f9-dccf-4af4-b92c-41f6e89be39f-kube-api-access-gcr8p" (OuterVolumeSpecName: "kube-api-access-gcr8p") pod "a92498f9-dccf-4af4-b92c-41f6e89be39f" (UID: "a92498f9-dccf-4af4-b92c-41f6e89be39f"). InnerVolumeSpecName "kube-api-access-gcr8p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.582816 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a92498f9-dccf-4af4-b92c-41f6e89be39f-scripts" (OuterVolumeSpecName: "scripts") pod "a92498f9-dccf-4af4-b92c-41f6e89be39f" (UID: "a92498f9-dccf-4af4-b92c-41f6e89be39f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.627832 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7c6f8769f9-9q4zq" Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.658696 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a92498f9-dccf-4af4-b92c-41f6e89be39f-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.658972 4789 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a92498f9-dccf-4af4-b92c-41f6e89be39f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.659047 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcr8p\" (UniqueName: \"kubernetes.io/projected/a92498f9-dccf-4af4-b92c-41f6e89be39f-kube-api-access-gcr8p\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.659101 4789 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a92498f9-dccf-4af4-b92c-41f6e89be39f-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.682763 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a92498f9-dccf-4af4-b92c-41f6e89be39f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a92498f9-dccf-4af4-b92c-41f6e89be39f" (UID: "a92498f9-dccf-4af4-b92c-41f6e89be39f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.762169 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e65e509-47bb-47f7-b129-74222d242dc8-combined-ca-bundle\") pod \"0e65e509-47bb-47f7-b129-74222d242dc8\" (UID: \"0e65e509-47bb-47f7-b129-74222d242dc8\") " Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.762533 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pwmq\" (UniqueName: \"kubernetes.io/projected/0e65e509-47bb-47f7-b129-74222d242dc8-kube-api-access-8pwmq\") pod \"0e65e509-47bb-47f7-b129-74222d242dc8\" (UID: \"0e65e509-47bb-47f7-b129-74222d242dc8\") " Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.763124 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0e65e509-47bb-47f7-b129-74222d242dc8-config\") pod \"0e65e509-47bb-47f7-b129-74222d242dc8\" (UID: \"0e65e509-47bb-47f7-b129-74222d242dc8\") " Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.763151 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e65e509-47bb-47f7-b129-74222d242dc8-internal-tls-certs\") pod \"0e65e509-47bb-47f7-b129-74222d242dc8\" (UID: \"0e65e509-47bb-47f7-b129-74222d242dc8\") " Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.763197 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0e65e509-47bb-47f7-b129-74222d242dc8-httpd-config\") pod \"0e65e509-47bb-47f7-b129-74222d242dc8\" (UID: \"0e65e509-47bb-47f7-b129-74222d242dc8\") " Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.763236 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e65e509-47bb-47f7-b129-74222d242dc8-public-tls-certs\") pod \"0e65e509-47bb-47f7-b129-74222d242dc8\" (UID: \"0e65e509-47bb-47f7-b129-74222d242dc8\") " Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.763257 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e65e509-47bb-47f7-b129-74222d242dc8-ovndb-tls-certs\") pod \"0e65e509-47bb-47f7-b129-74222d242dc8\" (UID: \"0e65e509-47bb-47f7-b129-74222d242dc8\") " Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.763751 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a92498f9-dccf-4af4-b92c-41f6e89be39f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.774683 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e65e509-47bb-47f7-b129-74222d242dc8-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "0e65e509-47bb-47f7-b129-74222d242dc8" (UID: "0e65e509-47bb-47f7-b129-74222d242dc8"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.782360 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a92498f9-dccf-4af4-b92c-41f6e89be39f-config-data" (OuterVolumeSpecName: "config-data") pod "a92498f9-dccf-4af4-b92c-41f6e89be39f" (UID: "a92498f9-dccf-4af4-b92c-41f6e89be39f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.783849 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e65e509-47bb-47f7-b129-74222d242dc8-kube-api-access-8pwmq" (OuterVolumeSpecName: "kube-api-access-8pwmq") pod "0e65e509-47bb-47f7-b129-74222d242dc8" (UID: "0e65e509-47bb-47f7-b129-74222d242dc8"). InnerVolumeSpecName "kube-api-access-8pwmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.831767 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e65e509-47bb-47f7-b129-74222d242dc8-config" (OuterVolumeSpecName: "config") pod "0e65e509-47bb-47f7-b129-74222d242dc8" (UID: "0e65e509-47bb-47f7-b129-74222d242dc8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.840793 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e65e509-47bb-47f7-b129-74222d242dc8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e65e509-47bb-47f7-b129-74222d242dc8" (UID: "0e65e509-47bb-47f7-b129-74222d242dc8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.866348 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a92498f9-dccf-4af4-b92c-41f6e89be39f-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.866550 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e65e509-47bb-47f7-b129-74222d242dc8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.866629 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pwmq\" (UniqueName: \"kubernetes.io/projected/0e65e509-47bb-47f7-b129-74222d242dc8-kube-api-access-8pwmq\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.866690 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0e65e509-47bb-47f7-b129-74222d242dc8-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.866772 4789 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0e65e509-47bb-47f7-b129-74222d242dc8-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.900063 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e65e509-47bb-47f7-b129-74222d242dc8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0e65e509-47bb-47f7-b129-74222d242dc8" (UID: "0e65e509-47bb-47f7-b129-74222d242dc8"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.903730 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e65e509-47bb-47f7-b129-74222d242dc8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0e65e509-47bb-47f7-b129-74222d242dc8" (UID: "0e65e509-47bb-47f7-b129-74222d242dc8"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.930836 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e65e509-47bb-47f7-b129-74222d242dc8-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "0e65e509-47bb-47f7-b129-74222d242dc8" (UID: "0e65e509-47bb-47f7-b129-74222d242dc8"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.967875 4789 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e65e509-47bb-47f7-b129-74222d242dc8-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.967908 4789 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e65e509-47bb-47f7-b129-74222d242dc8-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:18 crc kubenswrapper[4789]: I0202 21:40:18.967917 4789 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e65e509-47bb-47f7-b129-74222d242dc8-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.170144 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68d9498c68-84jcz" event={"ID":"349cede5-331c-4454-8c9c-fda8fe886f07","Type":"ContainerStarted","Data":"40a59db16d790bc9ade9d424000123015ece03fbc62bfe3a010f70a44b900736"} Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.170186 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68d9498c68-84jcz" event={"ID":"349cede5-331c-4454-8c9c-fda8fe886f07","Type":"ContainerStarted","Data":"7594027e1aa66be1d86466bb05745dd33d3b9a0771c64f3b195b5d3c4ef5fbca"} Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.170198 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68d9498c68-84jcz" event={"ID":"349cede5-331c-4454-8c9c-fda8fe886f07","Type":"ContainerStarted","Data":"332ec0c69f4ecd8301ba4e0268ea9cf965fa65934dfdb14dfd73d8fdbe8fbed3"} Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.171021 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-68d9498c68-84jcz" Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.171040 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-68d9498c68-84jcz" Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.179303 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a92498f9-dccf-4af4-b92c-41f6e89be39f","Type":"ContainerDied","Data":"de06af86bfa1f2e0a47fbe28d2a53c372610635437d020f2870f1effba7d2eb5"} Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.179323 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.180045 4789 scope.go:117] "RemoveContainer" containerID="c11d939ff2d5fb61221ce2e270fc2f6b3b1046d00d31fecd8f7b5af21a0e45fa" Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.188963 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c6f8769f9-9q4zq" event={"ID":"0e65e509-47bb-47f7-b129-74222d242dc8","Type":"ContainerDied","Data":"8c2cc135c482a5b6aea797e2d0eea8ed8179c9289bfb8d0933a032ac52b26e1f"} Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.189064 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7c6f8769f9-9q4zq" Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.259977 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-68d9498c68-84jcz" podStartSLOduration=2.259933589 podStartE2EDuration="2.259933589s" podCreationTimestamp="2026-02-02 21:40:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:40:19.20201163 +0000 UTC m=+1239.497036649" watchObservedRunningTime="2026-02-02 21:40:19.259933589 +0000 UTC m=+1239.554958608" Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.260052 4789 scope.go:117] "RemoveContainer" containerID="7f749f84676267d30d6db302074fd1dfd49ec9175dc0d6b18b63c496efa8f4f9" Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.268936 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.283400 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.287799 4789 scope.go:117] "RemoveContainer" containerID="6b1800b5f6de7d72f18adbb4dbe5b8f41c0f5c63326f6f70efaf329bc7e0debb" Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.294189 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7c6f8769f9-9q4zq"] Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.315090 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7c6f8769f9-9q4zq"] Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.319093 4789 scope.go:117] "RemoveContainer" containerID="2158305fa050236e159ac1a89f405cae5ff72d7510d4a2c030187f59ccf546ba" Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.322337 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 21:40:19 crc kubenswrapper[4789]: E0202 21:40:19.322690 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a92498f9-dccf-4af4-b92c-41f6e89be39f" containerName="cinder-scheduler" Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.322708 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="a92498f9-dccf-4af4-b92c-41f6e89be39f" containerName="cinder-scheduler" Feb 02 21:40:19 crc kubenswrapper[4789]: E0202 21:40:19.322722 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a92498f9-dccf-4af4-b92c-41f6e89be39f" containerName="probe" Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.322728 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="a92498f9-dccf-4af4-b92c-41f6e89be39f" containerName="probe" Feb 02 21:40:19 crc kubenswrapper[4789]: E0202 21:40:19.322738 4789 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0e65e509-47bb-47f7-b129-74222d242dc8" containerName="neutron-httpd" Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.322744 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e65e509-47bb-47f7-b129-74222d242dc8" containerName="neutron-httpd" Feb 02 21:40:19 crc kubenswrapper[4789]: E0202 21:40:19.322807 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e65e509-47bb-47f7-b129-74222d242dc8" containerName="neutron-api" Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.322815 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e65e509-47bb-47f7-b129-74222d242dc8" containerName="neutron-api" Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.322973 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e65e509-47bb-47f7-b129-74222d242dc8" containerName="neutron-api" Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.322987 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="a92498f9-dccf-4af4-b92c-41f6e89be39f" containerName="probe" Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.323004 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="a92498f9-dccf-4af4-b92c-41f6e89be39f" containerName="cinder-scheduler" Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.323016 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e65e509-47bb-47f7-b129-74222d242dc8" containerName="neutron-httpd" Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.343323 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.345770 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.348096 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.481767 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57c9f301-615a-4182-b17e-3ae250e8335c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"57c9f301-615a-4182-b17e-3ae250e8335c\") " pod="openstack/cinder-scheduler-0" Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.481966 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57c9f301-615a-4182-b17e-3ae250e8335c-scripts\") pod \"cinder-scheduler-0\" (UID: \"57c9f301-615a-4182-b17e-3ae250e8335c\") " pod="openstack/cinder-scheduler-0" Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.482088 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/57c9f301-615a-4182-b17e-3ae250e8335c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"57c9f301-615a-4182-b17e-3ae250e8335c\") " pod="openstack/cinder-scheduler-0" Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.482122 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57c9f301-615a-4182-b17e-3ae250e8335c-config-data\") pod \"cinder-scheduler-0\" (UID: \"57c9f301-615a-4182-b17e-3ae250e8335c\") " pod="openstack/cinder-scheduler-0" Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.482149 4789 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cqb5\" (UniqueName: \"kubernetes.io/projected/57c9f301-615a-4182-b17e-3ae250e8335c-kube-api-access-7cqb5\") pod \"cinder-scheduler-0\" (UID: \"57c9f301-615a-4182-b17e-3ae250e8335c\") " pod="openstack/cinder-scheduler-0" Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.482189 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57c9f301-615a-4182-b17e-3ae250e8335c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"57c9f301-615a-4182-b17e-3ae250e8335c\") " pod="openstack/cinder-scheduler-0" Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.584169 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57c9f301-615a-4182-b17e-3ae250e8335c-scripts\") pod \"cinder-scheduler-0\" (UID: \"57c9f301-615a-4182-b17e-3ae250e8335c\") " pod="openstack/cinder-scheduler-0" Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.584410 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/57c9f301-615a-4182-b17e-3ae250e8335c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"57c9f301-615a-4182-b17e-3ae250e8335c\") " pod="openstack/cinder-scheduler-0" Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.584479 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/57c9f301-615a-4182-b17e-3ae250e8335c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"57c9f301-615a-4182-b17e-3ae250e8335c\") " pod="openstack/cinder-scheduler-0" Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.584508 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57c9f301-615a-4182-b17e-3ae250e8335c-config-data\") pod \"cinder-scheduler-0\" (UID: \"57c9f301-615a-4182-b17e-3ae250e8335c\") " pod="openstack/cinder-scheduler-0" Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.584760 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cqb5\" (UniqueName: \"kubernetes.io/projected/57c9f301-615a-4182-b17e-3ae250e8335c-kube-api-access-7cqb5\") pod \"cinder-scheduler-0\" (UID: \"57c9f301-615a-4182-b17e-3ae250e8335c\") " pod="openstack/cinder-scheduler-0" Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.584949 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57c9f301-615a-4182-b17e-3ae250e8335c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"57c9f301-615a-4182-b17e-3ae250e8335c\") " pod="openstack/cinder-scheduler-0" Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.585444 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57c9f301-615a-4182-b17e-3ae250e8335c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"57c9f301-615a-4182-b17e-3ae250e8335c\") " pod="openstack/cinder-scheduler-0" Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.589060 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57c9f301-615a-4182-b17e-3ae250e8335c-config-data\") pod 
\"cinder-scheduler-0\" (UID: \"57c9f301-615a-4182-b17e-3ae250e8335c\") " pod="openstack/cinder-scheduler-0" Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.589927 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57c9f301-615a-4182-b17e-3ae250e8335c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"57c9f301-615a-4182-b17e-3ae250e8335c\") " pod="openstack/cinder-scheduler-0" Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.595757 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57c9f301-615a-4182-b17e-3ae250e8335c-scripts\") pod \"cinder-scheduler-0\" (UID: \"57c9f301-615a-4182-b17e-3ae250e8335c\") " pod="openstack/cinder-scheduler-0" Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.600273 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57c9f301-615a-4182-b17e-3ae250e8335c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"57c9f301-615a-4182-b17e-3ae250e8335c\") " pod="openstack/cinder-scheduler-0" Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.611005 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cqb5\" (UniqueName: \"kubernetes.io/projected/57c9f301-615a-4182-b17e-3ae250e8335c-kube-api-access-7cqb5\") pod \"cinder-scheduler-0\" (UID: \"57c9f301-615a-4182-b17e-3ae250e8335c\") " pod="openstack/cinder-scheduler-0" Feb 02 21:40:19 crc kubenswrapper[4789]: I0202 21:40:19.665623 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 21:40:20 crc kubenswrapper[4789]: I0202 21:40:20.193393 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 21:40:20 crc kubenswrapper[4789]: I0202 21:40:20.200583 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"57c9f301-615a-4182-b17e-3ae250e8335c","Type":"ContainerStarted","Data":"157fe788c3f15e076361528ba462a9f7d3a289ad3d8310817e32a82af5c86217"} Feb 02 21:40:20 crc kubenswrapper[4789]: I0202 21:40:20.454321 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e65e509-47bb-47f7-b129-74222d242dc8" path="/var/lib/kubelet/pods/0e65e509-47bb-47f7-b129-74222d242dc8/volumes" Feb 02 21:40:20 crc kubenswrapper[4789]: I0202 21:40:20.454891 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a92498f9-dccf-4af4-b92c-41f6e89be39f" path="/var/lib/kubelet/pods/a92498f9-dccf-4af4-b92c-41f6e89be39f/volumes" Feb 02 21:40:21 crc kubenswrapper[4789]: I0202 21:40:21.233251 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"57c9f301-615a-4182-b17e-3ae250e8335c","Type":"ContainerStarted","Data":"fbe1157b2a6d65b0c7188f948585dfc0be3a3d76f5c3b57620ea3d6091a4927c"} Feb 02 21:40:21 crc kubenswrapper[4789]: I0202 21:40:21.931290 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-595cf58668-hfkcq" Feb 02 21:40:22 crc kubenswrapper[4789]: I0202 21:40:22.240897 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"57c9f301-615a-4182-b17e-3ae250e8335c","Type":"ContainerStarted","Data":"ecc06c5902aa50d55c9a1d5a9d91397ab8aa463f6ac87ac09a03b387026f2890"} Feb 02 21:40:22 crc kubenswrapper[4789]: I0202 21:40:22.262949 4789 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.262931731 podStartE2EDuration="3.262931731s" podCreationTimestamp="2026-02-02 21:40:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:40:22.256836588 +0000 UTC m=+1242.551861607" watchObservedRunningTime="2026-02-02 21:40:22.262931731 +0000 UTC m=+1242.557956750" Feb 02 21:40:22 crc kubenswrapper[4789]: I0202 21:40:22.479918 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 02 21:40:22 crc kubenswrapper[4789]: I0202 21:40:22.841889 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 21:40:22 crc kubenswrapper[4789]: I0202 21:40:22.841942 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 21:40:22 crc kubenswrapper[4789]: I0202 21:40:22.955581 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6b8b9b54f6-jfnqs" Feb 02 21:40:23 crc kubenswrapper[4789]: I0202 21:40:23.111155 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6b8b9b54f6-jfnqs" Feb 02 21:40:23 crc kubenswrapper[4789]: I0202 21:40:23.169362 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-76b9c6fd6-4jjdd"] Feb 02 21:40:23 crc kubenswrapper[4789]: I0202 21:40:23.169609 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-76b9c6fd6-4jjdd" podUID="f72393b9-cc1a-4feb-9089-259fd674fb19" containerName="barbican-api-log" containerID="cri-o://09e3ebecf8199c9c2015b9af6285917464358df228585d459b775ab95c46f735" gracePeriod=30 Feb 02 21:40:23 crc kubenswrapper[4789]: I0202 21:40:23.169702 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-76b9c6fd6-4jjdd" podUID="f72393b9-cc1a-4feb-9089-259fd674fb19" containerName="barbican-api" containerID="cri-o://f33d6cab0e4515f9fc25b935375bb01cac91eedda8fde6dad6fe745f2af17953" gracePeriod=30 Feb 02 21:40:24 crc kubenswrapper[4789]: I0202 21:40:24.282814 4789 generic.go:334] "Generic (PLEG): container finished" podID="f72393b9-cc1a-4feb-9089-259fd674fb19" containerID="09e3ebecf8199c9c2015b9af6285917464358df228585d459b775ab95c46f735" exitCode=143 Feb 02 21:40:24 crc kubenswrapper[4789]: I0202 21:40:24.282901 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76b9c6fd6-4jjdd" event={"ID":"f72393b9-cc1a-4feb-9089-259fd674fb19","Type":"ContainerDied","Data":"09e3ebecf8199c9c2015b9af6285917464358df228585d459b775ab95c46f735"} Feb 02 21:40:24 crc kubenswrapper[4789]: I0202 21:40:24.666531 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 02 21:40:24 crc kubenswrapper[4789]: I0202 21:40:24.705937 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 
02 21:40:24 crc kubenswrapper[4789]: I0202 21:40:24.707623 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 02 21:40:24 crc kubenswrapper[4789]: I0202 21:40:24.714600 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 02 21:40:24 crc kubenswrapper[4789]: I0202 21:40:24.714694 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 02 21:40:24 crc kubenswrapper[4789]: I0202 21:40:24.714822 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-f9rrw" Feb 02 21:40:24 crc kubenswrapper[4789]: I0202 21:40:24.722751 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 02 21:40:24 crc kubenswrapper[4789]: I0202 21:40:24.886122 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mchs\" (UniqueName: \"kubernetes.io/projected/61697303-0c26-461f-b8c3-f9716cc0a308-kube-api-access-9mchs\") pod \"openstackclient\" (UID: \"61697303-0c26-461f-b8c3-f9716cc0a308\") " pod="openstack/openstackclient" Feb 02 21:40:24 crc kubenswrapper[4789]: I0202 21:40:24.886619 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/61697303-0c26-461f-b8c3-f9716cc0a308-openstack-config-secret\") pod \"openstackclient\" (UID: \"61697303-0c26-461f-b8c3-f9716cc0a308\") " pod="openstack/openstackclient" Feb 02 21:40:24 crc kubenswrapper[4789]: I0202 21:40:24.886700 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/61697303-0c26-461f-b8c3-f9716cc0a308-openstack-config\") pod \"openstackclient\" (UID: \"61697303-0c26-461f-b8c3-f9716cc0a308\") " pod="openstack/openstackclient" Feb 02 21:40:24 crc kubenswrapper[4789]: I0202 21:40:24.886719 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61697303-0c26-461f-b8c3-f9716cc0a308-combined-ca-bundle\") pod \"openstackclient\" (UID: \"61697303-0c26-461f-b8c3-f9716cc0a308\") " pod="openstack/openstackclient" Feb 02 21:40:24 crc kubenswrapper[4789]: I0202 21:40:24.936267 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 02 21:40:24 crc kubenswrapper[4789]: E0202 21:40:24.936918 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle kube-api-access-9mchs openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/openstackclient" podUID="61697303-0c26-461f-b8c3-f9716cc0a308" Feb 02 21:40:24 crc kubenswrapper[4789]: I0202 21:40:24.947197 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 02 21:40:24 crc kubenswrapper[4789]: I0202 21:40:24.988339 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/61697303-0c26-461f-b8c3-f9716cc0a308-openstack-config\") pod \"openstackclient\" (UID: \"61697303-0c26-461f-b8c3-f9716cc0a308\") " pod="openstack/openstackclient" Feb 02 21:40:24 crc kubenswrapper[4789]: I0202 21:40:24.988382 4789 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61697303-0c26-461f-b8c3-f9716cc0a308-combined-ca-bundle\") pod \"openstackclient\" (UID: \"61697303-0c26-461f-b8c3-f9716cc0a308\") " pod="openstack/openstackclient" Feb 02 21:40:24 crc kubenswrapper[4789]: I0202 21:40:24.988438 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mchs\" (UniqueName: \"kubernetes.io/projected/61697303-0c26-461f-b8c3-f9716cc0a308-kube-api-access-9mchs\") pod \"openstackclient\" (UID: \"61697303-0c26-461f-b8c3-f9716cc0a308\") " pod="openstack/openstackclient" Feb 02 21:40:24 crc kubenswrapper[4789]: I0202 21:40:24.988520 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/61697303-0c26-461f-b8c3-f9716cc0a308-openstack-config-secret\") pod \"openstackclient\" (UID: \"61697303-0c26-461f-b8c3-f9716cc0a308\") " pod="openstack/openstackclient" Feb 02 21:40:24 crc kubenswrapper[4789]: I0202 21:40:24.990527 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/61697303-0c26-461f-b8c3-f9716cc0a308-openstack-config\") pod \"openstackclient\" (UID: \"61697303-0c26-461f-b8c3-f9716cc0a308\") " pod="openstack/openstackclient" Feb 02 21:40:24 crc kubenswrapper[4789]: E0202 21:40:24.991329 4789 projected.go:194] Error preparing data for projected volume kube-api-access-9mchs for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: User "system:node:crc" cannot create resource "serviceaccounts/token" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Feb 02 21:40:24 crc kubenswrapper[4789]: E0202 21:40:24.991389 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/61697303-0c26-461f-b8c3-f9716cc0a308-kube-api-access-9mchs podName:61697303-0c26-461f-b8c3-f9716cc0a308 nodeName:}" failed. No retries permitted until 2026-02-02 21:40:25.491371954 +0000 UTC m=+1245.786396973 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9mchs" (UniqueName: "kubernetes.io/projected/61697303-0c26-461f-b8c3-f9716cc0a308-kube-api-access-9mchs") pod "openstackclient" (UID: "61697303-0c26-461f-b8c3-f9716cc0a308") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: User "system:node:crc" cannot create resource "serviceaccounts/token" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Feb 02 21:40:24 crc kubenswrapper[4789]: I0202 21:40:24.994658 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/61697303-0c26-461f-b8c3-f9716cc0a308-openstack-config-secret\") pod \"openstackclient\" (UID: \"61697303-0c26-461f-b8c3-f9716cc0a308\") " pod="openstack/openstackclient" Feb 02 21:40:25 crc kubenswrapper[4789]: I0202 21:40:25.000248 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61697303-0c26-461f-b8c3-f9716cc0a308-combined-ca-bundle\") pod \"openstackclient\" (UID: \"61697303-0c26-461f-b8c3-f9716cc0a308\") " pod="openstack/openstackclient" Feb 02 21:40:25 crc kubenswrapper[4789]: I0202 21:40:25.012477 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 02 21:40:25 crc kubenswrapper[4789]: I0202 21:40:25.014417 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 02 21:40:25 crc kubenswrapper[4789]: I0202 21:40:25.024219 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 02 21:40:25 crc kubenswrapper[4789]: I0202 21:40:25.191500 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whdmt\" (UniqueName: \"kubernetes.io/projected/2e82084e-a68b-4e41-9d23-8888ab97e53e-kube-api-access-whdmt\") pod \"openstackclient\" (UID: \"2e82084e-a68b-4e41-9d23-8888ab97e53e\") " pod="openstack/openstackclient" Feb 02 21:40:25 crc kubenswrapper[4789]: I0202 21:40:25.191573 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2e82084e-a68b-4e41-9d23-8888ab97e53e-openstack-config-secret\") pod \"openstackclient\" (UID: \"2e82084e-a68b-4e41-9d23-8888ab97e53e\") " pod="openstack/openstackclient" Feb 02 21:40:25 crc kubenswrapper[4789]: I0202 21:40:25.191708 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2e82084e-a68b-4e41-9d23-8888ab97e53e-openstack-config\") pod \"openstackclient\" (UID: \"2e82084e-a68b-4e41-9d23-8888ab97e53e\") " pod="openstack/openstackclient" Feb 02 21:40:25 crc kubenswrapper[4789]: I0202 21:40:25.191739 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e82084e-a68b-4e41-9d23-8888ab97e53e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2e82084e-a68b-4e41-9d23-8888ab97e53e\") " pod="openstack/openstackclient" Feb 02 21:40:25 crc kubenswrapper[4789]: I0202 21:40:25.290409 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 02 21:40:25 crc kubenswrapper[4789]: I0202 21:40:25.292873 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2e82084e-a68b-4e41-9d23-8888ab97e53e-openstack-config-secret\") pod \"openstackclient\" (UID: \"2e82084e-a68b-4e41-9d23-8888ab97e53e\") " pod="openstack/openstackclient" Feb 02 21:40:25 crc kubenswrapper[4789]: I0202 21:40:25.292956 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2e82084e-a68b-4e41-9d23-8888ab97e53e-openstack-config\") pod \"openstackclient\" (UID: \"2e82084e-a68b-4e41-9d23-8888ab97e53e\") " pod="openstack/openstackclient" Feb 02 21:40:25 crc kubenswrapper[4789]: I0202 21:40:25.292987 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e82084e-a68b-4e41-9d23-8888ab97e53e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2e82084e-a68b-4e41-9d23-8888ab97e53e\") " pod="openstack/openstackclient" Feb 02 21:40:25 crc kubenswrapper[4789]: I0202 21:40:25.293062 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whdmt\" (UniqueName: \"kubernetes.io/projected/2e82084e-a68b-4e41-9d23-8888ab97e53e-kube-api-access-whdmt\") pod \"openstackclient\" (UID: \"2e82084e-a68b-4e41-9d23-8888ab97e53e\") " pod="openstack/openstackclient" Feb 02 21:40:25 crc kubenswrapper[4789]: I0202 21:40:25.293854 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2e82084e-a68b-4e41-9d23-8888ab97e53e-openstack-config\") pod \"openstackclient\" (UID: \"2e82084e-a68b-4e41-9d23-8888ab97e53e\") " pod="openstack/openstackclient" Feb 02 21:40:25 crc kubenswrapper[4789]: I0202 21:40:25.294954 4789 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="61697303-0c26-461f-b8c3-f9716cc0a308" podUID="2e82084e-a68b-4e41-9d23-8888ab97e53e" Feb 02 21:40:25 crc kubenswrapper[4789]: I0202 21:40:25.300129 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2e82084e-a68b-4e41-9d23-8888ab97e53e-openstack-config-secret\") pod \"openstackclient\" (UID: \"2e82084e-a68b-4e41-9d23-8888ab97e53e\") " pod="openstack/openstackclient" Feb 02 21:40:25 crc kubenswrapper[4789]: I0202 21:40:25.300393 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e82084e-a68b-4e41-9d23-8888ab97e53e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2e82084e-a68b-4e41-9d23-8888ab97e53e\") " pod="openstack/openstackclient" Feb 02 21:40:25 crc kubenswrapper[4789]: I0202 21:40:25.317213 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whdmt\" (UniqueName: \"kubernetes.io/projected/2e82084e-a68b-4e41-9d23-8888ab97e53e-kube-api-access-whdmt\") pod \"openstackclient\" (UID: \"2e82084e-a68b-4e41-9d23-8888ab97e53e\") " pod="openstack/openstackclient" Feb 02 21:40:25 crc kubenswrapper[4789]: I0202 21:40:25.391752 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 02 21:40:25 crc kubenswrapper[4789]: I0202 21:40:25.473829 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 02 21:40:25 crc kubenswrapper[4789]: I0202 21:40:25.499295 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mchs\" (UniqueName: \"kubernetes.io/projected/61697303-0c26-461f-b8c3-f9716cc0a308-kube-api-access-9mchs\") pod \"openstackclient\" (UID: \"61697303-0c26-461f-b8c3-f9716cc0a308\") " pod="openstack/openstackclient" Feb 02 21:40:25 crc kubenswrapper[4789]: E0202 21:40:25.505961 4789 projected.go:194] Error preparing data for projected volume kube-api-access-9mchs for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (61697303-0c26-461f-b8c3-f9716cc0a308) does not match the UID in record. The object might have been deleted and then recreated Feb 02 21:40:25 crc kubenswrapper[4789]: E0202 21:40:25.506017 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/61697303-0c26-461f-b8c3-f9716cc0a308-kube-api-access-9mchs podName:61697303-0c26-461f-b8c3-f9716cc0a308 nodeName:}" failed. No retries permitted until 2026-02-02 21:40:26.505998945 +0000 UTC m=+1246.801023964 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-9mchs" (UniqueName: "kubernetes.io/projected/61697303-0c26-461f-b8c3-f9716cc0a308-kube-api-access-9mchs") pod "openstackclient" (UID: "61697303-0c26-461f-b8c3-f9716cc0a308") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (61697303-0c26-461f-b8c3-f9716cc0a308) does not match the UID in record. 
The object might have been deleted and then recreated Feb 02 21:40:25 crc kubenswrapper[4789]: I0202 21:40:25.600289 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/61697303-0c26-461f-b8c3-f9716cc0a308-openstack-config\") pod \"61697303-0c26-461f-b8c3-f9716cc0a308\" (UID: \"61697303-0c26-461f-b8c3-f9716cc0a308\") " Feb 02 21:40:25 crc kubenswrapper[4789]: I0202 21:40:25.600476 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61697303-0c26-461f-b8c3-f9716cc0a308-combined-ca-bundle\") pod \"61697303-0c26-461f-b8c3-f9716cc0a308\" (UID: \"61697303-0c26-461f-b8c3-f9716cc0a308\") " Feb 02 21:40:25 crc kubenswrapper[4789]: I0202 21:40:25.600539 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/61697303-0c26-461f-b8c3-f9716cc0a308-openstack-config-secret\") pod \"61697303-0c26-461f-b8c3-f9716cc0a308\" (UID: \"61697303-0c26-461f-b8c3-f9716cc0a308\") " Feb 02 21:40:25 crc kubenswrapper[4789]: I0202 21:40:25.601052 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mchs\" (UniqueName: \"kubernetes.io/projected/61697303-0c26-461f-b8c3-f9716cc0a308-kube-api-access-9mchs\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:25 crc kubenswrapper[4789]: I0202 21:40:25.603462 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61697303-0c26-461f-b8c3-f9716cc0a308-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "61697303-0c26-461f-b8c3-f9716cc0a308" (UID: "61697303-0c26-461f-b8c3-f9716cc0a308"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:40:25 crc kubenswrapper[4789]: I0202 21:40:25.608783 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61697303-0c26-461f-b8c3-f9716cc0a308-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "61697303-0c26-461f-b8c3-f9716cc0a308" (UID: "61697303-0c26-461f-b8c3-f9716cc0a308"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:40:25 crc kubenswrapper[4789]: I0202 21:40:25.613316 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61697303-0c26-461f-b8c3-f9716cc0a308-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61697303-0c26-461f-b8c3-f9716cc0a308" (UID: "61697303-0c26-461f-b8c3-f9716cc0a308"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:40:25 crc kubenswrapper[4789]: I0202 21:40:25.702880 4789 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/61697303-0c26-461f-b8c3-f9716cc0a308-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:25 crc kubenswrapper[4789]: I0202 21:40:25.702912 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61697303-0c26-461f-b8c3-f9716cc0a308-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:25 crc kubenswrapper[4789]: I0202 21:40:25.702924 4789 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/61697303-0c26-461f-b8c3-f9716cc0a308-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:25 crc kubenswrapper[4789]: I0202 21:40:25.882048 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 02 21:40:25 crc kubenswrapper[4789]: W0202 21:40:25.888534 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e82084e_a68b_4e41_9d23_8888ab97e53e.slice/crio-b7a3eefbe3e67595c7995ca3bcfa3c0d093bf36e637ff67fea32bc12e4c67a32 WatchSource:0}: Error finding container b7a3eefbe3e67595c7995ca3bcfa3c0d093bf36e637ff67fea32bc12e4c67a32: Status 404 returned error can't find the container with id b7a3eefbe3e67595c7995ca3bcfa3c0d093bf36e637ff67fea32bc12e4c67a32 Feb 02 21:40:26 crc kubenswrapper[4789]: I0202 21:40:26.300801 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"2e82084e-a68b-4e41-9d23-8888ab97e53e","Type":"ContainerStarted","Data":"b7a3eefbe3e67595c7995ca3bcfa3c0d093bf36e637ff67fea32bc12e4c67a32"} Feb 02 21:40:26 crc kubenswrapper[4789]: I0202 21:40:26.300845 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 02 21:40:26 crc kubenswrapper[4789]: I0202 21:40:26.303547 4789 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="61697303-0c26-461f-b8c3-f9716cc0a308" podUID="2e82084e-a68b-4e41-9d23-8888ab97e53e" Feb 02 21:40:26 crc kubenswrapper[4789]: I0202 21:40:26.371574 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-76b9c6fd6-4jjdd" podUID="f72393b9-cc1a-4feb-9089-259fd674fb19" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": read tcp 10.217.0.2:44272->10.217.0.162:9311: read: connection reset by peer" Feb 02 21:40:26 crc kubenswrapper[4789]: I0202 21:40:26.372162 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-76b9c6fd6-4jjdd" podUID="f72393b9-cc1a-4feb-9089-259fd674fb19" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": read tcp 10.217.0.2:44282->10.217.0.162:9311: read: connection reset by peer" Feb 02 21:40:26 crc kubenswrapper[4789]: I0202 21:40:26.433941 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61697303-0c26-461f-b8c3-f9716cc0a308" path="/var/lib/kubelet/pods/61697303-0c26-461f-b8c3-f9716cc0a308/volumes" Feb 02 21:40:26 crc kubenswrapper[4789]: I0202 21:40:26.879016 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-76b9c6fd6-4jjdd" Feb 02 21:40:27 crc kubenswrapper[4789]: I0202 21:40:27.025630 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f72393b9-cc1a-4feb-9089-259fd674fb19-config-data-custom\") pod \"f72393b9-cc1a-4feb-9089-259fd674fb19\" (UID: \"f72393b9-cc1a-4feb-9089-259fd674fb19\") " Feb 02 21:40:27 crc kubenswrapper[4789]: I0202 21:40:27.025977 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f72393b9-cc1a-4feb-9089-259fd674fb19-combined-ca-bundle\") pod \"f72393b9-cc1a-4feb-9089-259fd674fb19\" (UID: \"f72393b9-cc1a-4feb-9089-259fd674fb19\") " Feb 02 21:40:27 crc kubenswrapper[4789]: I0202 21:40:27.026027 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f72393b9-cc1a-4feb-9089-259fd674fb19-config-data\") pod \"f72393b9-cc1a-4feb-9089-259fd674fb19\" (UID: \"f72393b9-cc1a-4feb-9089-259fd674fb19\") " Feb 02 21:40:27 crc kubenswrapper[4789]: I0202 21:40:27.026145 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwg84\" (UniqueName: \"kubernetes.io/projected/f72393b9-cc1a-4feb-9089-259fd674fb19-kube-api-access-kwg84\") pod \"f72393b9-cc1a-4feb-9089-259fd674fb19\" (UID: \"f72393b9-cc1a-4feb-9089-259fd674fb19\") " Feb 02 21:40:27 crc kubenswrapper[4789]: I0202 21:40:27.026168 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f72393b9-cc1a-4feb-9089-259fd674fb19-logs\") pod \"f72393b9-cc1a-4feb-9089-259fd674fb19\" (UID: \"f72393b9-cc1a-4feb-9089-259fd674fb19\") " Feb 02 21:40:27 crc kubenswrapper[4789]: I0202 21:40:27.026823 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f72393b9-cc1a-4feb-9089-259fd674fb19-logs" (OuterVolumeSpecName: "logs") pod "f72393b9-cc1a-4feb-9089-259fd674fb19" (UID: "f72393b9-cc1a-4feb-9089-259fd674fb19"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:40:27 crc kubenswrapper[4789]: I0202 21:40:27.033243 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f72393b9-cc1a-4feb-9089-259fd674fb19-kube-api-access-kwg84" (OuterVolumeSpecName: "kube-api-access-kwg84") pod "f72393b9-cc1a-4feb-9089-259fd674fb19" (UID: "f72393b9-cc1a-4feb-9089-259fd674fb19"). InnerVolumeSpecName "kube-api-access-kwg84". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:40:27 crc kubenswrapper[4789]: I0202 21:40:27.035303 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f72393b9-cc1a-4feb-9089-259fd674fb19-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f72393b9-cc1a-4feb-9089-259fd674fb19" (UID: "f72393b9-cc1a-4feb-9089-259fd674fb19"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:40:27 crc kubenswrapper[4789]: I0202 21:40:27.066402 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f72393b9-cc1a-4feb-9089-259fd674fb19-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f72393b9-cc1a-4feb-9089-259fd674fb19" (UID: "f72393b9-cc1a-4feb-9089-259fd674fb19"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:40:27 crc kubenswrapper[4789]: I0202 21:40:27.110980 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f72393b9-cc1a-4feb-9089-259fd674fb19-config-data" (OuterVolumeSpecName: "config-data") pod "f72393b9-cc1a-4feb-9089-259fd674fb19" (UID: "f72393b9-cc1a-4feb-9089-259fd674fb19"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:40:27 crc kubenswrapper[4789]: I0202 21:40:27.128927 4789 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f72393b9-cc1a-4feb-9089-259fd674fb19-logs\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:27 crc kubenswrapper[4789]: I0202 21:40:27.128960 4789 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f72393b9-cc1a-4feb-9089-259fd674fb19-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:27 crc kubenswrapper[4789]: I0202 21:40:27.128971 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f72393b9-cc1a-4feb-9089-259fd674fb19-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:27 crc kubenswrapper[4789]: I0202 21:40:27.128984 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f72393b9-cc1a-4feb-9089-259fd674fb19-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:27 crc kubenswrapper[4789]: I0202 21:40:27.128994 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwg84\" (UniqueName: \"kubernetes.io/projected/f72393b9-cc1a-4feb-9089-259fd674fb19-kube-api-access-kwg84\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:27 crc kubenswrapper[4789]: I0202 21:40:27.320310 4789 generic.go:334] "Generic (PLEG): container finished" podID="f72393b9-cc1a-4feb-9089-259fd674fb19" containerID="f33d6cab0e4515f9fc25b935375bb01cac91eedda8fde6dad6fe745f2af17953" exitCode=0 Feb 02 21:40:27 crc kubenswrapper[4789]: I0202 21:40:27.320351 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76b9c6fd6-4jjdd" event={"ID":"f72393b9-cc1a-4feb-9089-259fd674fb19","Type":"ContainerDied","Data":"f33d6cab0e4515f9fc25b935375bb01cac91eedda8fde6dad6fe745f2af17953"} Feb 02 21:40:27 crc kubenswrapper[4789]: I0202 21:40:27.320382 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76b9c6fd6-4jjdd" event={"ID":"f72393b9-cc1a-4feb-9089-259fd674fb19","Type":"ContainerDied","Data":"649b76ad00cf6b85c009544c2004dbbd2e85763bd5dcf8993c42b2685a6ee53d"} Feb 02 21:40:27 crc kubenswrapper[4789]: I0202 21:40:27.320403 4789 scope.go:117] "RemoveContainer" containerID="f33d6cab0e4515f9fc25b935375bb01cac91eedda8fde6dad6fe745f2af17953" Feb 02 21:40:27 crc kubenswrapper[4789]: I0202 21:40:27.320411 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-76b9c6fd6-4jjdd" Feb 02 21:40:27 crc kubenswrapper[4789]: I0202 21:40:27.353769 4789 scope.go:117] "RemoveContainer" containerID="09e3ebecf8199c9c2015b9af6285917464358df228585d459b775ab95c46f735" Feb 02 21:40:27 crc kubenswrapper[4789]: I0202 21:40:27.366785 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-76b9c6fd6-4jjdd"] Feb 02 21:40:27 crc kubenswrapper[4789]: I0202 21:40:27.372849 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-76b9c6fd6-4jjdd"] Feb 02 21:40:27 crc kubenswrapper[4789]: I0202 21:40:27.373915 4789 scope.go:117] "RemoveContainer" containerID="f33d6cab0e4515f9fc25b935375bb01cac91eedda8fde6dad6fe745f2af17953" Feb 02 21:40:27 crc kubenswrapper[4789]: E0202 21:40:27.374385 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f33d6cab0e4515f9fc25b935375bb01cac91eedda8fde6dad6fe745f2af17953\": container with ID starting with f33d6cab0e4515f9fc25b935375bb01cac91eedda8fde6dad6fe745f2af17953 not found: ID does not exist" containerID="f33d6cab0e4515f9fc25b935375bb01cac91eedda8fde6dad6fe745f2af17953" Feb 02 21:40:27 crc kubenswrapper[4789]: I0202 21:40:27.374428 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f33d6cab0e4515f9fc25b935375bb01cac91eedda8fde6dad6fe745f2af17953"} err="failed to get container status \"f33d6cab0e4515f9fc25b935375bb01cac91eedda8fde6dad6fe745f2af17953\": rpc error: code = NotFound desc = could not find container \"f33d6cab0e4515f9fc25b935375bb01cac91eedda8fde6dad6fe745f2af17953\": container with ID starting with f33d6cab0e4515f9fc25b935375bb01cac91eedda8fde6dad6fe745f2af17953 not found: ID does not exist" Feb 02 21:40:27 crc kubenswrapper[4789]: I0202 21:40:27.374457 4789 scope.go:117] "RemoveContainer" containerID="09e3ebecf8199c9c2015b9af6285917464358df228585d459b775ab95c46f735" Feb 02 21:40:27 crc kubenswrapper[4789]: E0202 21:40:27.374864 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09e3ebecf8199c9c2015b9af6285917464358df228585d459b775ab95c46f735\": container with ID starting with 09e3ebecf8199c9c2015b9af6285917464358df228585d459b775ab95c46f735 not found: ID does not exist" containerID="09e3ebecf8199c9c2015b9af6285917464358df228585d459b775ab95c46f735" Feb 02 21:40:27 crc kubenswrapper[4789]: I0202 21:40:27.374902 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09e3ebecf8199c9c2015b9af6285917464358df228585d459b775ab95c46f735"} err="failed to get container status \"09e3ebecf8199c9c2015b9af6285917464358df228585d459b775ab95c46f735\": rpc error: code = NotFound desc = could not find container \"09e3ebecf8199c9c2015b9af6285917464358df228585d459b775ab95c46f735\": container with ID starting with 09e3ebecf8199c9c2015b9af6285917464358df228585d459b775ab95c46f735 not found: ID does not exist" Feb 02 21:40:28 crc kubenswrapper[4789]: I0202 21:40:28.429913 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f72393b9-cc1a-4feb-9089-259fd674fb19" path="/var/lib/kubelet/pods/f72393b9-cc1a-4feb-9089-259fd674fb19/volumes" Feb 02 21:40:29 crc kubenswrapper[4789]: I0202 21:40:29.290879 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-64bb487f87-44hz8"] Feb 02 21:40:29 crc kubenswrapper[4789]: E0202 21:40:29.291379 4789 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f72393b9-cc1a-4feb-9089-259fd674fb19" containerName="barbican-api" Feb 02 21:40:29 crc kubenswrapper[4789]: I0202 21:40:29.291395 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="f72393b9-cc1a-4feb-9089-259fd674fb19" containerName="barbican-api" Feb 02 21:40:29 crc kubenswrapper[4789]: E0202 21:40:29.291432 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f72393b9-cc1a-4feb-9089-259fd674fb19" containerName="barbican-api-log" Feb 02 21:40:29 crc kubenswrapper[4789]: I0202 21:40:29.291451 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="f72393b9-cc1a-4feb-9089-259fd674fb19" containerName="barbican-api-log" Feb 02 21:40:29 crc kubenswrapper[4789]: I0202 21:40:29.291652 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="f72393b9-cc1a-4feb-9089-259fd674fb19" containerName="barbican-api" Feb 02 21:40:29 crc kubenswrapper[4789]: I0202 21:40:29.291688 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="f72393b9-cc1a-4feb-9089-259fd674fb19" containerName="barbican-api-log" Feb 02 21:40:29 crc kubenswrapper[4789]: I0202 21:40:29.292784 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-64bb487f87-44hz8" Feb 02 21:40:29 crc kubenswrapper[4789]: I0202 21:40:29.297977 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 02 21:40:29 crc kubenswrapper[4789]: I0202 21:40:29.298002 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 02 21:40:29 crc kubenswrapper[4789]: I0202 21:40:29.298198 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 02 21:40:29 crc kubenswrapper[4789]: I0202 21:40:29.304227 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-64bb487f87-44hz8"] Feb 02 21:40:29 crc kubenswrapper[4789]: I0202 21:40:29.475370 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxwlc\" (UniqueName: \"kubernetes.io/projected/c08255d0-1dd6-4556-8f30-65367b7739f7-kube-api-access-rxwlc\") pod \"swift-proxy-64bb487f87-44hz8\" (UID: \"c08255d0-1dd6-4556-8f30-65367b7739f7\") " pod="openstack/swift-proxy-64bb487f87-44hz8" Feb 02 21:40:29 crc kubenswrapper[4789]: I0202 21:40:29.475415 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c08255d0-1dd6-4556-8f30-65367b7739f7-log-httpd\") pod \"swift-proxy-64bb487f87-44hz8\" (UID: \"c08255d0-1dd6-4556-8f30-65367b7739f7\") " pod="openstack/swift-proxy-64bb487f87-44hz8" Feb 02 21:40:29 crc kubenswrapper[4789]: I0202 21:40:29.475506 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c08255d0-1dd6-4556-8f30-65367b7739f7-config-data\") pod \"swift-proxy-64bb487f87-44hz8\" (UID: \"c08255d0-1dd6-4556-8f30-65367b7739f7\") " pod="openstack/swift-proxy-64bb487f87-44hz8" Feb 02 21:40:29 crc kubenswrapper[4789]: I0202 21:40:29.475614 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c08255d0-1dd6-4556-8f30-65367b7739f7-combined-ca-bundle\") pod \"swift-proxy-64bb487f87-44hz8\" (UID: \"c08255d0-1dd6-4556-8f30-65367b7739f7\") " 
pod="openstack/swift-proxy-64bb487f87-44hz8" Feb 02 21:40:29 crc kubenswrapper[4789]: I0202 21:40:29.475672 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c08255d0-1dd6-4556-8f30-65367b7739f7-run-httpd\") pod \"swift-proxy-64bb487f87-44hz8\" (UID: \"c08255d0-1dd6-4556-8f30-65367b7739f7\") " pod="openstack/swift-proxy-64bb487f87-44hz8" Feb 02 21:40:29 crc kubenswrapper[4789]: I0202 21:40:29.475697 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c08255d0-1dd6-4556-8f30-65367b7739f7-public-tls-certs\") pod \"swift-proxy-64bb487f87-44hz8\" (UID: \"c08255d0-1dd6-4556-8f30-65367b7739f7\") " pod="openstack/swift-proxy-64bb487f87-44hz8" Feb 02 21:40:29 crc kubenswrapper[4789]: I0202 21:40:29.475719 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c08255d0-1dd6-4556-8f30-65367b7739f7-etc-swift\") pod \"swift-proxy-64bb487f87-44hz8\" (UID: \"c08255d0-1dd6-4556-8f30-65367b7739f7\") " pod="openstack/swift-proxy-64bb487f87-44hz8" Feb 02 21:40:29 crc kubenswrapper[4789]: I0202 21:40:29.475744 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c08255d0-1dd6-4556-8f30-65367b7739f7-internal-tls-certs\") pod \"swift-proxy-64bb487f87-44hz8\" (UID: \"c08255d0-1dd6-4556-8f30-65367b7739f7\") " pod="openstack/swift-proxy-64bb487f87-44hz8" Feb 02 21:40:29 crc kubenswrapper[4789]: I0202 21:40:29.577264 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c08255d0-1dd6-4556-8f30-65367b7739f7-internal-tls-certs\") pod \"swift-proxy-64bb487f87-44hz8\" (UID: \"c08255d0-1dd6-4556-8f30-65367b7739f7\") " pod="openstack/swift-proxy-64bb487f87-44hz8" Feb 02 21:40:29 crc kubenswrapper[4789]: I0202 21:40:29.577379 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxwlc\" (UniqueName: \"kubernetes.io/projected/c08255d0-1dd6-4556-8f30-65367b7739f7-kube-api-access-rxwlc\") pod \"swift-proxy-64bb487f87-44hz8\" (UID: \"c08255d0-1dd6-4556-8f30-65367b7739f7\") " pod="openstack/swift-proxy-64bb487f87-44hz8" Feb 02 21:40:29 crc kubenswrapper[4789]: I0202 21:40:29.577444 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c08255d0-1dd6-4556-8f30-65367b7739f7-log-httpd\") pod \"swift-proxy-64bb487f87-44hz8\" (UID: \"c08255d0-1dd6-4556-8f30-65367b7739f7\") " pod="openstack/swift-proxy-64bb487f87-44hz8" Feb 02 21:40:29 crc kubenswrapper[4789]: I0202 21:40:29.577975 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c08255d0-1dd6-4556-8f30-65367b7739f7-log-httpd\") pod \"swift-proxy-64bb487f87-44hz8\" (UID: \"c08255d0-1dd6-4556-8f30-65367b7739f7\") " pod="openstack/swift-proxy-64bb487f87-44hz8" Feb 02 21:40:29 crc kubenswrapper[4789]: I0202 21:40:29.578036 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c08255d0-1dd6-4556-8f30-65367b7739f7-config-data\") pod \"swift-proxy-64bb487f87-44hz8\" (UID: \"c08255d0-1dd6-4556-8f30-65367b7739f7\") " 
pod="openstack/swift-proxy-64bb487f87-44hz8" Feb 02 21:40:29 crc kubenswrapper[4789]: I0202 21:40:29.578141 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c08255d0-1dd6-4556-8f30-65367b7739f7-combined-ca-bundle\") pod \"swift-proxy-64bb487f87-44hz8\" (UID: \"c08255d0-1dd6-4556-8f30-65367b7739f7\") " pod="openstack/swift-proxy-64bb487f87-44hz8" Feb 02 21:40:29 crc kubenswrapper[4789]: I0202 21:40:29.578520 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c08255d0-1dd6-4556-8f30-65367b7739f7-run-httpd\") pod \"swift-proxy-64bb487f87-44hz8\" (UID: \"c08255d0-1dd6-4556-8f30-65367b7739f7\") " pod="openstack/swift-proxy-64bb487f87-44hz8" Feb 02 21:40:29 crc kubenswrapper[4789]: I0202 21:40:29.578555 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c08255d0-1dd6-4556-8f30-65367b7739f7-public-tls-certs\") pod \"swift-proxy-64bb487f87-44hz8\" (UID: \"c08255d0-1dd6-4556-8f30-65367b7739f7\") " pod="openstack/swift-proxy-64bb487f87-44hz8" Feb 02 21:40:29 crc kubenswrapper[4789]: I0202 21:40:29.578576 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c08255d0-1dd6-4556-8f30-65367b7739f7-etc-swift\") pod \"swift-proxy-64bb487f87-44hz8\" (UID: \"c08255d0-1dd6-4556-8f30-65367b7739f7\") " pod="openstack/swift-proxy-64bb487f87-44hz8" Feb 02 21:40:29 crc kubenswrapper[4789]: I0202 21:40:29.578839 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c08255d0-1dd6-4556-8f30-65367b7739f7-run-httpd\") pod \"swift-proxy-64bb487f87-44hz8\" (UID: \"c08255d0-1dd6-4556-8f30-65367b7739f7\") " pod="openstack/swift-proxy-64bb487f87-44hz8" Feb 02 21:40:29 crc kubenswrapper[4789]: I0202 21:40:29.583119 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c08255d0-1dd6-4556-8f30-65367b7739f7-combined-ca-bundle\") pod \"swift-proxy-64bb487f87-44hz8\" (UID: \"c08255d0-1dd6-4556-8f30-65367b7739f7\") " pod="openstack/swift-proxy-64bb487f87-44hz8" Feb 02 21:40:29 crc kubenswrapper[4789]: I0202 21:40:29.583611 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c08255d0-1dd6-4556-8f30-65367b7739f7-config-data\") pod \"swift-proxy-64bb487f87-44hz8\" (UID: \"c08255d0-1dd6-4556-8f30-65367b7739f7\") " pod="openstack/swift-proxy-64bb487f87-44hz8" Feb 02 21:40:29 crc kubenswrapper[4789]: I0202 21:40:29.584515 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c08255d0-1dd6-4556-8f30-65367b7739f7-etc-swift\") pod \"swift-proxy-64bb487f87-44hz8\" (UID: \"c08255d0-1dd6-4556-8f30-65367b7739f7\") " pod="openstack/swift-proxy-64bb487f87-44hz8" Feb 02 21:40:29 crc kubenswrapper[4789]: I0202 21:40:29.585207 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c08255d0-1dd6-4556-8f30-65367b7739f7-public-tls-certs\") pod \"swift-proxy-64bb487f87-44hz8\" (UID: \"c08255d0-1dd6-4556-8f30-65367b7739f7\") " pod="openstack/swift-proxy-64bb487f87-44hz8" Feb 02 21:40:29 crc kubenswrapper[4789]: I0202 21:40:29.585509 4789 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c08255d0-1dd6-4556-8f30-65367b7739f7-internal-tls-certs\") pod \"swift-proxy-64bb487f87-44hz8\" (UID: \"c08255d0-1dd6-4556-8f30-65367b7739f7\") " pod="openstack/swift-proxy-64bb487f87-44hz8" Feb 02 21:40:29 crc kubenswrapper[4789]: I0202 21:40:29.592084 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxwlc\" (UniqueName: \"kubernetes.io/projected/c08255d0-1dd6-4556-8f30-65367b7739f7-kube-api-access-rxwlc\") pod \"swift-proxy-64bb487f87-44hz8\" (UID: \"c08255d0-1dd6-4556-8f30-65367b7739f7\") " pod="openstack/swift-proxy-64bb487f87-44hz8" Feb 02 21:40:29 crc kubenswrapper[4789]: I0202 21:40:29.613148 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-64bb487f87-44hz8" Feb 02 21:40:29 crc kubenswrapper[4789]: I0202 21:40:29.903806 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 02 21:40:30 crc kubenswrapper[4789]: I0202 21:40:30.179075 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-64bb487f87-44hz8"] Feb 02 21:40:31 crc kubenswrapper[4789]: I0202 21:40:31.328446 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 21:40:31 crc kubenswrapper[4789]: I0202 21:40:31.329031 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5aa4ba1a-609b-483a-9754-8edf78aa7005" containerName="ceilometer-central-agent" containerID="cri-o://3f71d60c1d0c4efb65058a7c28d385073be727dfe87a5b819bf554bb70893395" gracePeriod=30 Feb 02 21:40:31 crc kubenswrapper[4789]: I0202 21:40:31.329726 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5aa4ba1a-609b-483a-9754-8edf78aa7005" containerName="proxy-httpd" containerID="cri-o://3828f3d41f48ee6fecf554ae9ad3599ba717793245db0423f1febe380ff11d25" gracePeriod=30 Feb 02 21:40:31 crc kubenswrapper[4789]: I0202 21:40:31.329779 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5aa4ba1a-609b-483a-9754-8edf78aa7005" containerName="sg-core" containerID="cri-o://8462926ba635cfdccaca393080803e41831db8362d17e2c80b17df9d05bdc58a" gracePeriod=30 Feb 02 21:40:31 crc kubenswrapper[4789]: I0202 21:40:31.329814 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5aa4ba1a-609b-483a-9754-8edf78aa7005" containerName="ceilometer-notification-agent" containerID="cri-o://a2a34b67ac44800a4e7113e77632478a598a725c39031ea40fca40f80ce116f7" gracePeriod=30 Feb 02 21:40:31 crc kubenswrapper[4789]: I0202 21:40:31.334268 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 02 21:40:32 crc kubenswrapper[4789]: I0202 21:40:32.384468 4789 generic.go:334] "Generic (PLEG): container finished" podID="5aa4ba1a-609b-483a-9754-8edf78aa7005" containerID="3828f3d41f48ee6fecf554ae9ad3599ba717793245db0423f1febe380ff11d25" exitCode=0 Feb 02 21:40:32 crc kubenswrapper[4789]: I0202 21:40:32.384528 4789 generic.go:334] "Generic (PLEG): container finished" podID="5aa4ba1a-609b-483a-9754-8edf78aa7005" containerID="8462926ba635cfdccaca393080803e41831db8362d17e2c80b17df9d05bdc58a" exitCode=2 Feb 02 21:40:32 crc kubenswrapper[4789]: I0202 21:40:32.384536 4789 kubelet.go:2453] "SyncLoop 
Feb 02 21:40:32 crc kubenswrapper[4789]: I0202 21:40:32.384613 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5aa4ba1a-609b-483a-9754-8edf78aa7005","Type":"ContainerDied","Data":"8462926ba635cfdccaca393080803e41831db8362d17e2c80b17df9d05bdc58a"}
Feb 02 21:40:32 crc kubenswrapper[4789]: I0202 21:40:32.384628 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5aa4ba1a-609b-483a-9754-8edf78aa7005","Type":"ContainerDied","Data":"a2a34b67ac44800a4e7113e77632478a598a725c39031ea40fca40f80ce116f7"}
Feb 02 21:40:32 crc kubenswrapper[4789]: I0202 21:40:32.384554 4789 generic.go:334] "Generic (PLEG): container finished" podID="5aa4ba1a-609b-483a-9754-8edf78aa7005" containerID="a2a34b67ac44800a4e7113e77632478a598a725c39031ea40fca40f80ce116f7" exitCode=0
Feb 02 21:40:32 crc kubenswrapper[4789]: I0202 21:40:32.384661 4789 generic.go:334] "Generic (PLEG): container finished" podID="5aa4ba1a-609b-483a-9754-8edf78aa7005" containerID="3f71d60c1d0c4efb65058a7c28d385073be727dfe87a5b819bf554bb70893395" exitCode=0
Feb 02 21:40:32 crc kubenswrapper[4789]: I0202 21:40:32.384682 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5aa4ba1a-609b-483a-9754-8edf78aa7005","Type":"ContainerDied","Data":"3f71d60c1d0c4efb65058a7c28d385073be727dfe87a5b819bf554bb70893395"}
Feb 02 21:40:35 crc kubenswrapper[4789]: W0202 21:40:35.390390 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc08255d0_1dd6_4556_8f30_65367b7739f7.slice/crio-8851dc22e78005742ed4deebbb727c494051bb43cf575e215b7106870d3c7a31 WatchSource:0}: Error finding container 8851dc22e78005742ed4deebbb727c494051bb43cf575e215b7106870d3c7a31: Status 404 returned error can't find the container with id 8851dc22e78005742ed4deebbb727c494051bb43cf575e215b7106870d3c7a31
Feb 02 21:40:35 crc kubenswrapper[4789]: I0202 21:40:35.437914 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-64bb487f87-44hz8" event={"ID":"c08255d0-1dd6-4556-8f30-65367b7739f7","Type":"ContainerStarted","Data":"8851dc22e78005742ed4deebbb727c494051bb43cf575e215b7106870d3c7a31"}
Feb 02 21:40:35 crc kubenswrapper[4789]: I0202 21:40:35.761242 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 21:40:35 crc kubenswrapper[4789]: I0202 21:40:35.785885 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 02 21:40:35 crc kubenswrapper[4789]: I0202 21:40:35.786147 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="322d725c-ac03-4759-a08c-e534a70d1ec3" containerName="glance-log" containerID="cri-o://bc11dcab315b199d84e1245aed796e1917ddb88b3f9915b4433d912d44938385" gracePeriod=30
Feb 02 21:40:35 crc kubenswrapper[4789]: I0202 21:40:35.786295 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="322d725c-ac03-4759-a08c-e534a70d1ec3" containerName="glance-httpd" containerID="cri-o://2a4f42975491d16296d64ff16671d19f3f7ca2af54f908f63979957cce03acbc" gracePeriod=30
Feb 02 21:40:35 crc kubenswrapper[4789]: I0202 21:40:35.892072 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aa4ba1a-609b-483a-9754-8edf78aa7005-config-data\") pod \"5aa4ba1a-609b-483a-9754-8edf78aa7005\" (UID: \"5aa4ba1a-609b-483a-9754-8edf78aa7005\") "
Feb 02 21:40:35 crc kubenswrapper[4789]: I0202 21:40:35.892697 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5aa4ba1a-609b-483a-9754-8edf78aa7005-sg-core-conf-yaml\") pod \"5aa4ba1a-609b-483a-9754-8edf78aa7005\" (UID: \"5aa4ba1a-609b-483a-9754-8edf78aa7005\") "
Feb 02 21:40:35 crc kubenswrapper[4789]: I0202 21:40:35.892742 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5aa4ba1a-609b-483a-9754-8edf78aa7005-run-httpd\") pod \"5aa4ba1a-609b-483a-9754-8edf78aa7005\" (UID: \"5aa4ba1a-609b-483a-9754-8edf78aa7005\") "
Feb 02 21:40:35 crc kubenswrapper[4789]: I0202 21:40:35.892762 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5aa4ba1a-609b-483a-9754-8edf78aa7005-log-httpd\") pod \"5aa4ba1a-609b-483a-9754-8edf78aa7005\" (UID: \"5aa4ba1a-609b-483a-9754-8edf78aa7005\") "
Feb 02 21:40:35 crc kubenswrapper[4789]: I0202 21:40:35.892879 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aa4ba1a-609b-483a-9754-8edf78aa7005-combined-ca-bundle\") pod \"5aa4ba1a-609b-483a-9754-8edf78aa7005\" (UID: \"5aa4ba1a-609b-483a-9754-8edf78aa7005\") "
Feb 02 21:40:35 crc kubenswrapper[4789]: I0202 21:40:35.892921 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxwbt\" (UniqueName: \"kubernetes.io/projected/5aa4ba1a-609b-483a-9754-8edf78aa7005-kube-api-access-kxwbt\") pod \"5aa4ba1a-609b-483a-9754-8edf78aa7005\" (UID: \"5aa4ba1a-609b-483a-9754-8edf78aa7005\") "
Feb 02 21:40:35 crc kubenswrapper[4789]: I0202 21:40:35.892975 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5aa4ba1a-609b-483a-9754-8edf78aa7005-scripts\") pod \"5aa4ba1a-609b-483a-9754-8edf78aa7005\" (UID: \"5aa4ba1a-609b-483a-9754-8edf78aa7005\") "
Feb 02 21:40:35 crc kubenswrapper[4789]: I0202 21:40:35.893328 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5aa4ba1a-609b-483a-9754-8edf78aa7005-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5aa4ba1a-609b-483a-9754-8edf78aa7005" (UID: "5aa4ba1a-609b-483a-9754-8edf78aa7005"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 21:40:35 crc kubenswrapper[4789]: I0202 21:40:35.893528 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5aa4ba1a-609b-483a-9754-8edf78aa7005-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5aa4ba1a-609b-483a-9754-8edf78aa7005" (UID: "5aa4ba1a-609b-483a-9754-8edf78aa7005"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 21:40:35 crc kubenswrapper[4789]: I0202 21:40:35.897148 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5aa4ba1a-609b-483a-9754-8edf78aa7005-kube-api-access-kxwbt" (OuterVolumeSpecName: "kube-api-access-kxwbt") pod "5aa4ba1a-609b-483a-9754-8edf78aa7005" (UID: "5aa4ba1a-609b-483a-9754-8edf78aa7005"). InnerVolumeSpecName "kube-api-access-kxwbt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 21:40:35 crc kubenswrapper[4789]: I0202 21:40:35.911074 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5aa4ba1a-609b-483a-9754-8edf78aa7005-scripts" (OuterVolumeSpecName: "scripts") pod "5aa4ba1a-609b-483a-9754-8edf78aa7005" (UID: "5aa4ba1a-609b-483a-9754-8edf78aa7005"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 21:40:35 crc kubenswrapper[4789]: I0202 21:40:35.931825 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5aa4ba1a-609b-483a-9754-8edf78aa7005-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5aa4ba1a-609b-483a-9754-8edf78aa7005" (UID: "5aa4ba1a-609b-483a-9754-8edf78aa7005"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 21:40:35 crc kubenswrapper[4789]: I0202 21:40:35.986440 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5aa4ba1a-609b-483a-9754-8edf78aa7005-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5aa4ba1a-609b-483a-9754-8edf78aa7005" (UID: "5aa4ba1a-609b-483a-9754-8edf78aa7005"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 21:40:35 crc kubenswrapper[4789]: I0202 21:40:35.994787 4789 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5aa4ba1a-609b-483a-9754-8edf78aa7005-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 02 21:40:35 crc kubenswrapper[4789]: I0202 21:40:35.994854 4789 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5aa4ba1a-609b-483a-9754-8edf78aa7005-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 02 21:40:35 crc kubenswrapper[4789]: I0202 21:40:35.994863 4789 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5aa4ba1a-609b-483a-9754-8edf78aa7005-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 02 21:40:35 crc kubenswrapper[4789]: I0202 21:40:35.994872 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aa4ba1a-609b-483a-9754-8edf78aa7005-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 21:40:35 crc kubenswrapper[4789]: I0202 21:40:35.994881 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxwbt\" (UniqueName: \"kubernetes.io/projected/5aa4ba1a-609b-483a-9754-8edf78aa7005-kube-api-access-kxwbt\") on node \"crc\" DevicePath \"\""
Feb 02 21:40:35 crc kubenswrapper[4789]: I0202 21:40:35.994893 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5aa4ba1a-609b-483a-9754-8edf78aa7005-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 21:40:35 crc kubenswrapper[4789]: I0202 21:40:35.998149 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5aa4ba1a-609b-483a-9754-8edf78aa7005-config-data" (OuterVolumeSpecName: "config-data") pod "5aa4ba1a-609b-483a-9754-8edf78aa7005" (UID: "5aa4ba1a-609b-483a-9754-8edf78aa7005"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.095926 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aa4ba1a-609b-483a-9754-8edf78aa7005-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.431741 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.431981 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="dafa0ec8-f504-4174-b323-2a2d9f09ffb8" containerName="glance-log" containerID="cri-o://2fd71c7d0b224173b8bd6353601c10b697c5e97dfffbd457bae31ef0668e15e1" gracePeriod=30 Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.432162 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="dafa0ec8-f504-4174-b323-2a2d9f09ffb8" containerName="glance-httpd" containerID="cri-o://a69d96438857fda4e73a9d99a5ee151881ca4a7f917490f142b5ff27614c0058" gracePeriod=30 Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.461047 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-64bb487f87-44hz8" event={"ID":"c08255d0-1dd6-4556-8f30-65367b7739f7","Type":"ContainerStarted","Data":"7d439ddc975b276958da145eaf095401f24feaac00038ca172395cfdde929e83"} Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.461102 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-64bb487f87-44hz8" event={"ID":"c08255d0-1dd6-4556-8f30-65367b7739f7","Type":"ContainerStarted","Data":"711efcab439843aaeb94f91469a73186433bd21cfd9a9a56c0f9006d0ae1c9be"} Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.461296 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-64bb487f87-44hz8" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.461323 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-64bb487f87-44hz8" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.471051 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5aa4ba1a-609b-483a-9754-8edf78aa7005","Type":"ContainerDied","Data":"70f6f1e5ef75bb62ec00183491fc73f4856d00ad44462794e8992028c2521221"} Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.471099 4789 scope.go:117] "RemoveContainer" containerID="3828f3d41f48ee6fecf554ae9ad3599ba717793245db0423f1febe380ff11d25" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.471299 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.485201 4789 generic.go:334] "Generic (PLEG): container finished" podID="322d725c-ac03-4759-a08c-e534a70d1ec3" containerID="bc11dcab315b199d84e1245aed796e1917ddb88b3f9915b4433d912d44938385" exitCode=143 Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.485327 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"322d725c-ac03-4759-a08c-e534a70d1ec3","Type":"ContainerDied","Data":"bc11dcab315b199d84e1245aed796e1917ddb88b3f9915b4433d912d44938385"} Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.491743 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"2e82084e-a68b-4e41-9d23-8888ab97e53e","Type":"ContainerStarted","Data":"80ee62a2d791f82f667128eb01b609adcf2ee71d4a2647cc5abe16482c589540"} Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.492231 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-64bb487f87-44hz8" podStartSLOduration=7.492214561 podStartE2EDuration="7.492214561s" podCreationTimestamp="2026-02-02 21:40:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:40:36.484664708 +0000 UTC m=+1256.779689727" watchObservedRunningTime="2026-02-02 21:40:36.492214561 +0000 UTC m=+1256.787239580" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.506755 4789 scope.go:117] "RemoveContainer" containerID="8462926ba635cfdccaca393080803e41831db8362d17e2c80b17df9d05bdc58a" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.506920 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.512804 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.522351 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.9075956 podStartE2EDuration="12.522328923s" podCreationTimestamp="2026-02-02 21:40:24 +0000 UTC" firstStartedPulling="2026-02-02 21:40:25.891503032 +0000 UTC m=+1246.186528061" lastFinishedPulling="2026-02-02 21:40:35.506236365 +0000 UTC m=+1255.801261384" observedRunningTime="2026-02-02 21:40:36.521059677 +0000 UTC m=+1256.816084696" watchObservedRunningTime="2026-02-02 21:40:36.522328923 +0000 UTC m=+1256.817353942" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.597230 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 21:40:36 crc kubenswrapper[4789]: E0202 21:40:36.598282 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aa4ba1a-609b-483a-9754-8edf78aa7005" containerName="proxy-httpd" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.598314 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aa4ba1a-609b-483a-9754-8edf78aa7005" containerName="proxy-httpd" Feb 02 21:40:36 crc kubenswrapper[4789]: E0202 21:40:36.598352 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aa4ba1a-609b-483a-9754-8edf78aa7005" containerName="sg-core" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.598362 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aa4ba1a-609b-483a-9754-8edf78aa7005" containerName="sg-core" Feb 02 21:40:36 crc kubenswrapper[4789]: E0202 
21:40:36.598389 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aa4ba1a-609b-483a-9754-8edf78aa7005" containerName="ceilometer-notification-agent" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.598397 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aa4ba1a-609b-483a-9754-8edf78aa7005" containerName="ceilometer-notification-agent" Feb 02 21:40:36 crc kubenswrapper[4789]: E0202 21:40:36.598435 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aa4ba1a-609b-483a-9754-8edf78aa7005" containerName="ceilometer-central-agent" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.598444 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aa4ba1a-609b-483a-9754-8edf78aa7005" containerName="ceilometer-central-agent" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.598853 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="5aa4ba1a-609b-483a-9754-8edf78aa7005" containerName="sg-core" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.598909 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="5aa4ba1a-609b-483a-9754-8edf78aa7005" containerName="ceilometer-central-agent" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.598930 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="5aa4ba1a-609b-483a-9754-8edf78aa7005" containerName="ceilometer-notification-agent" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.598955 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="5aa4ba1a-609b-483a-9754-8edf78aa7005" containerName="proxy-httpd" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.610607 4789 scope.go:117] "RemoveContainer" containerID="a2a34b67ac44800a4e7113e77632478a598a725c39031ea40fca40f80ce116f7" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.612540 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.618969 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.622832 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.625374 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.634168 4789 scope.go:117] "RemoveContainer" containerID="3f71d60c1d0c4efb65058a7c28d385073be727dfe87a5b819bf554bb70893395" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.713887 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fbf342f-e489-4914-99d2-d2b5da9a3e75-scripts\") pod \"ceilometer-0\" (UID: \"9fbf342f-e489-4914-99d2-d2b5da9a3e75\") " pod="openstack/ceilometer-0" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.713962 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9fbf342f-e489-4914-99d2-d2b5da9a3e75-run-httpd\") pod \"ceilometer-0\" (UID: \"9fbf342f-e489-4914-99d2-d2b5da9a3e75\") " pod="openstack/ceilometer-0" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.714015 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fbf342f-e489-4914-99d2-d2b5da9a3e75-config-data\") pod \"ceilometer-0\" (UID: \"9fbf342f-e489-4914-99d2-d2b5da9a3e75\") " pod="openstack/ceilometer-0" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.714043 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9fbf342f-e489-4914-99d2-d2b5da9a3e75-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9fbf342f-e489-4914-99d2-d2b5da9a3e75\") " pod="openstack/ceilometer-0" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.714080 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fbf342f-e489-4914-99d2-d2b5da9a3e75-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9fbf342f-e489-4914-99d2-d2b5da9a3e75\") " pod="openstack/ceilometer-0" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.714104 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrp48\" (UniqueName: \"kubernetes.io/projected/9fbf342f-e489-4914-99d2-d2b5da9a3e75-kube-api-access-mrp48\") pod \"ceilometer-0\" (UID: \"9fbf342f-e489-4914-99d2-d2b5da9a3e75\") " pod="openstack/ceilometer-0" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.714143 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9fbf342f-e489-4914-99d2-d2b5da9a3e75-log-httpd\") pod \"ceilometer-0\" (UID: \"9fbf342f-e489-4914-99d2-d2b5da9a3e75\") " pod="openstack/ceilometer-0" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.815761 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/9fbf342f-e489-4914-99d2-d2b5da9a3e75-run-httpd\") pod \"ceilometer-0\" (UID: \"9fbf342f-e489-4914-99d2-d2b5da9a3e75\") " pod="openstack/ceilometer-0" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.815835 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fbf342f-e489-4914-99d2-d2b5da9a3e75-config-data\") pod \"ceilometer-0\" (UID: \"9fbf342f-e489-4914-99d2-d2b5da9a3e75\") " pod="openstack/ceilometer-0" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.815863 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9fbf342f-e489-4914-99d2-d2b5da9a3e75-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9fbf342f-e489-4914-99d2-d2b5da9a3e75\") " pod="openstack/ceilometer-0" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.815893 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fbf342f-e489-4914-99d2-d2b5da9a3e75-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9fbf342f-e489-4914-99d2-d2b5da9a3e75\") " pod="openstack/ceilometer-0" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.815912 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrp48\" (UniqueName: \"kubernetes.io/projected/9fbf342f-e489-4914-99d2-d2b5da9a3e75-kube-api-access-mrp48\") pod \"ceilometer-0\" (UID: \"9fbf342f-e489-4914-99d2-d2b5da9a3e75\") " pod="openstack/ceilometer-0" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.815944 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9fbf342f-e489-4914-99d2-d2b5da9a3e75-log-httpd\") pod \"ceilometer-0\" (UID: \"9fbf342f-e489-4914-99d2-d2b5da9a3e75\") " pod="openstack/ceilometer-0" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.815982 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fbf342f-e489-4914-99d2-d2b5da9a3e75-scripts\") pod \"ceilometer-0\" (UID: \"9fbf342f-e489-4914-99d2-d2b5da9a3e75\") " pod="openstack/ceilometer-0" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.816233 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9fbf342f-e489-4914-99d2-d2b5da9a3e75-run-httpd\") pod \"ceilometer-0\" (UID: \"9fbf342f-e489-4914-99d2-d2b5da9a3e75\") " pod="openstack/ceilometer-0" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.816733 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9fbf342f-e489-4914-99d2-d2b5da9a3e75-log-httpd\") pod \"ceilometer-0\" (UID: \"9fbf342f-e489-4914-99d2-d2b5da9a3e75\") " pod="openstack/ceilometer-0" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.823421 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9fbf342f-e489-4914-99d2-d2b5da9a3e75-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9fbf342f-e489-4914-99d2-d2b5da9a3e75\") " pod="openstack/ceilometer-0" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.823529 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9fbf342f-e489-4914-99d2-d2b5da9a3e75-config-data\") pod \"ceilometer-0\" (UID: \"9fbf342f-e489-4914-99d2-d2b5da9a3e75\") " pod="openstack/ceilometer-0" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.838277 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fbf342f-e489-4914-99d2-d2b5da9a3e75-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9fbf342f-e489-4914-99d2-d2b5da9a3e75\") " pod="openstack/ceilometer-0" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.838810 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fbf342f-e489-4914-99d2-d2b5da9a3e75-scripts\") pod \"ceilometer-0\" (UID: \"9fbf342f-e489-4914-99d2-d2b5da9a3e75\") " pod="openstack/ceilometer-0" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.851328 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrp48\" (UniqueName: \"kubernetes.io/projected/9fbf342f-e489-4914-99d2-d2b5da9a3e75-kube-api-access-mrp48\") pod \"ceilometer-0\" (UID: \"9fbf342f-e489-4914-99d2-d2b5da9a3e75\") " pod="openstack/ceilometer-0" Feb 02 21:40:36 crc kubenswrapper[4789]: I0202 21:40:36.951176 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 21:40:37 crc kubenswrapper[4789]: I0202 21:40:37.019376 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 21:40:37 crc kubenswrapper[4789]: I0202 21:40:37.057523 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 21:40:37 crc kubenswrapper[4789]: I0202 21:40:37.058042 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="b8a53bc3-3ae7-4358-8574-1adcd8d4fefb" containerName="kube-state-metrics" containerID="cri-o://fa090ae6cb37c6c6300df62319102a788932fa9ed0df451d265faa5feeafcb5f" gracePeriod=30 Feb 02 21:40:37 crc kubenswrapper[4789]: I0202 21:40:37.502522 4789 generic.go:334] "Generic (PLEG): container finished" podID="dafa0ec8-f504-4174-b323-2a2d9f09ffb8" containerID="2fd71c7d0b224173b8bd6353601c10b697c5e97dfffbd457bae31ef0668e15e1" exitCode=143 Feb 02 21:40:37 crc kubenswrapper[4789]: I0202 21:40:37.502676 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dafa0ec8-f504-4174-b323-2a2d9f09ffb8","Type":"ContainerDied","Data":"2fd71c7d0b224173b8bd6353601c10b697c5e97dfffbd457bae31ef0668e15e1"} Feb 02 21:40:37 crc kubenswrapper[4789]: I0202 21:40:37.509295 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 21:40:37 crc kubenswrapper[4789]: W0202 21:40:37.512907 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9fbf342f_e489_4914_99d2_d2b5da9a3e75.slice/crio-b456ca077c446b37143dd2847877295e046e152a000a6409622b37b8780c638e WatchSource:0}: Error finding container b456ca077c446b37143dd2847877295e046e152a000a6409622b37b8780c638e: Status 404 returned error can't find the container with id b456ca077c446b37143dd2847877295e046e152a000a6409622b37b8780c638e Feb 02 21:40:37 crc kubenswrapper[4789]: I0202 21:40:37.513077 4789 generic.go:334] "Generic (PLEG): container finished" podID="b8a53bc3-3ae7-4358-8574-1adcd8d4fefb" 
containerID="fa090ae6cb37c6c6300df62319102a788932fa9ed0df451d265faa5feeafcb5f" exitCode=2 Feb 02 21:40:37 crc kubenswrapper[4789]: I0202 21:40:37.513164 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b8a53bc3-3ae7-4358-8574-1adcd8d4fefb","Type":"ContainerDied","Data":"fa090ae6cb37c6c6300df62319102a788932fa9ed0df451d265faa5feeafcb5f"} Feb 02 21:40:37 crc kubenswrapper[4789]: I0202 21:40:37.601675 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 21:40:37 crc kubenswrapper[4789]: I0202 21:40:37.737765 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ccpc\" (UniqueName: \"kubernetes.io/projected/b8a53bc3-3ae7-4358-8574-1adcd8d4fefb-kube-api-access-9ccpc\") pod \"b8a53bc3-3ae7-4358-8574-1adcd8d4fefb\" (UID: \"b8a53bc3-3ae7-4358-8574-1adcd8d4fefb\") " Feb 02 21:40:37 crc kubenswrapper[4789]: I0202 21:40:37.753303 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8a53bc3-3ae7-4358-8574-1adcd8d4fefb-kube-api-access-9ccpc" (OuterVolumeSpecName: "kube-api-access-9ccpc") pod "b8a53bc3-3ae7-4358-8574-1adcd8d4fefb" (UID: "b8a53bc3-3ae7-4358-8574-1adcd8d4fefb"). InnerVolumeSpecName "kube-api-access-9ccpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:40:37 crc kubenswrapper[4789]: I0202 21:40:37.840032 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ccpc\" (UniqueName: \"kubernetes.io/projected/b8a53bc3-3ae7-4358-8574-1adcd8d4fefb-kube-api-access-9ccpc\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:38 crc kubenswrapper[4789]: I0202 21:40:38.434741 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5aa4ba1a-609b-483a-9754-8edf78aa7005" path="/var/lib/kubelet/pods/5aa4ba1a-609b-483a-9754-8edf78aa7005/volumes" Feb 02 21:40:38 crc kubenswrapper[4789]: I0202 21:40:38.523252 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9fbf342f-e489-4914-99d2-d2b5da9a3e75","Type":"ContainerStarted","Data":"3da3de917d02b853103d72b4a2655d21606b0e94c091cada13d7340c8a07120d"} Feb 02 21:40:38 crc kubenswrapper[4789]: I0202 21:40:38.523298 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9fbf342f-e489-4914-99d2-d2b5da9a3e75","Type":"ContainerStarted","Data":"b456ca077c446b37143dd2847877295e046e152a000a6409622b37b8780c638e"} Feb 02 21:40:38 crc kubenswrapper[4789]: I0202 21:40:38.526169 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b8a53bc3-3ae7-4358-8574-1adcd8d4fefb","Type":"ContainerDied","Data":"2345d36d16828447040cfec3a598737da8e60528ef54f41721c3920a078bff47"} Feb 02 21:40:38 crc kubenswrapper[4789]: I0202 21:40:38.526218 4789 scope.go:117] "RemoveContainer" containerID="fa090ae6cb37c6c6300df62319102a788932fa9ed0df451d265faa5feeafcb5f" Feb 02 21:40:38 crc kubenswrapper[4789]: I0202 21:40:38.526225 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 21:40:38 crc kubenswrapper[4789]: I0202 21:40:38.561625 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 21:40:38 crc kubenswrapper[4789]: I0202 21:40:38.582217 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 21:40:38 crc kubenswrapper[4789]: I0202 21:40:38.589428 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 21:40:38 crc kubenswrapper[4789]: E0202 21:40:38.589884 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8a53bc3-3ae7-4358-8574-1adcd8d4fefb" containerName="kube-state-metrics" Feb 02 21:40:38 crc kubenswrapper[4789]: I0202 21:40:38.589904 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8a53bc3-3ae7-4358-8574-1adcd8d4fefb" containerName="kube-state-metrics" Feb 02 21:40:38 crc kubenswrapper[4789]: I0202 21:40:38.590107 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8a53bc3-3ae7-4358-8574-1adcd8d4fefb" containerName="kube-state-metrics" Feb 02 21:40:38 crc kubenswrapper[4789]: I0202 21:40:38.590700 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 21:40:38 crc kubenswrapper[4789]: I0202 21:40:38.592851 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 02 21:40:38 crc kubenswrapper[4789]: I0202 21:40:38.593262 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 02 21:40:38 crc kubenswrapper[4789]: I0202 21:40:38.598065 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 21:40:38 crc kubenswrapper[4789]: I0202 21:40:38.656543 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/212c4e72-7988-4770-ba07-ae0362baac7e-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"212c4e72-7988-4770-ba07-ae0362baac7e\") " pod="openstack/kube-state-metrics-0" Feb 02 21:40:38 crc kubenswrapper[4789]: I0202 21:40:38.656627 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/212c4e72-7988-4770-ba07-ae0362baac7e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"212c4e72-7988-4770-ba07-ae0362baac7e\") " pod="openstack/kube-state-metrics-0" Feb 02 21:40:38 crc kubenswrapper[4789]: I0202 21:40:38.656650 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sb78\" (UniqueName: \"kubernetes.io/projected/212c4e72-7988-4770-ba07-ae0362baac7e-kube-api-access-5sb78\") pod \"kube-state-metrics-0\" (UID: \"212c4e72-7988-4770-ba07-ae0362baac7e\") " pod="openstack/kube-state-metrics-0" Feb 02 21:40:38 crc kubenswrapper[4789]: I0202 21:40:38.656759 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/212c4e72-7988-4770-ba07-ae0362baac7e-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"212c4e72-7988-4770-ba07-ae0362baac7e\") " pod="openstack/kube-state-metrics-0" Feb 02 21:40:38 crc kubenswrapper[4789]: I0202 21:40:38.757954 4789 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/212c4e72-7988-4770-ba07-ae0362baac7e-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"212c4e72-7988-4770-ba07-ae0362baac7e\") " pod="openstack/kube-state-metrics-0" Feb 02 21:40:38 crc kubenswrapper[4789]: I0202 21:40:38.758017 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/212c4e72-7988-4770-ba07-ae0362baac7e-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"212c4e72-7988-4770-ba07-ae0362baac7e\") " pod="openstack/kube-state-metrics-0" Feb 02 21:40:38 crc kubenswrapper[4789]: I0202 21:40:38.758060 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/212c4e72-7988-4770-ba07-ae0362baac7e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"212c4e72-7988-4770-ba07-ae0362baac7e\") " pod="openstack/kube-state-metrics-0" Feb 02 21:40:38 crc kubenswrapper[4789]: I0202 21:40:38.758079 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sb78\" (UniqueName: \"kubernetes.io/projected/212c4e72-7988-4770-ba07-ae0362baac7e-kube-api-access-5sb78\") pod \"kube-state-metrics-0\" (UID: \"212c4e72-7988-4770-ba07-ae0362baac7e\") " pod="openstack/kube-state-metrics-0" Feb 02 21:40:38 crc kubenswrapper[4789]: I0202 21:40:38.762107 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/212c4e72-7988-4770-ba07-ae0362baac7e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"212c4e72-7988-4770-ba07-ae0362baac7e\") " pod="openstack/kube-state-metrics-0" Feb 02 21:40:38 crc kubenswrapper[4789]: I0202 21:40:38.762172 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/212c4e72-7988-4770-ba07-ae0362baac7e-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"212c4e72-7988-4770-ba07-ae0362baac7e\") " pod="openstack/kube-state-metrics-0" Feb 02 21:40:38 crc kubenswrapper[4789]: I0202 21:40:38.768553 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/212c4e72-7988-4770-ba07-ae0362baac7e-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"212c4e72-7988-4770-ba07-ae0362baac7e\") " pod="openstack/kube-state-metrics-0" Feb 02 21:40:38 crc kubenswrapper[4789]: I0202 21:40:38.777298 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sb78\" (UniqueName: \"kubernetes.io/projected/212c4e72-7988-4770-ba07-ae0362baac7e-kube-api-access-5sb78\") pod \"kube-state-metrics-0\" (UID: \"212c4e72-7988-4770-ba07-ae0362baac7e\") " pod="openstack/kube-state-metrics-0" Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.023417 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.302522 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-xg9jg"] Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.304026 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-xg9jg" Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.323035 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-xg9jg"] Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.369196 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42jz8\" (UniqueName: \"kubernetes.io/projected/05dabff6-c489-4c3a-9030-4206f14e27fd-kube-api-access-42jz8\") pod \"nova-api-db-create-xg9jg\" (UID: \"05dabff6-c489-4c3a-9030-4206f14e27fd\") " pod="openstack/nova-api-db-create-xg9jg" Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.369545 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05dabff6-c489-4c3a-9030-4206f14e27fd-operator-scripts\") pod \"nova-api-db-create-xg9jg\" (UID: \"05dabff6-c489-4c3a-9030-4206f14e27fd\") " pod="openstack/nova-api-db-create-xg9jg" Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.375928 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-ac1d-account-create-update-7qrx7"] Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.380163 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ac1d-account-create-update-7qrx7" Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.417289 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ac1d-account-create-update-7qrx7"] Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.434254 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.475875 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05dabff6-c489-4c3a-9030-4206f14e27fd-operator-scripts\") pod \"nova-api-db-create-xg9jg\" (UID: \"05dabff6-c489-4c3a-9030-4206f14e27fd\") " pod="openstack/nova-api-db-create-xg9jg" Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.477121 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd7vk\" (UniqueName: \"kubernetes.io/projected/4be719bd-b5d3-4499-9e80-9d8055c1a8df-kube-api-access-jd7vk\") pod \"nova-api-ac1d-account-create-update-7qrx7\" (UID: \"4be719bd-b5d3-4499-9e80-9d8055c1a8df\") " pod="openstack/nova-api-ac1d-account-create-update-7qrx7" Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.477242 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42jz8\" (UniqueName: \"kubernetes.io/projected/05dabff6-c489-4c3a-9030-4206f14e27fd-kube-api-access-42jz8\") pod \"nova-api-db-create-xg9jg\" (UID: \"05dabff6-c489-4c3a-9030-4206f14e27fd\") " pod="openstack/nova-api-db-create-xg9jg" Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.477421 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4be719bd-b5d3-4499-9e80-9d8055c1a8df-operator-scripts\") pod \"nova-api-ac1d-account-create-update-7qrx7\" (UID: \"4be719bd-b5d3-4499-9e80-9d8055c1a8df\") " pod="openstack/nova-api-ac1d-account-create-update-7qrx7" Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.478843 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05dabff6-c489-4c3a-9030-4206f14e27fd-operator-scripts\") pod \"nova-api-db-create-xg9jg\" (UID: \"05dabff6-c489-4c3a-9030-4206f14e27fd\") " pod="openstack/nova-api-db-create-xg9jg" Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.490244 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-xdt8b"] Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.491454 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xdt8b" Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.505171 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-xdt8b"] Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.508072 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42jz8\" (UniqueName: \"kubernetes.io/projected/05dabff6-c489-4c3a-9030-4206f14e27fd-kube-api-access-42jz8\") pod \"nova-api-db-create-xg9jg\" (UID: \"05dabff6-c489-4c3a-9030-4206f14e27fd\") " pod="openstack/nova-api-db-create-xg9jg" Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.512880 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.556632 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9fbf342f-e489-4914-99d2-d2b5da9a3e75","Type":"ContainerStarted","Data":"2cef93e7502918b2fdc30d919f677a18e537af13a5ab8002948b7da723faab1a"} Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.556673 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9fbf342f-e489-4914-99d2-d2b5da9a3e75","Type":"ContainerStarted","Data":"338cf0d5d7efb9dbbb48b11890d5c138bba78aa9270e7f6c74ba60abb2882959"} Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.582173 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a1e8a56-47de-4a5e-b4f6-389ebf616658-operator-scripts\") pod \"nova-cell0-db-create-xdt8b\" (UID: \"4a1e8a56-47de-4a5e-b4f6-389ebf616658\") " pod="openstack/nova-cell0-db-create-xdt8b" Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.582297 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4be719bd-b5d3-4499-9e80-9d8055c1a8df-operator-scripts\") pod \"nova-api-ac1d-account-create-update-7qrx7\" (UID: \"4be719bd-b5d3-4499-9e80-9d8055c1a8df\") " pod="openstack/nova-api-ac1d-account-create-update-7qrx7" Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.582337 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b75bq\" (UniqueName: \"kubernetes.io/projected/4a1e8a56-47de-4a5e-b4f6-389ebf616658-kube-api-access-b75bq\") pod \"nova-cell0-db-create-xdt8b\" (UID: \"4a1e8a56-47de-4a5e-b4f6-389ebf616658\") " pod="openstack/nova-cell0-db-create-xdt8b" Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.582397 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jd7vk\" (UniqueName: \"kubernetes.io/projected/4be719bd-b5d3-4499-9e80-9d8055c1a8df-kube-api-access-jd7vk\") pod \"nova-api-ac1d-account-create-update-7qrx7\" (UID: \"4be719bd-b5d3-4499-9e80-9d8055c1a8df\") " 
pod="openstack/nova-api-ac1d-account-create-update-7qrx7" Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.583511 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4be719bd-b5d3-4499-9e80-9d8055c1a8df-operator-scripts\") pod \"nova-api-ac1d-account-create-update-7qrx7\" (UID: \"4be719bd-b5d3-4499-9e80-9d8055c1a8df\") " pod="openstack/nova-api-ac1d-account-create-update-7qrx7" Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.590627 4789 generic.go:334] "Generic (PLEG): container finished" podID="322d725c-ac03-4759-a08c-e534a70d1ec3" containerID="2a4f42975491d16296d64ff16671d19f3f7ca2af54f908f63979957cce03acbc" exitCode=0 Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.590714 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"322d725c-ac03-4759-a08c-e534a70d1ec3","Type":"ContainerDied","Data":"2a4f42975491d16296d64ff16671d19f3f7ca2af54f908f63979957cce03acbc"} Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.592691 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-725d-account-create-update-wv4nx"] Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.593888 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-725d-account-create-update-wv4nx" Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.597853 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"212c4e72-7988-4770-ba07-ae0362baac7e","Type":"ContainerStarted","Data":"061832fab16bcca4a9567dd47e3aa9f4f35856126e73b29d74d98749c45032a7"} Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.600031 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.626659 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jd7vk\" (UniqueName: \"kubernetes.io/projected/4be719bd-b5d3-4499-9e80-9d8055c1a8df-kube-api-access-jd7vk\") pod \"nova-api-ac1d-account-create-update-7qrx7\" (UID: \"4be719bd-b5d3-4499-9e80-9d8055c1a8df\") " pod="openstack/nova-api-ac1d-account-create-update-7qrx7" Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.642147 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-9zj6q"] Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.647621 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-9zj6q" Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.656034 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-725d-account-create-update-wv4nx"] Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.671755 4789 util.go:30] "No sandbox for pod can be found. 
Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.683445 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a1e8a56-47de-4a5e-b4f6-389ebf616658-operator-scripts\") pod \"nova-cell0-db-create-xdt8b\" (UID: \"4a1e8a56-47de-4a5e-b4f6-389ebf616658\") " pod="openstack/nova-cell0-db-create-xdt8b"
Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.683522 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfclb\" (UniqueName: \"kubernetes.io/projected/160db825-98d0-4663-80b5-1a50e382cfa5-kube-api-access-hfclb\") pod \"nova-cell0-725d-account-create-update-wv4nx\" (UID: \"160db825-98d0-4663-80b5-1a50e382cfa5\") " pod="openstack/nova-cell0-725d-account-create-update-wv4nx"
Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.683567 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/160db825-98d0-4663-80b5-1a50e382cfa5-operator-scripts\") pod \"nova-cell0-725d-account-create-update-wv4nx\" (UID: \"160db825-98d0-4663-80b5-1a50e382cfa5\") " pod="openstack/nova-cell0-725d-account-create-update-wv4nx"
Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.683608 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b75bq\" (UniqueName: \"kubernetes.io/projected/4a1e8a56-47de-4a5e-b4f6-389ebf616658-kube-api-access-b75bq\") pod \"nova-cell0-db-create-xdt8b\" (UID: \"4a1e8a56-47de-4a5e-b4f6-389ebf616658\") " pod="openstack/nova-cell0-db-create-xdt8b"
Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.684403 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a1e8a56-47de-4a5e-b4f6-389ebf616658-operator-scripts\") pod \"nova-cell0-db-create-xdt8b\" (UID: \"4a1e8a56-47de-4a5e-b4f6-389ebf616658\") " pod="openstack/nova-cell0-db-create-xdt8b"
Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.716131 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b75bq\" (UniqueName: \"kubernetes.io/projected/4a1e8a56-47de-4a5e-b4f6-389ebf616658-kube-api-access-b75bq\") pod \"nova-cell0-db-create-xdt8b\" (UID: \"4a1e8a56-47de-4a5e-b4f6-389ebf616658\") " pod="openstack/nova-cell0-db-create-xdt8b"
Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.719873 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-9zj6q"]
Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.770393 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ac1d-account-create-update-7qrx7"
Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.785429 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/160db825-98d0-4663-80b5-1a50e382cfa5-operator-scripts\") pod \"nova-cell0-725d-account-create-update-wv4nx\" (UID: \"160db825-98d0-4663-80b5-1a50e382cfa5\") " pod="openstack/nova-cell0-725d-account-create-update-wv4nx"
Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.785480 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a55dfeeb-4219-4e3f-834b-0f4de4381c96-operator-scripts\") pod \"nova-cell1-db-create-9zj6q\" (UID: \"a55dfeeb-4219-4e3f-834b-0f4de4381c96\") " pod="openstack/nova-cell1-db-create-9zj6q"
Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.785546 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wqtm\" (UniqueName: \"kubernetes.io/projected/a55dfeeb-4219-4e3f-834b-0f4de4381c96-kube-api-access-2wqtm\") pod \"nova-cell1-db-create-9zj6q\" (UID: \"a55dfeeb-4219-4e3f-834b-0f4de4381c96\") " pod="openstack/nova-cell1-db-create-9zj6q"
Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.786248 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/160db825-98d0-4663-80b5-1a50e382cfa5-operator-scripts\") pod \"nova-cell0-725d-account-create-update-wv4nx\" (UID: \"160db825-98d0-4663-80b5-1a50e382cfa5\") " pod="openstack/nova-cell0-725d-account-create-update-wv4nx"
Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.786945 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfclb\" (UniqueName: \"kubernetes.io/projected/160db825-98d0-4663-80b5-1a50e382cfa5-kube-api-access-hfclb\") pod \"nova-cell0-725d-account-create-update-wv4nx\" (UID: \"160db825-98d0-4663-80b5-1a50e382cfa5\") " pod="openstack/nova-cell0-725d-account-create-update-wv4nx"
Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.800796 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-3f0a-account-create-update-4kw96"]
Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.801880 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3f0a-account-create-update-4kw96"
Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.809555 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfclb\" (UniqueName: \"kubernetes.io/projected/160db825-98d0-4663-80b5-1a50e382cfa5-kube-api-access-hfclb\") pod \"nova-cell0-725d-account-create-update-wv4nx\" (UID: \"160db825-98d0-4663-80b5-1a50e382cfa5\") " pod="openstack/nova-cell0-725d-account-create-update-wv4nx"
Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.810779 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.816866 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3f0a-account-create-update-4kw96"]
Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.903657 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a55dfeeb-4219-4e3f-834b-0f4de4381c96-operator-scripts\") pod \"nova-cell1-db-create-9zj6q\" (UID: \"a55dfeeb-4219-4e3f-834b-0f4de4381c96\") " pod="openstack/nova-cell1-db-create-9zj6q"
Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.903950 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wqtm\" (UniqueName: \"kubernetes.io/projected/a55dfeeb-4219-4e3f-834b-0f4de4381c96-kube-api-access-2wqtm\") pod \"nova-cell1-db-create-9zj6q\" (UID: \"a55dfeeb-4219-4e3f-834b-0f4de4381c96\") " pod="openstack/nova-cell1-db-create-9zj6q"
Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.903976 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzp5m\" (UniqueName: \"kubernetes.io/projected/b833200c-e96b-4baa-9654-e7a3c07369e5-kube-api-access-xzp5m\") pod \"nova-cell1-3f0a-account-create-update-4kw96\" (UID: \"b833200c-e96b-4baa-9654-e7a3c07369e5\") " pod="openstack/nova-cell1-3f0a-account-create-update-4kw96"
Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.904017 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b833200c-e96b-4baa-9654-e7a3c07369e5-operator-scripts\") pod \"nova-cell1-3f0a-account-create-update-4kw96\" (UID: \"b833200c-e96b-4baa-9654-e7a3c07369e5\") " pod="openstack/nova-cell1-3f0a-account-create-update-4kw96"
Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.904769 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a55dfeeb-4219-4e3f-834b-0f4de4381c96-operator-scripts\") pod \"nova-cell1-db-create-9zj6q\" (UID: \"a55dfeeb-4219-4e3f-834b-0f4de4381c96\") " pod="openstack/nova-cell1-db-create-9zj6q"
Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.904907 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-xdt8b" Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.951124 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wqtm\" (UniqueName: \"kubernetes.io/projected/a55dfeeb-4219-4e3f-834b-0f4de4381c96-kube-api-access-2wqtm\") pod \"nova-cell1-db-create-9zj6q\" (UID: \"a55dfeeb-4219-4e3f-834b-0f4de4381c96\") " pod="openstack/nova-cell1-db-create-9zj6q" Feb 02 21:40:39 crc kubenswrapper[4789]: I0202 21:40:39.968607 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-725d-account-create-update-wv4nx" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.006675 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzp5m\" (UniqueName: \"kubernetes.io/projected/b833200c-e96b-4baa-9654-e7a3c07369e5-kube-api-access-xzp5m\") pod \"nova-cell1-3f0a-account-create-update-4kw96\" (UID: \"b833200c-e96b-4baa-9654-e7a3c07369e5\") " pod="openstack/nova-cell1-3f0a-account-create-update-4kw96" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.006793 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b833200c-e96b-4baa-9654-e7a3c07369e5-operator-scripts\") pod \"nova-cell1-3f0a-account-create-update-4kw96\" (UID: \"b833200c-e96b-4baa-9654-e7a3c07369e5\") " pod="openstack/nova-cell1-3f0a-account-create-update-4kw96" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.007419 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b833200c-e96b-4baa-9654-e7a3c07369e5-operator-scripts\") pod \"nova-cell1-3f0a-account-create-update-4kw96\" (UID: \"b833200c-e96b-4baa-9654-e7a3c07369e5\") " pod="openstack/nova-cell1-3f0a-account-create-update-4kw96" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.049060 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzp5m\" (UniqueName: \"kubernetes.io/projected/b833200c-e96b-4baa-9654-e7a3c07369e5-kube-api-access-xzp5m\") pod \"nova-cell1-3f0a-account-create-update-4kw96\" (UID: \"b833200c-e96b-4baa-9654-e7a3c07369e5\") " pod="openstack/nova-cell1-3f0a-account-create-update-4kw96" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.077670 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-9zj6q" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.123381 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3f0a-account-create-update-4kw96" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.144306 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-xg9jg"] Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.164270 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.312419 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/322d725c-ac03-4759-a08c-e534a70d1ec3-public-tls-certs\") pod \"322d725c-ac03-4759-a08c-e534a70d1ec3\" (UID: \"322d725c-ac03-4759-a08c-e534a70d1ec3\") " Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.312461 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/322d725c-ac03-4759-a08c-e534a70d1ec3-httpd-run\") pod \"322d725c-ac03-4759-a08c-e534a70d1ec3\" (UID: \"322d725c-ac03-4759-a08c-e534a70d1ec3\") " Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.312504 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/322d725c-ac03-4759-a08c-e534a70d1ec3-config-data\") pod \"322d725c-ac03-4759-a08c-e534a70d1ec3\" (UID: \"322d725c-ac03-4759-a08c-e534a70d1ec3\") " Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.312557 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zggzl\" (UniqueName: \"kubernetes.io/projected/322d725c-ac03-4759-a08c-e534a70d1ec3-kube-api-access-zggzl\") pod \"322d725c-ac03-4759-a08c-e534a70d1ec3\" (UID: \"322d725c-ac03-4759-a08c-e534a70d1ec3\") " Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.312732 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/322d725c-ac03-4759-a08c-e534a70d1ec3-scripts\") pod \"322d725c-ac03-4759-a08c-e534a70d1ec3\" (UID: \"322d725c-ac03-4759-a08c-e534a70d1ec3\") " Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.312769 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/322d725c-ac03-4759-a08c-e534a70d1ec3-logs\") pod \"322d725c-ac03-4759-a08c-e534a70d1ec3\" (UID: \"322d725c-ac03-4759-a08c-e534a70d1ec3\") " Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.312788 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"322d725c-ac03-4759-a08c-e534a70d1ec3\" (UID: \"322d725c-ac03-4759-a08c-e534a70d1ec3\") " Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.312805 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/322d725c-ac03-4759-a08c-e534a70d1ec3-combined-ca-bundle\") pod \"322d725c-ac03-4759-a08c-e534a70d1ec3\" (UID: \"322d725c-ac03-4759-a08c-e534a70d1ec3\") " Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.315480 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/322d725c-ac03-4759-a08c-e534a70d1ec3-logs" (OuterVolumeSpecName: "logs") pod "322d725c-ac03-4759-a08c-e534a70d1ec3" (UID: "322d725c-ac03-4759-a08c-e534a70d1ec3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.316257 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/322d725c-ac03-4759-a08c-e534a70d1ec3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "322d725c-ac03-4759-a08c-e534a70d1ec3" (UID: "322d725c-ac03-4759-a08c-e534a70d1ec3"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.325734 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/322d725c-ac03-4759-a08c-e534a70d1ec3-scripts" (OuterVolumeSpecName: "scripts") pod "322d725c-ac03-4759-a08c-e534a70d1ec3" (UID: "322d725c-ac03-4759-a08c-e534a70d1ec3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.330908 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "322d725c-ac03-4759-a08c-e534a70d1ec3" (UID: "322d725c-ac03-4759-a08c-e534a70d1ec3"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.338553 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/322d725c-ac03-4759-a08c-e534a70d1ec3-kube-api-access-zggzl" (OuterVolumeSpecName: "kube-api-access-zggzl") pod "322d725c-ac03-4759-a08c-e534a70d1ec3" (UID: "322d725c-ac03-4759-a08c-e534a70d1ec3"). InnerVolumeSpecName "kube-api-access-zggzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.400370 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/322d725c-ac03-4759-a08c-e534a70d1ec3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "322d725c-ac03-4759-a08c-e534a70d1ec3" (UID: "322d725c-ac03-4759-a08c-e534a70d1ec3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.405480 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/322d725c-ac03-4759-a08c-e534a70d1ec3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "322d725c-ac03-4759-a08c-e534a70d1ec3" (UID: "322d725c-ac03-4759-a08c-e534a70d1ec3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.428058 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/322d725c-ac03-4759-a08c-e534a70d1ec3-config-data" (OuterVolumeSpecName: "config-data") pod "322d725c-ac03-4759-a08c-e534a70d1ec3" (UID: "322d725c-ac03-4759-a08c-e534a70d1ec3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.439613 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/322d725c-ac03-4759-a08c-e534a70d1ec3-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.439643 4789 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/322d725c-ac03-4759-a08c-e534a70d1ec3-logs\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.439668 4789 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.439682 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/322d725c-ac03-4759-a08c-e534a70d1ec3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.439694 4789 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/322d725c-ac03-4759-a08c-e534a70d1ec3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.439703 4789 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/322d725c-ac03-4759-a08c-e534a70d1ec3-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.439714 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/322d725c-ac03-4759-a08c-e534a70d1ec3-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.439726 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zggzl\" (UniqueName: \"kubernetes.io/projected/322d725c-ac03-4759-a08c-e534a70d1ec3-kube-api-access-zggzl\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.465077 4789 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.466219 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8a53bc3-3ae7-4358-8574-1adcd8d4fefb" path="/var/lib/kubelet/pods/b8a53bc3-3ae7-4358-8574-1adcd8d4fefb/volumes" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.543201 4789 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.589349 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.625624 4789 generic.go:334] "Generic (PLEG): container finished" podID="dafa0ec8-f504-4174-b323-2a2d9f09ffb8" containerID="a69d96438857fda4e73a9d99a5ee151881ca4a7f917490f142b5ff27614c0058" exitCode=0 Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.625689 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dafa0ec8-f504-4174-b323-2a2d9f09ffb8","Type":"ContainerDied","Data":"a69d96438857fda4e73a9d99a5ee151881ca4a7f917490f142b5ff27614c0058"} Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.625719 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dafa0ec8-f504-4174-b323-2a2d9f09ffb8","Type":"ContainerDied","Data":"a4abf86fd97c494fbe2833e450b4cf1cb5725b3d0217bc8fe62563cc198cb670"} Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.625739 4789 scope.go:117] "RemoveContainer" containerID="a69d96438857fda4e73a9d99a5ee151881ca4a7f917490f142b5ff27614c0058" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.625880 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.638335 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"212c4e72-7988-4770-ba07-ae0362baac7e","Type":"ContainerStarted","Data":"a8f8731d69214821017cbca7eb7712c56bfef4b649bbc850b8c1ced28aa04dc5"} Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.638784 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.643844 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-xg9jg" event={"ID":"05dabff6-c489-4c3a-9030-4206f14e27fd","Type":"ContainerStarted","Data":"31cb564d5503d24a074fb5871aa516ae2843819e6096357ded09ae326cf07b00"} Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.644023 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-xg9jg" event={"ID":"05dabff6-c489-4c3a-9030-4206f14e27fd","Type":"ContainerStarted","Data":"26698b7916a079f2c88f52dd654807d7851c1fa70d7cd0fc814ef50848044875"} Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.648555 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"322d725c-ac03-4759-a08c-e534a70d1ec3","Type":"ContainerDied","Data":"0bdac393df3c9587fe88dfbc075583684c398f614c50dc66848eee071ccbe9ec"} Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.648623 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.654404 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.134096628 podStartE2EDuration="2.654393039s" podCreationTimestamp="2026-02-02 21:40:38 +0000 UTC" firstStartedPulling="2026-02-02 21:40:39.510806754 +0000 UTC m=+1259.805831773" lastFinishedPulling="2026-02-02 21:40:40.031103165 +0000 UTC m=+1260.326128184" observedRunningTime="2026-02-02 21:40:40.652929568 +0000 UTC m=+1260.947954587" watchObservedRunningTime="2026-02-02 21:40:40.654393039 +0000 UTC m=+1260.949418058" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.655261 4789 scope.go:117] "RemoveContainer" containerID="2fd71c7d0b224173b8bd6353601c10b697c5e97dfffbd457bae31ef0668e15e1" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.675892 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-xg9jg" podStartSLOduration=1.6758709870000001 podStartE2EDuration="1.675870987s" podCreationTimestamp="2026-02-02 21:40:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:40:40.67137489 +0000 UTC m=+1260.966399909" watchObservedRunningTime="2026-02-02 21:40:40.675870987 +0000 UTC m=+1260.970896006" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.697861 4789 scope.go:117] "RemoveContainer" containerID="a69d96438857fda4e73a9d99a5ee151881ca4a7f917490f142b5ff27614c0058" Feb 02 21:40:40 crc kubenswrapper[4789]: E0202 21:40:40.698404 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a69d96438857fda4e73a9d99a5ee151881ca4a7f917490f142b5ff27614c0058\": container with ID starting with a69d96438857fda4e73a9d99a5ee151881ca4a7f917490f142b5ff27614c0058 not found: ID does not exist" containerID="a69d96438857fda4e73a9d99a5ee151881ca4a7f917490f142b5ff27614c0058" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.698441 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a69d96438857fda4e73a9d99a5ee151881ca4a7f917490f142b5ff27614c0058"} err="failed to get container status \"a69d96438857fda4e73a9d99a5ee151881ca4a7f917490f142b5ff27614c0058\": rpc error: code = NotFound desc = could not find container \"a69d96438857fda4e73a9d99a5ee151881ca4a7f917490f142b5ff27614c0058\": container with ID starting with a69d96438857fda4e73a9d99a5ee151881ca4a7f917490f142b5ff27614c0058 not found: ID does not exist" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.698467 4789 scope.go:117] "RemoveContainer" containerID="2fd71c7d0b224173b8bd6353601c10b697c5e97dfffbd457bae31ef0668e15e1" Feb 02 21:40:40 crc kubenswrapper[4789]: E0202 21:40:40.700464 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fd71c7d0b224173b8bd6353601c10b697c5e97dfffbd457bae31ef0668e15e1\": container with ID starting with 2fd71c7d0b224173b8bd6353601c10b697c5e97dfffbd457bae31ef0668e15e1 not found: ID does not exist" containerID="2fd71c7d0b224173b8bd6353601c10b697c5e97dfffbd457bae31ef0668e15e1" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.700495 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fd71c7d0b224173b8bd6353601c10b697c5e97dfffbd457bae31ef0668e15e1"} 
err="failed to get container status \"2fd71c7d0b224173b8bd6353601c10b697c5e97dfffbd457bae31ef0668e15e1\": rpc error: code = NotFound desc = could not find container \"2fd71c7d0b224173b8bd6353601c10b697c5e97dfffbd457bae31ef0668e15e1\": container with ID starting with 2fd71c7d0b224173b8bd6353601c10b697c5e97dfffbd457bae31ef0668e15e1 not found: ID does not exist" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.700513 4789 scope.go:117] "RemoveContainer" containerID="2a4f42975491d16296d64ff16671d19f3f7ca2af54f908f63979957cce03acbc" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.707416 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.728204 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.736491 4789 scope.go:117] "RemoveContainer" containerID="bc11dcab315b199d84e1245aed796e1917ddb88b3f9915b4433d912d44938385" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.745523 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 21:40:40 crc kubenswrapper[4789]: E0202 21:40:40.746014 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dafa0ec8-f504-4174-b323-2a2d9f09ffb8" containerName="glance-log" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.746158 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="dafa0ec8-f504-4174-b323-2a2d9f09ffb8" containerName="glance-log" Feb 02 21:40:40 crc kubenswrapper[4789]: E0202 21:40:40.746218 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="322d725c-ac03-4759-a08c-e534a70d1ec3" containerName="glance-httpd" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.746267 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="322d725c-ac03-4759-a08c-e534a70d1ec3" containerName="glance-httpd" Feb 02 21:40:40 crc kubenswrapper[4789]: E0202 21:40:40.746321 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dafa0ec8-f504-4174-b323-2a2d9f09ffb8" containerName="glance-httpd" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.746372 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="dafa0ec8-f504-4174-b323-2a2d9f09ffb8" containerName="glance-httpd" Feb 02 21:40:40 crc kubenswrapper[4789]: E0202 21:40:40.746430 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="322d725c-ac03-4759-a08c-e534a70d1ec3" containerName="glance-log" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.746483 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="322d725c-ac03-4759-a08c-e534a70d1ec3" containerName="glance-log" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.746761 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="322d725c-ac03-4759-a08c-e534a70d1ec3" containerName="glance-httpd" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.746850 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="322d725c-ac03-4759-a08c-e534a70d1ec3" containerName="glance-log" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.746923 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="dafa0ec8-f504-4174-b323-2a2d9f09ffb8" containerName="glance-log" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.746985 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="dafa0ec8-f504-4174-b323-2a2d9f09ffb8" 
containerName="glance-httpd" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.748052 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.751427 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"dafa0ec8-f504-4174-b323-2a2d9f09ffb8\" (UID: \"dafa0ec8-f504-4174-b323-2a2d9f09ffb8\") " Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.751517 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-config-data\") pod \"dafa0ec8-f504-4174-b323-2a2d9f09ffb8\" (UID: \"dafa0ec8-f504-4174-b323-2a2d9f09ffb8\") " Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.751594 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-httpd-run\") pod \"dafa0ec8-f504-4174-b323-2a2d9f09ffb8\" (UID: \"dafa0ec8-f504-4174-b323-2a2d9f09ffb8\") " Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.751612 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-logs\") pod \"dafa0ec8-f504-4174-b323-2a2d9f09ffb8\" (UID: \"dafa0ec8-f504-4174-b323-2a2d9f09ffb8\") " Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.751636 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-combined-ca-bundle\") pod \"dafa0ec8-f504-4174-b323-2a2d9f09ffb8\" (UID: \"dafa0ec8-f504-4174-b323-2a2d9f09ffb8\") " Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.751655 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-internal-tls-certs\") pod \"dafa0ec8-f504-4174-b323-2a2d9f09ffb8\" (UID: \"dafa0ec8-f504-4174-b323-2a2d9f09ffb8\") " Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.751875 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.752373 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.752911 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "dafa0ec8-f504-4174-b323-2a2d9f09ffb8" (UID: "dafa0ec8-f504-4174-b323-2a2d9f09ffb8"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.752988 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf92j\" (UniqueName: \"kubernetes.io/projected/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-kube-api-access-bf92j\") pod \"dafa0ec8-f504-4174-b323-2a2d9f09ffb8\" (UID: \"dafa0ec8-f504-4174-b323-2a2d9f09ffb8\") " Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.753118 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-scripts\") pod \"dafa0ec8-f504-4174-b323-2a2d9f09ffb8\" (UID: \"dafa0ec8-f504-4174-b323-2a2d9f09ffb8\") " Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.754666 4789 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.753120 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-logs" (OuterVolumeSpecName: "logs") pod "dafa0ec8-f504-4174-b323-2a2d9f09ffb8" (UID: "dafa0ec8-f504-4174-b323-2a2d9f09ffb8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.769789 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.775768 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-scripts" (OuterVolumeSpecName: "scripts") pod "dafa0ec8-f504-4174-b323-2a2d9f09ffb8" (UID: "dafa0ec8-f504-4174-b323-2a2d9f09ffb8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.775850 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-kube-api-access-bf92j" (OuterVolumeSpecName: "kube-api-access-bf92j") pod "dafa0ec8-f504-4174-b323-2a2d9f09ffb8" (UID: "dafa0ec8-f504-4174-b323-2a2d9f09ffb8"). InnerVolumeSpecName "kube-api-access-bf92j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.775944 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "dafa0ec8-f504-4174-b323-2a2d9f09ffb8" (UID: "dafa0ec8-f504-4174-b323-2a2d9f09ffb8"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.817881 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dafa0ec8-f504-4174-b323-2a2d9f09ffb8" (UID: "dafa0ec8-f504-4174-b323-2a2d9f09ffb8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.846923 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "dafa0ec8-f504-4174-b323-2a2d9f09ffb8" (UID: "dafa0ec8-f504-4174-b323-2a2d9f09ffb8"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.847091 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-config-data" (OuterVolumeSpecName: "config-data") pod "dafa0ec8-f504-4174-b323-2a2d9f09ffb8" (UID: "dafa0ec8-f504-4174-b323-2a2d9f09ffb8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.858135 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bb81567-8536-4275-ab0e-a003ef904230-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3bb81567-8536-4275-ab0e-a003ef904230\") " pod="openstack/glance-default-external-api-0" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.858193 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stsll\" (UniqueName: \"kubernetes.io/projected/3bb81567-8536-4275-ab0e-a003ef904230-kube-api-access-stsll\") pod \"glance-default-external-api-0\" (UID: \"3bb81567-8536-4275-ab0e-a003ef904230\") " pod="openstack/glance-default-external-api-0" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.858222 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bb81567-8536-4275-ab0e-a003ef904230-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3bb81567-8536-4275-ab0e-a003ef904230\") " pod="openstack/glance-default-external-api-0" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.858243 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bb81567-8536-4275-ab0e-a003ef904230-logs\") pod \"glance-default-external-api-0\" (UID: \"3bb81567-8536-4275-ab0e-a003ef904230\") " pod="openstack/glance-default-external-api-0" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.858265 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3bb81567-8536-4275-ab0e-a003ef904230-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3bb81567-8536-4275-ab0e-a003ef904230\") " pod="openstack/glance-default-external-api-0" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.858285 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"3bb81567-8536-4275-ab0e-a003ef904230\") " pod="openstack/glance-default-external-api-0" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.858307 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/3bb81567-8536-4275-ab0e-a003ef904230-scripts\") pod \"glance-default-external-api-0\" (UID: \"3bb81567-8536-4275-ab0e-a003ef904230\") " pod="openstack/glance-default-external-api-0" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.858346 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bb81567-8536-4275-ab0e-a003ef904230-config-data\") pod \"glance-default-external-api-0\" (UID: \"3bb81567-8536-4275-ab0e-a003ef904230\") " pod="openstack/glance-default-external-api-0" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.858396 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.858418 4789 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.858427 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.858435 4789 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-logs\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.858442 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.858452 4789 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.858460 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf92j\" (UniqueName: \"kubernetes.io/projected/dafa0ec8-f504-4174-b323-2a2d9f09ffb8-kube-api-access-bf92j\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.890915 4789 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.960558 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stsll\" (UniqueName: \"kubernetes.io/projected/3bb81567-8536-4275-ab0e-a003ef904230-kube-api-access-stsll\") pod \"glance-default-external-api-0\" (UID: \"3bb81567-8536-4275-ab0e-a003ef904230\") " pod="openstack/glance-default-external-api-0" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.960761 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bb81567-8536-4275-ab0e-a003ef904230-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3bb81567-8536-4275-ab0e-a003ef904230\") " pod="openstack/glance-default-external-api-0" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.960881 4789 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bb81567-8536-4275-ab0e-a003ef904230-logs\") pod \"glance-default-external-api-0\" (UID: \"3bb81567-8536-4275-ab0e-a003ef904230\") " pod="openstack/glance-default-external-api-0" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.960984 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3bb81567-8536-4275-ab0e-a003ef904230-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3bb81567-8536-4275-ab0e-a003ef904230\") " pod="openstack/glance-default-external-api-0" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.961094 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"3bb81567-8536-4275-ab0e-a003ef904230\") " pod="openstack/glance-default-external-api-0" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.961227 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bb81567-8536-4275-ab0e-a003ef904230-scripts\") pod \"glance-default-external-api-0\" (UID: \"3bb81567-8536-4275-ab0e-a003ef904230\") " pod="openstack/glance-default-external-api-0" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.961369 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bb81567-8536-4275-ab0e-a003ef904230-config-data\") pod \"glance-default-external-api-0\" (UID: \"3bb81567-8536-4275-ab0e-a003ef904230\") " pod="openstack/glance-default-external-api-0" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.961556 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bb81567-8536-4275-ab0e-a003ef904230-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3bb81567-8536-4275-ab0e-a003ef904230\") " pod="openstack/glance-default-external-api-0" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.961750 4789 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.961657 4789 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"3bb81567-8536-4275-ab0e-a003ef904230\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.961922 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bb81567-8536-4275-ab0e-a003ef904230-logs\") pod \"glance-default-external-api-0\" (UID: \"3bb81567-8536-4275-ab0e-a003ef904230\") " pod="openstack/glance-default-external-api-0" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.961959 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3bb81567-8536-4275-ab0e-a003ef904230-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3bb81567-8536-4275-ab0e-a003ef904230\") " 
pod="openstack/glance-default-external-api-0" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.966195 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bb81567-8536-4275-ab0e-a003ef904230-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3bb81567-8536-4275-ab0e-a003ef904230\") " pod="openstack/glance-default-external-api-0" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.968024 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bb81567-8536-4275-ab0e-a003ef904230-scripts\") pod \"glance-default-external-api-0\" (UID: \"3bb81567-8536-4275-ab0e-a003ef904230\") " pod="openstack/glance-default-external-api-0" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.975363 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bb81567-8536-4275-ab0e-a003ef904230-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3bb81567-8536-4275-ab0e-a003ef904230\") " pod="openstack/glance-default-external-api-0" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.988933 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.995170 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stsll\" (UniqueName: \"kubernetes.io/projected/3bb81567-8536-4275-ab0e-a003ef904230-kube-api-access-stsll\") pod \"glance-default-external-api-0\" (UID: \"3bb81567-8536-4275-ab0e-a003ef904230\") " pod="openstack/glance-default-external-api-0" Feb 02 21:40:40 crc kubenswrapper[4789]: I0202 21:40:40.999055 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bb81567-8536-4275-ab0e-a003ef904230-config-data\") pod \"glance-default-external-api-0\" (UID: \"3bb81567-8536-4275-ab0e-a003ef904230\") " pod="openstack/glance-default-external-api-0" Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.026645 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.065835 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.067332 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.069342 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"3bb81567-8536-4275-ab0e-a003ef904230\") " pod="openstack/glance-default-external-api-0" Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.069719 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.070046 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.084448 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.089680 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.097637 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-9zj6q"] Feb 02 21:40:41 crc kubenswrapper[4789]: W0202 21:40:41.098698 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a1e8a56_47de_4a5e_b4f6_389ebf616658.slice/crio-8ce384ef88bc82674c6e94da8dcdf5c0996b62b1d6a3d12ac22c63de527efb8f WatchSource:0}: Error finding container 8ce384ef88bc82674c6e94da8dcdf5c0996b62b1d6a3d12ac22c63de527efb8f: Status 404 returned error can't find the container with id 8ce384ef88bc82674c6e94da8dcdf5c0996b62b1d6a3d12ac22c63de527efb8f Feb 02 21:40:41 crc kubenswrapper[4789]: W0202 21:40:41.102175 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod160db825_98d0_4663_80b5_1a50e382cfa5.slice/crio-a263b1c0a2054a79bcff712f82f8aab602bf8d8fc47039b3c694ba749c0102fc WatchSource:0}: Error finding container a263b1c0a2054a79bcff712f82f8aab602bf8d8fc47039b3c694ba749c0102fc: Status 404 returned error can't find the container with id a263b1c0a2054a79bcff712f82f8aab602bf8d8fc47039b3c694ba749c0102fc Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.109667 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ac1d-account-create-update-7qrx7"] Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.111330 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.122973 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-xdt8b"] Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.140803 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.168722 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.168793 4789 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt6bh\" (UniqueName: \"kubernetes.io/projected/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-kube-api-access-jt6bh\") pod \"glance-default-internal-api-0\" (UID: \"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.168817 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.168855 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.168876 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-logs\") pod \"glance-default-internal-api-0\" (UID: \"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.168974 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-scripts\") pod \"glance-default-internal-api-0\" (UID: \"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.169008 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.169180 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-config-data\") pod \"glance-default-internal-api-0\" (UID: \"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.170657 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-725d-account-create-update-wv4nx"] Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.218010 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3f0a-account-create-update-4kw96"] Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.257647 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.274175 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.274228 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt6bh\" (UniqueName: \"kubernetes.io/projected/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-kube-api-access-jt6bh\") pod \"glance-default-internal-api-0\" (UID: \"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.274259 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.274279 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.274300 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-logs\") pod \"glance-default-internal-api-0\" (UID: \"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.274367 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-scripts\") pod \"glance-default-internal-api-0\" (UID: \"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.274409 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.274466 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-config-data\") pod \"glance-default-internal-api-0\" (UID: \"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.275908 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff\") " pod="openstack/glance-default-internal-api-0" Feb 02 21:40:41 crc 
Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.278991 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-logs\") pod \"glance-default-internal-api-0\" (UID: \"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff\") " pod="openstack/glance-default-internal-api-0"
Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.291066 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff\") " pod="openstack/glance-default-internal-api-0"
Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.294045 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-scripts\") pod \"glance-default-internal-api-0\" (UID: \"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff\") " pod="openstack/glance-default-internal-api-0"
Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.294661 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff\") " pod="openstack/glance-default-internal-api-0"
Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.295083 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-config-data\") pod \"glance-default-internal-api-0\" (UID: \"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff\") " pod="openstack/glance-default-internal-api-0"
Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.309441 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt6bh\" (UniqueName: \"kubernetes.io/projected/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-kube-api-access-jt6bh\") pod \"glance-default-internal-api-0\" (UID: \"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff\") " pod="openstack/glance-default-internal-api-0"
Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.350336 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff\") " pod="openstack/glance-default-internal-api-0"
Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.408438 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.672074 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9zj6q" event={"ID":"a55dfeeb-4219-4e3f-834b-0f4de4381c96","Type":"ContainerStarted","Data":"4d415a493748a5fe09f699514591958e89913c004ba0a9b4078c606869ecb7de"}
Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.672330 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9zj6q" event={"ID":"a55dfeeb-4219-4e3f-834b-0f4de4381c96","Type":"ContainerStarted","Data":"2539503a51deef89b3639543dc8ea524500a6ff2980c93a9a0a528a60042756a"}
Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.684018 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-725d-account-create-update-wv4nx" event={"ID":"160db825-98d0-4663-80b5-1a50e382cfa5","Type":"ContainerStarted","Data":"f0215d16c08c102f787f13d2c2da456f3ba5286566c5ccacad8f32a59f3affce"}
Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.684067 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-725d-account-create-update-wv4nx" event={"ID":"160db825-98d0-4663-80b5-1a50e382cfa5","Type":"ContainerStarted","Data":"a263b1c0a2054a79bcff712f82f8aab602bf8d8fc47039b3c694ba749c0102fc"}
Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.696832 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xdt8b" event={"ID":"4a1e8a56-47de-4a5e-b4f6-389ebf616658","Type":"ContainerStarted","Data":"0eca0dc9b0d9a1bd83046a0f3570f8bb83aedec5f2d1b6428ad5f16255c8d458"}
Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.696865 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xdt8b" event={"ID":"4a1e8a56-47de-4a5e-b4f6-389ebf616658","Type":"ContainerStarted","Data":"8ce384ef88bc82674c6e94da8dcdf5c0996b62b1d6a3d12ac22c63de527efb8f"}
Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.699362 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-9zj6q" podStartSLOduration=2.6993439329999998 podStartE2EDuration="2.699343933s" podCreationTimestamp="2026-02-02 21:40:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:40:41.687651833 +0000 UTC m=+1261.982676852" watchObservedRunningTime="2026-02-02 21:40:41.699343933 +0000 UTC m=+1261.994368952"
Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.713847 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-725d-account-create-update-wv4nx" podStartSLOduration=2.713832043 podStartE2EDuration="2.713832043s" podCreationTimestamp="2026-02-02 21:40:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:40:41.709869331 +0000 UTC m=+1262.004894350" watchObservedRunningTime="2026-02-02 21:40:41.713832043 +0000 UTC m=+1262.008857052"
Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.717291 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ac1d-account-create-update-7qrx7" event={"ID":"4be719bd-b5d3-4499-9e80-9d8055c1a8df","Type":"ContainerStarted","Data":"2b6de69f9e66b84935d1feec95db1e8c1077e1b7f5201ec276390cf510290679"}
Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.717360 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ac1d-account-create-update-7qrx7" event={"ID":"4be719bd-b5d3-4499-9e80-9d8055c1a8df","Type":"ContainerStarted","Data":"fe6674a16e94a1b89808884f0717771dd97a3b933a69992765162cb31409568a"}
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ac1d-account-create-update-7qrx7" event={"ID":"4be719bd-b5d3-4499-9e80-9d8055c1a8df","Type":"ContainerStarted","Data":"fe6674a16e94a1b89808884f0717771dd97a3b933a69992765162cb31409568a"} Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.721124 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3f0a-account-create-update-4kw96" event={"ID":"b833200c-e96b-4baa-9654-e7a3c07369e5","Type":"ContainerStarted","Data":"9e08ef420908bb5471cb646abacab871ac3bb7adf1ac462a377f792d5691f1fb"} Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.721172 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3f0a-account-create-update-4kw96" event={"ID":"b833200c-e96b-4baa-9654-e7a3c07369e5","Type":"ContainerStarted","Data":"89954c8d0a4f2b4f0456ce2973ab0d925fc51a65b27180a994a17c06f1afa679"} Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.727011 4789 generic.go:334] "Generic (PLEG): container finished" podID="05dabff6-c489-4c3a-9030-4206f14e27fd" containerID="31cb564d5503d24a074fb5871aa516ae2843819e6096357ded09ae326cf07b00" exitCode=0 Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.727263 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-xg9jg" event={"ID":"05dabff6-c489-4c3a-9030-4206f14e27fd","Type":"ContainerDied","Data":"31cb564d5503d24a074fb5871aa516ae2843819e6096357ded09ae326cf07b00"} Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.739792 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-xdt8b" podStartSLOduration=2.739776047 podStartE2EDuration="2.739776047s" podCreationTimestamp="2026-02-02 21:40:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:40:41.737104941 +0000 UTC m=+1262.032129960" watchObservedRunningTime="2026-02-02 21:40:41.739776047 +0000 UTC m=+1262.034801066" Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.755553 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-3f0a-account-create-update-4kw96" podStartSLOduration=2.755534543 podStartE2EDuration="2.755534543s" podCreationTimestamp="2026-02-02 21:40:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:40:41.755134332 +0000 UTC m=+1262.050159351" watchObservedRunningTime="2026-02-02 21:40:41.755534543 +0000 UTC m=+1262.050559562" Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.777182 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-ac1d-account-create-update-7qrx7" podStartSLOduration=2.777167075 podStartE2EDuration="2.777167075s" podCreationTimestamp="2026-02-02 21:40:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:40:41.770492086 +0000 UTC m=+1262.065517105" watchObservedRunningTime="2026-02-02 21:40:41.777167075 +0000 UTC m=+1262.072192094" Feb 02 21:40:41 crc kubenswrapper[4789]: I0202 21:40:41.874630 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 21:40:41 crc kubenswrapper[4789]: W0202 21:40:41.964237 4789 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3bb81567_8536_4275_ab0e_a003ef904230.slice/crio-767ead280a730bc38964cad1cff17fed926d00c1769f11adb821fd37e93cd85d WatchSource:0}: Error finding container 767ead280a730bc38964cad1cff17fed926d00c1769f11adb821fd37e93cd85d: Status 404 returned error can't find the container with id 767ead280a730bc38964cad1cff17fed926d00c1769f11adb821fd37e93cd85d Feb 02 21:40:42 crc kubenswrapper[4789]: I0202 21:40:42.072347 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 21:40:42 crc kubenswrapper[4789]: W0202 21:40:42.098687 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24fb18f4_7a0f_4ae5_9104_e7dc45a479ff.slice/crio-c9280513d93dad9efb291ad4119b0e757638973d0ae971412188329924d4be13 WatchSource:0}: Error finding container c9280513d93dad9efb291ad4119b0e757638973d0ae971412188329924d4be13: Status 404 returned error can't find the container with id c9280513d93dad9efb291ad4119b0e757638973d0ae971412188329924d4be13 Feb 02 21:40:42 crc kubenswrapper[4789]: I0202 21:40:42.430090 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="322d725c-ac03-4759-a08c-e534a70d1ec3" path="/var/lib/kubelet/pods/322d725c-ac03-4759-a08c-e534a70d1ec3/volumes" Feb 02 21:40:42 crc kubenswrapper[4789]: I0202 21:40:42.431239 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dafa0ec8-f504-4174-b323-2a2d9f09ffb8" path="/var/lib/kubelet/pods/dafa0ec8-f504-4174-b323-2a2d9f09ffb8/volumes" Feb 02 21:40:42 crc kubenswrapper[4789]: I0202 21:40:42.805011 4789 generic.go:334] "Generic (PLEG): container finished" podID="160db825-98d0-4663-80b5-1a50e382cfa5" containerID="f0215d16c08c102f787f13d2c2da456f3ba5286566c5ccacad8f32a59f3affce" exitCode=0 Feb 02 21:40:42 crc kubenswrapper[4789]: I0202 21:40:42.805335 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-725d-account-create-update-wv4nx" event={"ID":"160db825-98d0-4663-80b5-1a50e382cfa5","Type":"ContainerDied","Data":"f0215d16c08c102f787f13d2c2da456f3ba5286566c5ccacad8f32a59f3affce"} Feb 02 21:40:42 crc kubenswrapper[4789]: I0202 21:40:42.825944 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff","Type":"ContainerStarted","Data":"8d41fcf5f05241ca690bf9be181cdbc0af2afc9c357aaaa7b133a7a3685d2601"} Feb 02 21:40:42 crc kubenswrapper[4789]: I0202 21:40:42.825989 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff","Type":"ContainerStarted","Data":"c9280513d93dad9efb291ad4119b0e757638973d0ae971412188329924d4be13"} Feb 02 21:40:42 crc kubenswrapper[4789]: I0202 21:40:42.868748 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9fbf342f-e489-4914-99d2-d2b5da9a3e75","Type":"ContainerStarted","Data":"4d19e9acfcd069d33568dd5eb37918d764af6cd982550a47d0a1b5bc56158aa0"} Feb 02 21:40:42 crc kubenswrapper[4789]: I0202 21:40:42.868910 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9fbf342f-e489-4914-99d2-d2b5da9a3e75" containerName="ceilometer-central-agent" containerID="cri-o://3da3de917d02b853103d72b4a2655d21606b0e94c091cada13d7340c8a07120d" gracePeriod=30 Feb 02 21:40:42 crc kubenswrapper[4789]: I0202 21:40:42.869196 
Feb 02 21:40:42 crc kubenswrapper[4789]: I0202 21:40:42.869428 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9fbf342f-e489-4914-99d2-d2b5da9a3e75" containerName="proxy-httpd" containerID="cri-o://4d19e9acfcd069d33568dd5eb37918d764af6cd982550a47d0a1b5bc56158aa0" gracePeriod=30
Feb 02 21:40:42 crc kubenswrapper[4789]: I0202 21:40:42.869470 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9fbf342f-e489-4914-99d2-d2b5da9a3e75" containerName="sg-core" containerID="cri-o://2cef93e7502918b2fdc30d919f677a18e537af13a5ab8002948b7da723faab1a" gracePeriod=30
Feb 02 21:40:42 crc kubenswrapper[4789]: I0202 21:40:42.869502 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9fbf342f-e489-4914-99d2-d2b5da9a3e75" containerName="ceilometer-notification-agent" containerID="cri-o://338cf0d5d7efb9dbbb48b11890d5c138bba78aa9270e7f6c74ba60abb2882959" gracePeriod=30
Feb 02 21:40:42 crc kubenswrapper[4789]: I0202 21:40:42.887806 4789 generic.go:334] "Generic (PLEG): container finished" podID="a55dfeeb-4219-4e3f-834b-0f4de4381c96" containerID="4d415a493748a5fe09f699514591958e89913c004ba0a9b4078c606869ecb7de" exitCode=0
Feb 02 21:40:42 crc kubenswrapper[4789]: I0202 21:40:42.887896 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9zj6q" event={"ID":"a55dfeeb-4219-4e3f-834b-0f4de4381c96","Type":"ContainerDied","Data":"4d415a493748a5fe09f699514591958e89913c004ba0a9b4078c606869ecb7de"}
Feb 02 21:40:42 crc kubenswrapper[4789]: I0202 21:40:42.892190 4789 generic.go:334] "Generic (PLEG): container finished" podID="4be719bd-b5d3-4499-9e80-9d8055c1a8df" containerID="2b6de69f9e66b84935d1feec95db1e8c1077e1b7f5201ec276390cf510290679" exitCode=0
Feb 02 21:40:42 crc kubenswrapper[4789]: I0202 21:40:42.892242 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ac1d-account-create-update-7qrx7" event={"ID":"4be719bd-b5d3-4499-9e80-9d8055c1a8df","Type":"ContainerDied","Data":"2b6de69f9e66b84935d1feec95db1e8c1077e1b7f5201ec276390cf510290679"}
Feb 02 21:40:42 crc kubenswrapper[4789]: I0202 21:40:42.907573 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.457616088 podStartE2EDuration="6.907556887s" podCreationTimestamp="2026-02-02 21:40:36 +0000 UTC" firstStartedPulling="2026-02-02 21:40:37.514315579 +0000 UTC m=+1257.809340598" lastFinishedPulling="2026-02-02 21:40:41.964256378 +0000 UTC m=+1262.259281397" observedRunningTime="2026-02-02 21:40:42.903644706 +0000 UTC m=+1263.198669725" watchObservedRunningTime="2026-02-02 21:40:42.907556887 +0000 UTC m=+1263.202581906"
Feb 02 21:40:42 crc kubenswrapper[4789]: I0202 21:40:42.915976 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3bb81567-8536-4275-ab0e-a003ef904230","Type":"ContainerStarted","Data":"ce9ef55c9302edded2a55530533656268a3c7b21b0ae936aae0892ef6e043554"}
Feb 02 21:40:42 crc kubenswrapper[4789]: I0202 21:40:42.916015 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3bb81567-8536-4275-ab0e-a003ef904230","Type":"ContainerStarted","Data":"767ead280a730bc38964cad1cff17fed926d00c1769f11adb821fd37e93cd85d"}
Feb 02 21:40:42 crc kubenswrapper[4789]: I0202 21:40:42.939154 4789 generic.go:334] "Generic (PLEG): container finished" podID="b833200c-e96b-4baa-9654-e7a3c07369e5" containerID="9e08ef420908bb5471cb646abacab871ac3bb7adf1ac462a377f792d5691f1fb" exitCode=0
Feb 02 21:40:42 crc kubenswrapper[4789]: I0202 21:40:42.939271 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3f0a-account-create-update-4kw96" event={"ID":"b833200c-e96b-4baa-9654-e7a3c07369e5","Type":"ContainerDied","Data":"9e08ef420908bb5471cb646abacab871ac3bb7adf1ac462a377f792d5691f1fb"}
Feb 02 21:40:42 crc kubenswrapper[4789]: I0202 21:40:42.961060 4789 generic.go:334] "Generic (PLEG): container finished" podID="4a1e8a56-47de-4a5e-b4f6-389ebf616658" containerID="0eca0dc9b0d9a1bd83046a0f3570f8bb83aedec5f2d1b6428ad5f16255c8d458" exitCode=0
Feb 02 21:40:42 crc kubenswrapper[4789]: I0202 21:40:42.961293 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xdt8b" event={"ID":"4a1e8a56-47de-4a5e-b4f6-389ebf616658","Type":"ContainerDied","Data":"0eca0dc9b0d9a1bd83046a0f3570f8bb83aedec5f2d1b6428ad5f16255c8d458"}
Feb 02 21:40:43 crc kubenswrapper[4789]: I0202 21:40:43.258650 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-xg9jg"
Feb 02 21:40:43 crc kubenswrapper[4789]: I0202 21:40:43.335973 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05dabff6-c489-4c3a-9030-4206f14e27fd-operator-scripts\") pod \"05dabff6-c489-4c3a-9030-4206f14e27fd\" (UID: \"05dabff6-c489-4c3a-9030-4206f14e27fd\") "
Feb 02 21:40:43 crc kubenswrapper[4789]: I0202 21:40:43.336285 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42jz8\" (UniqueName: \"kubernetes.io/projected/05dabff6-c489-4c3a-9030-4206f14e27fd-kube-api-access-42jz8\") pod \"05dabff6-c489-4c3a-9030-4206f14e27fd\" (UID: \"05dabff6-c489-4c3a-9030-4206f14e27fd\") "
Feb 02 21:40:43 crc kubenswrapper[4789]: I0202 21:40:43.337108 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05dabff6-c489-4c3a-9030-4206f14e27fd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "05dabff6-c489-4c3a-9030-4206f14e27fd" (UID: "05dabff6-c489-4c3a-9030-4206f14e27fd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 21:40:43 crc kubenswrapper[4789]: I0202 21:40:43.341567 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05dabff6-c489-4c3a-9030-4206f14e27fd-kube-api-access-42jz8" (OuterVolumeSpecName: "kube-api-access-42jz8") pod "05dabff6-c489-4c3a-9030-4206f14e27fd" (UID: "05dabff6-c489-4c3a-9030-4206f14e27fd"). InnerVolumeSpecName "kube-api-access-42jz8". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:40:43 crc kubenswrapper[4789]: I0202 21:40:43.438261 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42jz8\" (UniqueName: \"kubernetes.io/projected/05dabff6-c489-4c3a-9030-4206f14e27fd-kube-api-access-42jz8\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:43 crc kubenswrapper[4789]: I0202 21:40:43.438498 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05dabff6-c489-4c3a-9030-4206f14e27fd-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:43 crc kubenswrapper[4789]: I0202 21:40:43.972913 4789 generic.go:334] "Generic (PLEG): container finished" podID="9fbf342f-e489-4914-99d2-d2b5da9a3e75" containerID="4d19e9acfcd069d33568dd5eb37918d764af6cd982550a47d0a1b5bc56158aa0" exitCode=0 Feb 02 21:40:43 crc kubenswrapper[4789]: I0202 21:40:43.973235 4789 generic.go:334] "Generic (PLEG): container finished" podID="9fbf342f-e489-4914-99d2-d2b5da9a3e75" containerID="2cef93e7502918b2fdc30d919f677a18e537af13a5ab8002948b7da723faab1a" exitCode=2 Feb 02 21:40:43 crc kubenswrapper[4789]: I0202 21:40:43.973248 4789 generic.go:334] "Generic (PLEG): container finished" podID="9fbf342f-e489-4914-99d2-d2b5da9a3e75" containerID="338cf0d5d7efb9dbbb48b11890d5c138bba78aa9270e7f6c74ba60abb2882959" exitCode=0 Feb 02 21:40:43 crc kubenswrapper[4789]: I0202 21:40:43.972994 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9fbf342f-e489-4914-99d2-d2b5da9a3e75","Type":"ContainerDied","Data":"4d19e9acfcd069d33568dd5eb37918d764af6cd982550a47d0a1b5bc56158aa0"} Feb 02 21:40:43 crc kubenswrapper[4789]: I0202 21:40:43.973315 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9fbf342f-e489-4914-99d2-d2b5da9a3e75","Type":"ContainerDied","Data":"2cef93e7502918b2fdc30d919f677a18e537af13a5ab8002948b7da723faab1a"} Feb 02 21:40:43 crc kubenswrapper[4789]: I0202 21:40:43.973332 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9fbf342f-e489-4914-99d2-d2b5da9a3e75","Type":"ContainerDied","Data":"338cf0d5d7efb9dbbb48b11890d5c138bba78aa9270e7f6c74ba60abb2882959"} Feb 02 21:40:43 crc kubenswrapper[4789]: I0202 21:40:43.975365 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff","Type":"ContainerStarted","Data":"a473f8d31f1d21a7c2b382a1e23b8b88890e3aa22648e9737f24020491949fe0"} Feb 02 21:40:43 crc kubenswrapper[4789]: I0202 21:40:43.977500 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-xg9jg" event={"ID":"05dabff6-c489-4c3a-9030-4206f14e27fd","Type":"ContainerDied","Data":"26698b7916a079f2c88f52dd654807d7851c1fa70d7cd0fc814ef50848044875"} Feb 02 21:40:43 crc kubenswrapper[4789]: I0202 21:40:43.977525 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-xg9jg" Feb 02 21:40:43 crc kubenswrapper[4789]: I0202 21:40:43.977538 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26698b7916a079f2c88f52dd654807d7851c1fa70d7cd0fc814ef50848044875" Feb 02 21:40:43 crc kubenswrapper[4789]: I0202 21:40:43.980405 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3bb81567-8536-4275-ab0e-a003ef904230","Type":"ContainerStarted","Data":"d9549a00930229585c1a660c46c1ee179871330062dec64c5947fd34ad7860f5"} Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.038108 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.038086752 podStartE2EDuration="4.038086752s" podCreationTimestamp="2026-02-02 21:40:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:40:44.02667444 +0000 UTC m=+1264.321699459" watchObservedRunningTime="2026-02-02 21:40:44.038086752 +0000 UTC m=+1264.333111791" Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.051416 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.051393349 podStartE2EDuration="4.051393349s" podCreationTimestamp="2026-02-02 21:40:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:40:44.045568014 +0000 UTC m=+1264.340593033" watchObservedRunningTime="2026-02-02 21:40:44.051393349 +0000 UTC m=+1264.346418368" Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.394787 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-9zj6q" Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.463431 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a55dfeeb-4219-4e3f-834b-0f4de4381c96-operator-scripts\") pod \"a55dfeeb-4219-4e3f-834b-0f4de4381c96\" (UID: \"a55dfeeb-4219-4e3f-834b-0f4de4381c96\") " Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.463596 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wqtm\" (UniqueName: \"kubernetes.io/projected/a55dfeeb-4219-4e3f-834b-0f4de4381c96-kube-api-access-2wqtm\") pod \"a55dfeeb-4219-4e3f-834b-0f4de4381c96\" (UID: \"a55dfeeb-4219-4e3f-834b-0f4de4381c96\") " Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.464410 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a55dfeeb-4219-4e3f-834b-0f4de4381c96-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a55dfeeb-4219-4e3f-834b-0f4de4381c96" (UID: "a55dfeeb-4219-4e3f-834b-0f4de4381c96"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.485756 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a55dfeeb-4219-4e3f-834b-0f4de4381c96-kube-api-access-2wqtm" (OuterVolumeSpecName: "kube-api-access-2wqtm") pod "a55dfeeb-4219-4e3f-834b-0f4de4381c96" (UID: "a55dfeeb-4219-4e3f-834b-0f4de4381c96"). InnerVolumeSpecName "kube-api-access-2wqtm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.566289 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a55dfeeb-4219-4e3f-834b-0f4de4381c96-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.566332 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wqtm\" (UniqueName: \"kubernetes.io/projected/a55dfeeb-4219-4e3f-834b-0f4de4381c96-kube-api-access-2wqtm\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.611662 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-725d-account-create-update-wv4nx" Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.617874 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3f0a-account-create-update-4kw96" Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.624026 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-64bb487f87-44hz8" Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.624534 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ac1d-account-create-update-7qrx7" Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.633778 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xdt8b" Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.636973 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-64bb487f87-44hz8" Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.667104 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzp5m\" (UniqueName: \"kubernetes.io/projected/b833200c-e96b-4baa-9654-e7a3c07369e5-kube-api-access-xzp5m\") pod \"b833200c-e96b-4baa-9654-e7a3c07369e5\" (UID: \"b833200c-e96b-4baa-9654-e7a3c07369e5\") " Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.667341 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b75bq\" (UniqueName: \"kubernetes.io/projected/4a1e8a56-47de-4a5e-b4f6-389ebf616658-kube-api-access-b75bq\") pod \"4a1e8a56-47de-4a5e-b4f6-389ebf616658\" (UID: \"4a1e8a56-47de-4a5e-b4f6-389ebf616658\") " Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.667508 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4be719bd-b5d3-4499-9e80-9d8055c1a8df-operator-scripts\") pod \"4be719bd-b5d3-4499-9e80-9d8055c1a8df\" (UID: \"4be719bd-b5d3-4499-9e80-9d8055c1a8df\") " Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.667622 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jd7vk\" (UniqueName: \"kubernetes.io/projected/4be719bd-b5d3-4499-9e80-9d8055c1a8df-kube-api-access-jd7vk\") pod \"4be719bd-b5d3-4499-9e80-9d8055c1a8df\" (UID: \"4be719bd-b5d3-4499-9e80-9d8055c1a8df\") " Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.667708 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/160db825-98d0-4663-80b5-1a50e382cfa5-operator-scripts\") pod \"160db825-98d0-4663-80b5-1a50e382cfa5\" (UID: 
\"160db825-98d0-4663-80b5-1a50e382cfa5\") " Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.667857 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfclb\" (UniqueName: \"kubernetes.io/projected/160db825-98d0-4663-80b5-1a50e382cfa5-kube-api-access-hfclb\") pod \"160db825-98d0-4663-80b5-1a50e382cfa5\" (UID: \"160db825-98d0-4663-80b5-1a50e382cfa5\") " Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.667953 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b833200c-e96b-4baa-9654-e7a3c07369e5-operator-scripts\") pod \"b833200c-e96b-4baa-9654-e7a3c07369e5\" (UID: \"b833200c-e96b-4baa-9654-e7a3c07369e5\") " Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.668043 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a1e8a56-47de-4a5e-b4f6-389ebf616658-operator-scripts\") pod \"4a1e8a56-47de-4a5e-b4f6-389ebf616658\" (UID: \"4a1e8a56-47de-4a5e-b4f6-389ebf616658\") " Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.670978 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b833200c-e96b-4baa-9654-e7a3c07369e5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b833200c-e96b-4baa-9654-e7a3c07369e5" (UID: "b833200c-e96b-4baa-9654-e7a3c07369e5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.671416 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/160db825-98d0-4663-80b5-1a50e382cfa5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "160db825-98d0-4663-80b5-1a50e382cfa5" (UID: "160db825-98d0-4663-80b5-1a50e382cfa5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.672160 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a1e8a56-47de-4a5e-b4f6-389ebf616658-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4a1e8a56-47de-4a5e-b4f6-389ebf616658" (UID: "4a1e8a56-47de-4a5e-b4f6-389ebf616658"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.672219 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4be719bd-b5d3-4499-9e80-9d8055c1a8df-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4be719bd-b5d3-4499-9e80-9d8055c1a8df" (UID: "4be719bd-b5d3-4499-9e80-9d8055c1a8df"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.677705 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4be719bd-b5d3-4499-9e80-9d8055c1a8df-kube-api-access-jd7vk" (OuterVolumeSpecName: "kube-api-access-jd7vk") pod "4be719bd-b5d3-4499-9e80-9d8055c1a8df" (UID: "4be719bd-b5d3-4499-9e80-9d8055c1a8df"). InnerVolumeSpecName "kube-api-access-jd7vk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.691799 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/160db825-98d0-4663-80b5-1a50e382cfa5-kube-api-access-hfclb" (OuterVolumeSpecName: "kube-api-access-hfclb") pod "160db825-98d0-4663-80b5-1a50e382cfa5" (UID: "160db825-98d0-4663-80b5-1a50e382cfa5"). InnerVolumeSpecName "kube-api-access-hfclb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.691968 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b833200c-e96b-4baa-9654-e7a3c07369e5-kube-api-access-xzp5m" (OuterVolumeSpecName: "kube-api-access-xzp5m") pod "b833200c-e96b-4baa-9654-e7a3c07369e5" (UID: "b833200c-e96b-4baa-9654-e7a3c07369e5"). InnerVolumeSpecName "kube-api-access-xzp5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.695744 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a1e8a56-47de-4a5e-b4f6-389ebf616658-kube-api-access-b75bq" (OuterVolumeSpecName: "kube-api-access-b75bq") pod "4a1e8a56-47de-4a5e-b4f6-389ebf616658" (UID: "4a1e8a56-47de-4a5e-b4f6-389ebf616658"). InnerVolumeSpecName "kube-api-access-b75bq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.771265 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfclb\" (UniqueName: \"kubernetes.io/projected/160db825-98d0-4663-80b5-1a50e382cfa5-kube-api-access-hfclb\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.771299 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b833200c-e96b-4baa-9654-e7a3c07369e5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.771308 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a1e8a56-47de-4a5e-b4f6-389ebf616658-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.771317 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzp5m\" (UniqueName: \"kubernetes.io/projected/b833200c-e96b-4baa-9654-e7a3c07369e5-kube-api-access-xzp5m\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.771326 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b75bq\" (UniqueName: \"kubernetes.io/projected/4a1e8a56-47de-4a5e-b4f6-389ebf616658-kube-api-access-b75bq\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.771335 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4be719bd-b5d3-4499-9e80-9d8055c1a8df-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.771344 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jd7vk\" (UniqueName: \"kubernetes.io/projected/4be719bd-b5d3-4499-9e80-9d8055c1a8df-kube-api-access-jd7vk\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.771352 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/160db825-98d0-4663-80b5-1a50e382cfa5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.989451 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-9zj6q" Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.989447 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9zj6q" event={"ID":"a55dfeeb-4219-4e3f-834b-0f4de4381c96","Type":"ContainerDied","Data":"2539503a51deef89b3639543dc8ea524500a6ff2980c93a9a0a528a60042756a"} Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.989937 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2539503a51deef89b3639543dc8ea524500a6ff2980c93a9a0a528a60042756a" Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.991008 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-725d-account-create-update-wv4nx" event={"ID":"160db825-98d0-4663-80b5-1a50e382cfa5","Type":"ContainerDied","Data":"a263b1c0a2054a79bcff712f82f8aab602bf8d8fc47039b3c694ba749c0102fc"} Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.991032 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-725d-account-create-update-wv4nx" Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.991038 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a263b1c0a2054a79bcff712f82f8aab602bf8d8fc47039b3c694ba749c0102fc" Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.992778 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xdt8b" Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.992775 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xdt8b" event={"ID":"4a1e8a56-47de-4a5e-b4f6-389ebf616658","Type":"ContainerDied","Data":"8ce384ef88bc82674c6e94da8dcdf5c0996b62b1d6a3d12ac22c63de527efb8f"} Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.992984 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ce384ef88bc82674c6e94da8dcdf5c0996b62b1d6a3d12ac22c63de527efb8f" Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.994772 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ac1d-account-create-update-7qrx7" event={"ID":"4be719bd-b5d3-4499-9e80-9d8055c1a8df","Type":"ContainerDied","Data":"fe6674a16e94a1b89808884f0717771dd97a3b933a69992765162cb31409568a"} Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.994801 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe6674a16e94a1b89808884f0717771dd97a3b933a69992765162cb31409568a" Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.994779 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-ac1d-account-create-update-7qrx7" Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.996271 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3f0a-account-create-update-4kw96" event={"ID":"b833200c-e96b-4baa-9654-e7a3c07369e5","Type":"ContainerDied","Data":"89954c8d0a4f2b4f0456ce2973ab0d925fc51a65b27180a994a17c06f1afa679"} Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.996304 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89954c8d0a4f2b4f0456ce2973ab0d925fc51a65b27180a994a17c06f1afa679" Feb 02 21:40:44 crc kubenswrapper[4789]: I0202 21:40:44.996356 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3f0a-account-create-update-4kw96" Feb 02 21:40:45 crc kubenswrapper[4789]: I0202 21:40:45.766013 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7c4994f5f-462kb" Feb 02 21:40:45 crc kubenswrapper[4789]: I0202 21:40:45.874442 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5f5c98b5b4-lj8fk"] Feb 02 21:40:45 crc kubenswrapper[4789]: I0202 21:40:45.875533 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5f5c98b5b4-lj8fk" podUID="44bf258b-7d3e-4f0f-8a92-c71ba94c022e" containerName="neutron-httpd" containerID="cri-o://a2d524b577d11f277bdb94e2bccad39399ecec6cda872e4edc1e618b0c1296b3" gracePeriod=30 Feb 02 21:40:45 crc kubenswrapper[4789]: I0202 21:40:45.875130 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5f5c98b5b4-lj8fk" podUID="44bf258b-7d3e-4f0f-8a92-c71ba94c022e" containerName="neutron-api" containerID="cri-o://8f773f2a9b77d6911f8067507b3caf8237f65b491b720c9baee3adbfa16f899e" gracePeriod=30 Feb 02 21:40:47 crc kubenswrapper[4789]: I0202 21:40:47.031387 4789 generic.go:334] "Generic (PLEG): container finished" podID="9fbf342f-e489-4914-99d2-d2b5da9a3e75" containerID="3da3de917d02b853103d72b4a2655d21606b0e94c091cada13d7340c8a07120d" exitCode=0 Feb 02 21:40:47 crc kubenswrapper[4789]: I0202 21:40:47.031467 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9fbf342f-e489-4914-99d2-d2b5da9a3e75","Type":"ContainerDied","Data":"3da3de917d02b853103d72b4a2655d21606b0e94c091cada13d7340c8a07120d"} Feb 02 21:40:47 crc kubenswrapper[4789]: I0202 21:40:47.037844 4789 generic.go:334] "Generic (PLEG): container finished" podID="44bf258b-7d3e-4f0f-8a92-c71ba94c022e" containerID="a2d524b577d11f277bdb94e2bccad39399ecec6cda872e4edc1e618b0c1296b3" exitCode=0 Feb 02 21:40:47 crc kubenswrapper[4789]: I0202 21:40:47.037885 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f5c98b5b4-lj8fk" event={"ID":"44bf258b-7d3e-4f0f-8a92-c71ba94c022e","Type":"ContainerDied","Data":"a2d524b577d11f277bdb94e2bccad39399ecec6cda872e4edc1e618b0c1296b3"} Feb 02 21:40:48 crc kubenswrapper[4789]: I0202 21:40:48.051090 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9fbf342f-e489-4914-99d2-d2b5da9a3e75","Type":"ContainerDied","Data":"b456ca077c446b37143dd2847877295e046e152a000a6409622b37b8780c638e"} Feb 02 21:40:48 crc kubenswrapper[4789]: I0202 21:40:48.051355 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b456ca077c446b37143dd2847877295e046e152a000a6409622b37b8780c638e" Feb 02 21:40:48 crc kubenswrapper[4789]: 
I0202 21:40:48.069346 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 21:40:48 crc kubenswrapper[4789]: I0202 21:40:48.237025 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fbf342f-e489-4914-99d2-d2b5da9a3e75-config-data\") pod \"9fbf342f-e489-4914-99d2-d2b5da9a3e75\" (UID: \"9fbf342f-e489-4914-99d2-d2b5da9a3e75\") " Feb 02 21:40:48 crc kubenswrapper[4789]: I0202 21:40:48.237093 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fbf342f-e489-4914-99d2-d2b5da9a3e75-scripts\") pod \"9fbf342f-e489-4914-99d2-d2b5da9a3e75\" (UID: \"9fbf342f-e489-4914-99d2-d2b5da9a3e75\") " Feb 02 21:40:48 crc kubenswrapper[4789]: I0202 21:40:48.237143 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9fbf342f-e489-4914-99d2-d2b5da9a3e75-sg-core-conf-yaml\") pod \"9fbf342f-e489-4914-99d2-d2b5da9a3e75\" (UID: \"9fbf342f-e489-4914-99d2-d2b5da9a3e75\") " Feb 02 21:40:48 crc kubenswrapper[4789]: I0202 21:40:48.237180 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fbf342f-e489-4914-99d2-d2b5da9a3e75-combined-ca-bundle\") pod \"9fbf342f-e489-4914-99d2-d2b5da9a3e75\" (UID: \"9fbf342f-e489-4914-99d2-d2b5da9a3e75\") " Feb 02 21:40:48 crc kubenswrapper[4789]: I0202 21:40:48.237219 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9fbf342f-e489-4914-99d2-d2b5da9a3e75-log-httpd\") pod \"9fbf342f-e489-4914-99d2-d2b5da9a3e75\" (UID: \"9fbf342f-e489-4914-99d2-d2b5da9a3e75\") " Feb 02 21:40:48 crc kubenswrapper[4789]: I0202 21:40:48.237295 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrp48\" (UniqueName: \"kubernetes.io/projected/9fbf342f-e489-4914-99d2-d2b5da9a3e75-kube-api-access-mrp48\") pod \"9fbf342f-e489-4914-99d2-d2b5da9a3e75\" (UID: \"9fbf342f-e489-4914-99d2-d2b5da9a3e75\") " Feb 02 21:40:48 crc kubenswrapper[4789]: I0202 21:40:48.237334 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9fbf342f-e489-4914-99d2-d2b5da9a3e75-run-httpd\") pod \"9fbf342f-e489-4914-99d2-d2b5da9a3e75\" (UID: \"9fbf342f-e489-4914-99d2-d2b5da9a3e75\") " Feb 02 21:40:48 crc kubenswrapper[4789]: I0202 21:40:48.237975 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fbf342f-e489-4914-99d2-d2b5da9a3e75-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9fbf342f-e489-4914-99d2-d2b5da9a3e75" (UID: "9fbf342f-e489-4914-99d2-d2b5da9a3e75"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:40:48 crc kubenswrapper[4789]: I0202 21:40:48.239654 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fbf342f-e489-4914-99d2-d2b5da9a3e75-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9fbf342f-e489-4914-99d2-d2b5da9a3e75" (UID: "9fbf342f-e489-4914-99d2-d2b5da9a3e75"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:40:48 crc kubenswrapper[4789]: I0202 21:40:48.246102 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fbf342f-e489-4914-99d2-d2b5da9a3e75-kube-api-access-mrp48" (OuterVolumeSpecName: "kube-api-access-mrp48") pod "9fbf342f-e489-4914-99d2-d2b5da9a3e75" (UID: "9fbf342f-e489-4914-99d2-d2b5da9a3e75"). InnerVolumeSpecName "kube-api-access-mrp48". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:40:48 crc kubenswrapper[4789]: I0202 21:40:48.260784 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fbf342f-e489-4914-99d2-d2b5da9a3e75-scripts" (OuterVolumeSpecName: "scripts") pod "9fbf342f-e489-4914-99d2-d2b5da9a3e75" (UID: "9fbf342f-e489-4914-99d2-d2b5da9a3e75"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:40:48 crc kubenswrapper[4789]: I0202 21:40:48.278914 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fbf342f-e489-4914-99d2-d2b5da9a3e75-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9fbf342f-e489-4914-99d2-d2b5da9a3e75" (UID: "9fbf342f-e489-4914-99d2-d2b5da9a3e75"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:40:48 crc kubenswrapper[4789]: I0202 21:40:48.339766 4789 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9fbf342f-e489-4914-99d2-d2b5da9a3e75-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:48 crc kubenswrapper[4789]: I0202 21:40:48.339797 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fbf342f-e489-4914-99d2-d2b5da9a3e75-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:48 crc kubenswrapper[4789]: I0202 21:40:48.339806 4789 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9fbf342f-e489-4914-99d2-d2b5da9a3e75-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:48 crc kubenswrapper[4789]: I0202 21:40:48.339815 4789 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9fbf342f-e489-4914-99d2-d2b5da9a3e75-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:48 crc kubenswrapper[4789]: I0202 21:40:48.339823 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrp48\" (UniqueName: \"kubernetes.io/projected/9fbf342f-e489-4914-99d2-d2b5da9a3e75-kube-api-access-mrp48\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:48 crc kubenswrapper[4789]: I0202 21:40:48.353370 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fbf342f-e489-4914-99d2-d2b5da9a3e75-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9fbf342f-e489-4914-99d2-d2b5da9a3e75" (UID: "9fbf342f-e489-4914-99d2-d2b5da9a3e75"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:40:48 crc kubenswrapper[4789]: I0202 21:40:48.368400 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fbf342f-e489-4914-99d2-d2b5da9a3e75-config-data" (OuterVolumeSpecName: "config-data") pod "9fbf342f-e489-4914-99d2-d2b5da9a3e75" (UID: "9fbf342f-e489-4914-99d2-d2b5da9a3e75"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:40:48 crc kubenswrapper[4789]: I0202 21:40:48.441148 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fbf342f-e489-4914-99d2-d2b5da9a3e75-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:48 crc kubenswrapper[4789]: I0202 21:40:48.441177 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fbf342f-e489-4914-99d2-d2b5da9a3e75-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:40:48 crc kubenswrapper[4789]: I0202 21:40:48.592524 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-68d9498c68-84jcz" Feb 02 21:40:48 crc kubenswrapper[4789]: I0202 21:40:48.626954 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-68d9498c68-84jcz" Feb 02 21:40:48 crc kubenswrapper[4789]: I0202 21:40:48.700506 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-59cf4774f6-v75lt"] Feb 02 21:40:48 crc kubenswrapper[4789]: I0202 21:40:48.700812 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-59cf4774f6-v75lt" podUID="d3e262bc-73ec-4c9b-adbb-9c20b7a6286b" containerName="placement-log" containerID="cri-o://b05e234fc983691d25347715e5ae6456b8853c259473be0f1c5126f4b8c3898a" gracePeriod=30 Feb 02 21:40:48 crc kubenswrapper[4789]: I0202 21:40:48.700965 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-59cf4774f6-v75lt" podUID="d3e262bc-73ec-4c9b-adbb-9c20b7a6286b" containerName="placement-api" containerID="cri-o://45cdfbba5277d68703ad6c24734a170b3eb079fbf6417f6b1a3641194e032e79" gracePeriod=30 Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.032755 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.067878 4789 generic.go:334] "Generic (PLEG): container finished" podID="d3e262bc-73ec-4c9b-adbb-9c20b7a6286b" containerID="b05e234fc983691d25347715e5ae6456b8853c259473be0f1c5126f4b8c3898a" exitCode=143 Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.068652 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-59cf4774f6-v75lt" event={"ID":"d3e262bc-73ec-4c9b-adbb-9c20b7a6286b","Type":"ContainerDied","Data":"b05e234fc983691d25347715e5ae6456b8853c259473be0f1c5126f4b8c3898a"} Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.068711 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.111676 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.122619 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.140183 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 02 21:40:49 crc kubenswrapper[4789]: E0202 21:40:49.140640 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a1e8a56-47de-4a5e-b4f6-389ebf616658" containerName="mariadb-database-create"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.140658 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a1e8a56-47de-4a5e-b4f6-389ebf616658" containerName="mariadb-database-create"
Feb 02 21:40:49 crc kubenswrapper[4789]: E0202 21:40:49.140675 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fbf342f-e489-4914-99d2-d2b5da9a3e75" containerName="ceilometer-notification-agent"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.140684 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fbf342f-e489-4914-99d2-d2b5da9a3e75" containerName="ceilometer-notification-agent"
Feb 02 21:40:49 crc kubenswrapper[4789]: E0202 21:40:49.140696 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b833200c-e96b-4baa-9654-e7a3c07369e5" containerName="mariadb-account-create-update"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.140703 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="b833200c-e96b-4baa-9654-e7a3c07369e5" containerName="mariadb-account-create-update"
Feb 02 21:40:49 crc kubenswrapper[4789]: E0202 21:40:49.140714 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05dabff6-c489-4c3a-9030-4206f14e27fd" containerName="mariadb-database-create"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.140719 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="05dabff6-c489-4c3a-9030-4206f14e27fd" containerName="mariadb-database-create"
Feb 02 21:40:49 crc kubenswrapper[4789]: E0202 21:40:49.140729 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a55dfeeb-4219-4e3f-834b-0f4de4381c96" containerName="mariadb-database-create"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.140734 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="a55dfeeb-4219-4e3f-834b-0f4de4381c96" containerName="mariadb-database-create"
Feb 02 21:40:49 crc kubenswrapper[4789]: E0202 21:40:49.140751 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fbf342f-e489-4914-99d2-d2b5da9a3e75" containerName="proxy-httpd"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.140757 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fbf342f-e489-4914-99d2-d2b5da9a3e75" containerName="proxy-httpd"
Feb 02 21:40:49 crc kubenswrapper[4789]: E0202 21:40:49.140767 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="160db825-98d0-4663-80b5-1a50e382cfa5" containerName="mariadb-account-create-update"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.140772 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="160db825-98d0-4663-80b5-1a50e382cfa5" containerName="mariadb-account-create-update"
Feb 02 21:40:49 crc kubenswrapper[4789]: E0202 21:40:49.140785 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4be719bd-b5d3-4499-9e80-9d8055c1a8df" containerName="mariadb-account-create-update"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.140790 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="4be719bd-b5d3-4499-9e80-9d8055c1a8df" containerName="mariadb-account-create-update"
Feb 02 21:40:49 crc kubenswrapper[4789]: E0202 21:40:49.140802 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fbf342f-e489-4914-99d2-d2b5da9a3e75" containerName="sg-core"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.140807 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fbf342f-e489-4914-99d2-d2b5da9a3e75" containerName="sg-core"
Feb 02 21:40:49 crc kubenswrapper[4789]: E0202 21:40:49.140816 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fbf342f-e489-4914-99d2-d2b5da9a3e75" containerName="ceilometer-central-agent"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.140822 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fbf342f-e489-4914-99d2-d2b5da9a3e75" containerName="ceilometer-central-agent"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.141047 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fbf342f-e489-4914-99d2-d2b5da9a3e75" containerName="ceilometer-notification-agent"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.141066 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="05dabff6-c489-4c3a-9030-4206f14e27fd" containerName="mariadb-database-create"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.141078 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fbf342f-e489-4914-99d2-d2b5da9a3e75" containerName="sg-core"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.141088 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="b833200c-e96b-4baa-9654-e7a3c07369e5" containerName="mariadb-account-create-update"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.141096 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="a55dfeeb-4219-4e3f-834b-0f4de4381c96" containerName="mariadb-database-create"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.141104 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a1e8a56-47de-4a5e-b4f6-389ebf616658" containerName="mariadb-database-create"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.141115 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="4be719bd-b5d3-4499-9e80-9d8055c1a8df" containerName="mariadb-account-create-update"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.141124 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="160db825-98d0-4663-80b5-1a50e382cfa5" containerName="mariadb-account-create-update"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.141134 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fbf342f-e489-4914-99d2-d2b5da9a3e75" containerName="proxy-httpd"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.141142 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fbf342f-e489-4914-99d2-d2b5da9a3e75" containerName="ceilometer-central-agent"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.142693 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.145450 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.145621 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.146064 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.159471 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.255387 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f31748e-0f10-43cb-b749-614a88630363-log-httpd\") pod \"ceilometer-0\" (UID: \"7f31748e-0f10-43cb-b749-614a88630363\") " pod="openstack/ceilometer-0"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.255479 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f31748e-0f10-43cb-b749-614a88630363-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7f31748e-0f10-43cb-b749-614a88630363\") " pod="openstack/ceilometer-0"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.255514 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f31748e-0f10-43cb-b749-614a88630363-config-data\") pod \"ceilometer-0\" (UID: \"7f31748e-0f10-43cb-b749-614a88630363\") " pod="openstack/ceilometer-0"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.255680 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk5w4\" (UniqueName: \"kubernetes.io/projected/7f31748e-0f10-43cb-b749-614a88630363-kube-api-access-rk5w4\") pod \"ceilometer-0\" (UID: \"7f31748e-0f10-43cb-b749-614a88630363\") " pod="openstack/ceilometer-0"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.255770 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f31748e-0f10-43cb-b749-614a88630363-run-httpd\") pod \"ceilometer-0\" (UID: \"7f31748e-0f10-43cb-b749-614a88630363\") " pod="openstack/ceilometer-0"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.255852 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f31748e-0f10-43cb-b749-614a88630363-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7f31748e-0f10-43cb-b749-614a88630363\") " pod="openstack/ceilometer-0"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.255919 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f31748e-0f10-43cb-b749-614a88630363-scripts\") pod \"ceilometer-0\" (UID: \"7f31748e-0f10-43cb-b749-614a88630363\") " pod="openstack/ceilometer-0"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.255942 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f31748e-0f10-43cb-b749-614a88630363-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7f31748e-0f10-43cb-b749-614a88630363\") " pod="openstack/ceilometer-0"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.357298 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk5w4\" (UniqueName: \"kubernetes.io/projected/7f31748e-0f10-43cb-b749-614a88630363-kube-api-access-rk5w4\") pod \"ceilometer-0\" (UID: \"7f31748e-0f10-43cb-b749-614a88630363\") " pod="openstack/ceilometer-0"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.357358 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f31748e-0f10-43cb-b749-614a88630363-run-httpd\") pod \"ceilometer-0\" (UID: \"7f31748e-0f10-43cb-b749-614a88630363\") " pod="openstack/ceilometer-0"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.357387 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f31748e-0f10-43cb-b749-614a88630363-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7f31748e-0f10-43cb-b749-614a88630363\") " pod="openstack/ceilometer-0"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.357430 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f31748e-0f10-43cb-b749-614a88630363-scripts\") pod \"ceilometer-0\" (UID: \"7f31748e-0f10-43cb-b749-614a88630363\") " pod="openstack/ceilometer-0"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.357445 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f31748e-0f10-43cb-b749-614a88630363-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7f31748e-0f10-43cb-b749-614a88630363\") " pod="openstack/ceilometer-0"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.357482 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f31748e-0f10-43cb-b749-614a88630363-log-httpd\") pod \"ceilometer-0\" (UID: \"7f31748e-0f10-43cb-b749-614a88630363\") " pod="openstack/ceilometer-0"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.357523 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f31748e-0f10-43cb-b749-614a88630363-config-data\") pod \"ceilometer-0\" (UID: \"7f31748e-0f10-43cb-b749-614a88630363\") " pod="openstack/ceilometer-0"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.357538 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f31748e-0f10-43cb-b749-614a88630363-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7f31748e-0f10-43cb-b749-614a88630363\") " pod="openstack/ceilometer-0"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.358004 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f31748e-0f10-43cb-b749-614a88630363-run-httpd\") pod \"ceilometer-0\" (UID: \"7f31748e-0f10-43cb-b749-614a88630363\") " pod="openstack/ceilometer-0"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.358382 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f31748e-0f10-43cb-b749-614a88630363-log-httpd\") pod \"ceilometer-0\" (UID: \"7f31748e-0f10-43cb-b749-614a88630363\") " pod="openstack/ceilometer-0"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.363170 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f31748e-0f10-43cb-b749-614a88630363-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7f31748e-0f10-43cb-b749-614a88630363\") " pod="openstack/ceilometer-0"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.373558 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f31748e-0f10-43cb-b749-614a88630363-scripts\") pod \"ceilometer-0\" (UID: \"7f31748e-0f10-43cb-b749-614a88630363\") " pod="openstack/ceilometer-0"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.373890 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f31748e-0f10-43cb-b749-614a88630363-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7f31748e-0f10-43cb-b749-614a88630363\") " pod="openstack/ceilometer-0"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.374486 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f31748e-0f10-43cb-b749-614a88630363-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7f31748e-0f10-43cb-b749-614a88630363\") " pod="openstack/ceilometer-0"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.375193 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f31748e-0f10-43cb-b749-614a88630363-config-data\") pod \"ceilometer-0\" (UID: \"7f31748e-0f10-43cb-b749-614a88630363\") " pod="openstack/ceilometer-0"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.377037 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk5w4\" (UniqueName: \"kubernetes.io/projected/7f31748e-0f10-43cb-b749-614a88630363-kube-api-access-rk5w4\") pod \"ceilometer-0\" (UID: \"7f31748e-0f10-43cb-b749-614a88630363\") " pod="openstack/ceilometer-0"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.460552 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.849778 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-78r7j"]
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.851320 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-78r7j"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.853662 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-lfwsj"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.853782 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.853706 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.856808 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-78r7j"]
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.939427 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.977314 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f69804c-fec9-4fea-a392-b9a2a54b1155-config-data\") pod \"nova-cell0-conductor-db-sync-78r7j\" (UID: \"3f69804c-fec9-4fea-a392-b9a2a54b1155\") " pod="openstack/nova-cell0-conductor-db-sync-78r7j"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.977902 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsk74\" (UniqueName: \"kubernetes.io/projected/3f69804c-fec9-4fea-a392-b9a2a54b1155-kube-api-access-vsk74\") pod \"nova-cell0-conductor-db-sync-78r7j\" (UID: \"3f69804c-fec9-4fea-a392-b9a2a54b1155\") " pod="openstack/nova-cell0-conductor-db-sync-78r7j"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.977924 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f69804c-fec9-4fea-a392-b9a2a54b1155-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-78r7j\" (UID: \"3f69804c-fec9-4fea-a392-b9a2a54b1155\") " pod="openstack/nova-cell0-conductor-db-sync-78r7j"
Feb 02 21:40:49 crc kubenswrapper[4789]: I0202 21:40:49.977969 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f69804c-fec9-4fea-a392-b9a2a54b1155-scripts\") pod \"nova-cell0-conductor-db-sync-78r7j\" (UID: \"3f69804c-fec9-4fea-a392-b9a2a54b1155\") " pod="openstack/nova-cell0-conductor-db-sync-78r7j"
Feb 02 21:40:50 crc kubenswrapper[4789]: E0202 21:40:50.035343 4789 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44bf258b_7d3e_4f0f_8a92_c71ba94c022e.slice/crio-conmon-8f773f2a9b77d6911f8067507b3caf8237f65b491b720c9baee3adbfa16f899e.scope\": RecentStats: unable to find data in memory cache]"
Feb 02 21:40:50 crc kubenswrapper[4789]: I0202 21:40:50.079471 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsk74\" (UniqueName: \"kubernetes.io/projected/3f69804c-fec9-4fea-a392-b9a2a54b1155-kube-api-access-vsk74\") pod \"nova-cell0-conductor-db-sync-78r7j\" (UID: \"3f69804c-fec9-4fea-a392-b9a2a54b1155\") " pod="openstack/nova-cell0-conductor-db-sync-78r7j"
Feb 02 21:40:50 crc kubenswrapper[4789]: I0202 21:40:50.079517 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f69804c-fec9-4fea-a392-b9a2a54b1155-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-78r7j\" (UID: \"3f69804c-fec9-4fea-a392-b9a2a54b1155\") " pod="openstack/nova-cell0-conductor-db-sync-78r7j"
Feb 02 21:40:50 crc kubenswrapper[4789]: I0202 21:40:50.079618 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f69804c-fec9-4fea-a392-b9a2a54b1155-scripts\") pod \"nova-cell0-conductor-db-sync-78r7j\" (UID: \"3f69804c-fec9-4fea-a392-b9a2a54b1155\") " pod="openstack/nova-cell0-conductor-db-sync-78r7j"
Feb 02 21:40:50 crc kubenswrapper[4789]: I0202 21:40:50.079675 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f69804c-fec9-4fea-a392-b9a2a54b1155-config-data\") pod \"nova-cell0-conductor-db-sync-78r7j\" (UID: \"3f69804c-fec9-4fea-a392-b9a2a54b1155\") " pod="openstack/nova-cell0-conductor-db-sync-78r7j"
Feb 02 21:40:50 crc kubenswrapper[4789]: I0202 21:40:50.085722 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f69804c-fec9-4fea-a392-b9a2a54b1155-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-78r7j\" (UID: \"3f69804c-fec9-4fea-a392-b9a2a54b1155\") " pod="openstack/nova-cell0-conductor-db-sync-78r7j"
Feb 02 21:40:50 crc kubenswrapper[4789]: I0202 21:40:50.086653 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f69804c-fec9-4fea-a392-b9a2a54b1155-config-data\") pod \"nova-cell0-conductor-db-sync-78r7j\" (UID: \"3f69804c-fec9-4fea-a392-b9a2a54b1155\") " pod="openstack/nova-cell0-conductor-db-sync-78r7j"
Feb 02 21:40:50 crc kubenswrapper[4789]: I0202 21:40:50.096085 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f69804c-fec9-4fea-a392-b9a2a54b1155-scripts\") pod \"nova-cell0-conductor-db-sync-78r7j\" (UID: \"3f69804c-fec9-4fea-a392-b9a2a54b1155\") " pod="openstack/nova-cell0-conductor-db-sync-78r7j"
Feb 02 21:40:50 crc kubenswrapper[4789]: I0202 21:40:50.096976 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsk74\" (UniqueName: \"kubernetes.io/projected/3f69804c-fec9-4fea-a392-b9a2a54b1155-kube-api-access-vsk74\") pod \"nova-cell0-conductor-db-sync-78r7j\" (UID: \"3f69804c-fec9-4fea-a392-b9a2a54b1155\") " pod="openstack/nova-cell0-conductor-db-sync-78r7j"
Feb 02 21:40:50 crc kubenswrapper[4789]: I0202 21:40:50.097215 4789 generic.go:334] "Generic (PLEG): container finished" podID="44bf258b-7d3e-4f0f-8a92-c71ba94c022e" containerID="8f773f2a9b77d6911f8067507b3caf8237f65b491b720c9baee3adbfa16f899e" exitCode=0
Feb 02 21:40:50 crc kubenswrapper[4789]: I0202 21:40:50.097289 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f5c98b5b4-lj8fk" event={"ID":"44bf258b-7d3e-4f0f-8a92-c71ba94c022e","Type":"ContainerDied","Data":"8f773f2a9b77d6911f8067507b3caf8237f65b491b720c9baee3adbfa16f899e"}
Feb 02 21:40:50 crc kubenswrapper[4789]: I0202 21:40:50.098638 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f31748e-0f10-43cb-b749-614a88630363","Type":"ContainerStarted","Data":"11d334627ab5a7fdd1b49cba0e3de4554646bb692a84b1f4104916ddcd4f8f17"}
Feb 02 21:40:50 crc kubenswrapper[4789]: I0202 21:40:50.209217 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-78r7j"
Feb 02 21:40:50 crc kubenswrapper[4789]: I0202 21:40:50.323738 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5f5c98b5b4-lj8fk"
Feb 02 21:40:50 crc kubenswrapper[4789]: I0202 21:40:50.438916 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fbf342f-e489-4914-99d2-d2b5da9a3e75" path="/var/lib/kubelet/pods/9fbf342f-e489-4914-99d2-d2b5da9a3e75/volumes"
Feb 02 21:40:50 crc kubenswrapper[4789]: I0202 21:40:50.485940 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/44bf258b-7d3e-4f0f-8a92-c71ba94c022e-ovndb-tls-certs\") pod \"44bf258b-7d3e-4f0f-8a92-c71ba94c022e\" (UID: \"44bf258b-7d3e-4f0f-8a92-c71ba94c022e\") "
Feb 02 21:40:50 crc kubenswrapper[4789]: I0202 21:40:50.486110 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmnbm\" (UniqueName: \"kubernetes.io/projected/44bf258b-7d3e-4f0f-8a92-c71ba94c022e-kube-api-access-lmnbm\") pod \"44bf258b-7d3e-4f0f-8a92-c71ba94c022e\" (UID: \"44bf258b-7d3e-4f0f-8a92-c71ba94c022e\") "
Feb 02 21:40:50 crc kubenswrapper[4789]: I0202 21:40:50.486261 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/44bf258b-7d3e-4f0f-8a92-c71ba94c022e-config\") pod \"44bf258b-7d3e-4f0f-8a92-c71ba94c022e\" (UID: \"44bf258b-7d3e-4f0f-8a92-c71ba94c022e\") "
Feb 02 21:40:50 crc kubenswrapper[4789]: I0202 21:40:50.486311 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44bf258b-7d3e-4f0f-8a92-c71ba94c022e-combined-ca-bundle\") pod \"44bf258b-7d3e-4f0f-8a92-c71ba94c022e\" (UID: \"44bf258b-7d3e-4f0f-8a92-c71ba94c022e\") "
Feb 02 21:40:50 crc kubenswrapper[4789]: I0202 21:40:50.486350 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/44bf258b-7d3e-4f0f-8a92-c71ba94c022e-httpd-config\") pod \"44bf258b-7d3e-4f0f-8a92-c71ba94c022e\" (UID: \"44bf258b-7d3e-4f0f-8a92-c71ba94c022e\") "
Feb 02 21:40:50 crc kubenswrapper[4789]: I0202 21:40:50.490805 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44bf258b-7d3e-4f0f-8a92-c71ba94c022e-kube-api-access-lmnbm" (OuterVolumeSpecName: "kube-api-access-lmnbm") pod "44bf258b-7d3e-4f0f-8a92-c71ba94c022e" (UID: "44bf258b-7d3e-4f0f-8a92-c71ba94c022e"). InnerVolumeSpecName "kube-api-access-lmnbm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 21:40:50 crc kubenswrapper[4789]: I0202 21:40:50.490907 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44bf258b-7d3e-4f0f-8a92-c71ba94c022e-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "44bf258b-7d3e-4f0f-8a92-c71ba94c022e" (UID: "44bf258b-7d3e-4f0f-8a92-c71ba94c022e"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 21:40:50 crc kubenswrapper[4789]: I0202 21:40:50.533221 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44bf258b-7d3e-4f0f-8a92-c71ba94c022e-config" (OuterVolumeSpecName: "config") pod "44bf258b-7d3e-4f0f-8a92-c71ba94c022e" (UID: "44bf258b-7d3e-4f0f-8a92-c71ba94c022e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 21:40:50 crc kubenswrapper[4789]: I0202 21:40:50.560938 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44bf258b-7d3e-4f0f-8a92-c71ba94c022e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "44bf258b-7d3e-4f0f-8a92-c71ba94c022e" (UID: "44bf258b-7d3e-4f0f-8a92-c71ba94c022e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 21:40:50 crc kubenswrapper[4789]: I0202 21:40:50.575662 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44bf258b-7d3e-4f0f-8a92-c71ba94c022e-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "44bf258b-7d3e-4f0f-8a92-c71ba94c022e" (UID: "44bf258b-7d3e-4f0f-8a92-c71ba94c022e"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 21:40:50 crc kubenswrapper[4789]: I0202 21:40:50.592786 4789 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/44bf258b-7d3e-4f0f-8a92-c71ba94c022e-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 21:40:50 crc kubenswrapper[4789]: I0202 21:40:50.592818 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmnbm\" (UniqueName: \"kubernetes.io/projected/44bf258b-7d3e-4f0f-8a92-c71ba94c022e-kube-api-access-lmnbm\") on node \"crc\" DevicePath \"\""
Feb 02 21:40:50 crc kubenswrapper[4789]: I0202 21:40:50.592836 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/44bf258b-7d3e-4f0f-8a92-c71ba94c022e-config\") on node \"crc\" DevicePath \"\""
Feb 02 21:40:50 crc kubenswrapper[4789]: I0202 21:40:50.592853 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44bf258b-7d3e-4f0f-8a92-c71ba94c022e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 21:40:50 crc kubenswrapper[4789]: I0202 21:40:50.592869 4789 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/44bf258b-7d3e-4f0f-8a92-c71ba94c022e-httpd-config\") on node \"crc\" DevicePath \"\""
Feb 02 21:40:50 crc kubenswrapper[4789]: I0202 21:40:50.727878 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-78r7j"]
Feb 02 21:40:50 crc kubenswrapper[4789]: W0202 21:40:50.729287 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f69804c_fec9_4fea_a392_b9a2a54b1155.slice/crio-61ba275867de9c63b78621400743bd4337f08dd4f13a8f37e90cbdc9ea1cac65 WatchSource:0}: Error finding container 61ba275867de9c63b78621400743bd4337f08dd4f13a8f37e90cbdc9ea1cac65: Status 404 returned error can't find the container with id 61ba275867de9c63b78621400743bd4337f08dd4f13a8f37e90cbdc9ea1cac65
Feb 02 21:40:51 crc kubenswrapper[4789]: I0202 21:40:51.111523 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f31748e-0f10-43cb-b749-614a88630363","Type":"ContainerStarted","Data":"dc1e69cf0a4f93b3ba1a7bcd5090eb052955dc82766a22c228feaee3dbcdab81"}
Feb 02 21:40:51 crc kubenswrapper[4789]: I0202 21:40:51.116321 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f5c98b5b4-lj8fk" event={"ID":"44bf258b-7d3e-4f0f-8a92-c71ba94c022e","Type":"ContainerDied","Data":"b9b750bd22ed42942027f3f078336e77d7b0c494168cdf53f62f49381e5d2ac4"}
Feb 02 21:40:51 crc kubenswrapper[4789]: I0202 21:40:51.116371 4789 scope.go:117] "RemoveContainer" containerID="a2d524b577d11f277bdb94e2bccad39399ecec6cda872e4edc1e618b0c1296b3"
Feb 02 21:40:51 crc kubenswrapper[4789]: I0202 21:40:51.116489 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5f5c98b5b4-lj8fk"
Feb 02 21:40:51 crc kubenswrapper[4789]: I0202 21:40:51.126297 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-78r7j" event={"ID":"3f69804c-fec9-4fea-a392-b9a2a54b1155","Type":"ContainerStarted","Data":"61ba275867de9c63b78621400743bd4337f08dd4f13a8f37e90cbdc9ea1cac65"}
Feb 02 21:40:51 crc kubenswrapper[4789]: I0202 21:40:51.192426 4789 scope.go:117] "RemoveContainer" containerID="8f773f2a9b77d6911f8067507b3caf8237f65b491b720c9baee3adbfa16f899e"
Feb 02 21:40:51 crc kubenswrapper[4789]: I0202 21:40:51.225120 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5f5c98b5b4-lj8fk"]
Feb 02 21:40:51 crc kubenswrapper[4789]: I0202 21:40:51.235022 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5f5c98b5b4-lj8fk"]
Feb 02 21:40:51 crc kubenswrapper[4789]: I0202 21:40:51.258628 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 02 21:40:51 crc kubenswrapper[4789]: I0202 21:40:51.258664 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 02 21:40:51 crc kubenswrapper[4789]: I0202 21:40:51.286639 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 02 21:40:51 crc kubenswrapper[4789]: I0202 21:40:51.303203 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 02 21:40:51 crc kubenswrapper[4789]: I0202 21:40:51.410870 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 02 21:40:51 crc kubenswrapper[4789]: I0202 21:40:51.410933 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 02 21:40:51 crc kubenswrapper[4789]: I0202 21:40:51.464415 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 02 21:40:51 crc kubenswrapper[4789]: I0202 21:40:51.471041 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 02 21:40:52 crc kubenswrapper[4789]: I0202 21:40:52.138655 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f31748e-0f10-43cb-b749-614a88630363","Type":"ContainerStarted","Data":"25246328236a45bf9294d1be753d19c5dfa9530408302fa590b6cf30acb63fec"}
Feb 02 21:40:52 crc kubenswrapper[4789]: I0202 21:40:52.138925 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f31748e-0f10-43cb-b749-614a88630363","Type":"ContainerStarted","Data":"7d53377a9060f02a80c56bd43c5173cd6543c7fdf839f05c6a0dc0e23c520c28"}
Feb 02 21:40:52 crc kubenswrapper[4789]: I0202 21:40:52.153434 4789 generic.go:334] "Generic (PLEG): container finished" podID="d3e262bc-73ec-4c9b-adbb-9c20b7a6286b" containerID="45cdfbba5277d68703ad6c24734a170b3eb079fbf6417f6b1a3641194e032e79" exitCode=0
Feb 02 21:40:52 crc kubenswrapper[4789]: I0202 21:40:52.153502 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-59cf4774f6-v75lt" event={"ID":"d3e262bc-73ec-4c9b-adbb-9c20b7a6286b","Type":"ContainerDied","Data":"45cdfbba5277d68703ad6c24734a170b3eb079fbf6417f6b1a3641194e032e79"}
Feb 02 21:40:52 crc kubenswrapper[4789]: I0202 21:40:52.154747 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 02 21:40:52 crc kubenswrapper[4789]: I0202 21:40:52.154768 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 02 21:40:52 crc kubenswrapper[4789]: I0202 21:40:52.154777 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 02 21:40:52 crc kubenswrapper[4789]: I0202 21:40:52.154785 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 02 21:40:52 crc kubenswrapper[4789]: I0202 21:40:52.269927 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-59cf4774f6-v75lt"
Feb 02 21:40:52 crc kubenswrapper[4789]: I0202 21:40:52.434283 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44bf258b-7d3e-4f0f-8a92-c71ba94c022e" path="/var/lib/kubelet/pods/44bf258b-7d3e-4f0f-8a92-c71ba94c022e/volumes"
Feb 02 21:40:52 crc kubenswrapper[4789]: I0202 21:40:52.443811 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-public-tls-certs\") pod \"d3e262bc-73ec-4c9b-adbb-9c20b7a6286b\" (UID: \"d3e262bc-73ec-4c9b-adbb-9c20b7a6286b\") "
Feb 02 21:40:52 crc kubenswrapper[4789]: I0202 21:40:52.443872 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v42tv\" (UniqueName: \"kubernetes.io/projected/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-kube-api-access-v42tv\") pod \"d3e262bc-73ec-4c9b-adbb-9c20b7a6286b\" (UID: \"d3e262bc-73ec-4c9b-adbb-9c20b7a6286b\") "
Feb 02 21:40:52 crc kubenswrapper[4789]: I0202 21:40:52.443902 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-scripts\") pod \"d3e262bc-73ec-4c9b-adbb-9c20b7a6286b\" (UID: \"d3e262bc-73ec-4c9b-adbb-9c20b7a6286b\") "
Feb 02 21:40:52 crc kubenswrapper[4789]: I0202 21:40:52.443922 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-logs\") pod \"d3e262bc-73ec-4c9b-adbb-9c20b7a6286b\" (UID: \"d3e262bc-73ec-4c9b-adbb-9c20b7a6286b\") "
Feb 02 21:40:52 crc kubenswrapper[4789]: I0202 21:40:52.443945 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-internal-tls-certs\") pod \"d3e262bc-73ec-4c9b-adbb-9c20b7a6286b\" (UID: \"d3e262bc-73ec-4c9b-adbb-9c20b7a6286b\") "
Feb 02 21:40:52 crc kubenswrapper[4789]: I0202 21:40:52.443979 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-config-data\") pod \"d3e262bc-73ec-4c9b-adbb-9c20b7a6286b\" (UID: \"d3e262bc-73ec-4c9b-adbb-9c20b7a6286b\") "
Feb 02 21:40:52 crc kubenswrapper[4789]: I0202 21:40:52.444146 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-combined-ca-bundle\") pod \"d3e262bc-73ec-4c9b-adbb-9c20b7a6286b\" (UID: \"d3e262bc-73ec-4c9b-adbb-9c20b7a6286b\") "
Feb 02 21:40:52 crc kubenswrapper[4789]: I0202 21:40:52.445859 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-logs" (OuterVolumeSpecName: "logs") pod "d3e262bc-73ec-4c9b-adbb-9c20b7a6286b" (UID: "d3e262bc-73ec-4c9b-adbb-9c20b7a6286b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 21:40:52 crc kubenswrapper[4789]: I0202 21:40:52.450154 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-kube-api-access-v42tv" (OuterVolumeSpecName: "kube-api-access-v42tv") pod "d3e262bc-73ec-4c9b-adbb-9c20b7a6286b" (UID: "d3e262bc-73ec-4c9b-adbb-9c20b7a6286b"). InnerVolumeSpecName "kube-api-access-v42tv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 21:40:52 crc kubenswrapper[4789]: I0202 21:40:52.468940 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-scripts" (OuterVolumeSpecName: "scripts") pod "d3e262bc-73ec-4c9b-adbb-9c20b7a6286b" (UID: "d3e262bc-73ec-4c9b-adbb-9c20b7a6286b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 21:40:52 crc kubenswrapper[4789]: I0202 21:40:52.513643 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-config-data" (OuterVolumeSpecName: "config-data") pod "d3e262bc-73ec-4c9b-adbb-9c20b7a6286b" (UID: "d3e262bc-73ec-4c9b-adbb-9c20b7a6286b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 21:40:52 crc kubenswrapper[4789]: I0202 21:40:52.525330 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3e262bc-73ec-4c9b-adbb-9c20b7a6286b" (UID: "d3e262bc-73ec-4c9b-adbb-9c20b7a6286b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 21:40:52 crc kubenswrapper[4789]: I0202 21:40:52.546350 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 21:40:52 crc kubenswrapper[4789]: I0202 21:40:52.546385 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v42tv\" (UniqueName: \"kubernetes.io/projected/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-kube-api-access-v42tv\") on node \"crc\" DevicePath \"\""
Feb 02 21:40:52 crc kubenswrapper[4789]: I0202 21:40:52.546399 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 21:40:52 crc kubenswrapper[4789]: I0202 21:40:52.546409 4789 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-logs\") on node \"crc\" DevicePath \"\""
Feb 02 21:40:52 crc kubenswrapper[4789]: I0202 21:40:52.546419 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 21:40:52 crc kubenswrapper[4789]: I0202 21:40:52.567172 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d3e262bc-73ec-4c9b-adbb-9c20b7a6286b" (UID: "d3e262bc-73ec-4c9b-adbb-9c20b7a6286b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 21:40:52 crc kubenswrapper[4789]: I0202 21:40:52.575675 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d3e262bc-73ec-4c9b-adbb-9c20b7a6286b" (UID: "d3e262bc-73ec-4c9b-adbb-9c20b7a6286b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 21:40:52 crc kubenswrapper[4789]: I0202 21:40:52.648753 4789 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 21:40:52 crc kubenswrapper[4789]: I0202 21:40:52.648792 4789 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 21:40:52 crc kubenswrapper[4789]: I0202 21:40:52.841546 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 21:40:52 crc kubenswrapper[4789]: I0202 21:40:52.841625 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 21:40:52 crc kubenswrapper[4789]: I0202 21:40:52.841672 4789 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn"
Feb 02 21:40:52 crc kubenswrapper[4789]: I0202 21:40:52.842400 4789 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"58201de0dc796bafdb3ebb503e9bfcd61c6265506eb41819ac59515674816d43"} pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 21:40:52 crc kubenswrapper[4789]: I0202 21:40:52.842465 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" containerID="cri-o://58201de0dc796bafdb3ebb503e9bfcd61c6265506eb41819ac59515674816d43" gracePeriod=600
Feb 02 21:40:53 crc kubenswrapper[4789]: I0202 21:40:53.164145 4789 generic.go:334] "Generic (PLEG): container finished" podID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerID="58201de0dc796bafdb3ebb503e9bfcd61c6265506eb41819ac59515674816d43" exitCode=0
Feb 02 21:40:53 crc kubenswrapper[4789]: I0202 21:40:53.164217 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" event={"ID":"bdf018b4-1451-4d37-be6e-05802b67c73e","Type":"ContainerDied","Data":"58201de0dc796bafdb3ebb503e9bfcd61c6265506eb41819ac59515674816d43"}
Feb 02 21:40:53 crc kubenswrapper[4789]: I0202 21:40:53.164479 4789 scope.go:117] "RemoveContainer" containerID="731cbec71f64a4bdb77752b4fd336ae74457ae3978707682a716375d9f8b1609"
Feb 02 21:40:53 crc kubenswrapper[4789]: I0202 21:40:53.167928 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-59cf4774f6-v75lt" event={"ID":"d3e262bc-73ec-4c9b-adbb-9c20b7a6286b","Type":"ContainerDied","Data":"8b9760bc60b27ffa55098d8ab2300ec22e9f748839b4a0e941465b24425cad89"}
Feb 02 21:40:53 crc kubenswrapper[4789]: I0202 21:40:53.167939 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-59cf4774f6-v75lt"
Feb 02 21:40:53 crc kubenswrapper[4789]: I0202 21:40:53.203065 4789 scope.go:117] "RemoveContainer" containerID="45cdfbba5277d68703ad6c24734a170b3eb079fbf6417f6b1a3641194e032e79"
Feb 02 21:40:53 crc kubenswrapper[4789]: I0202 21:40:53.206338 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-59cf4774f6-v75lt"]
Feb 02 21:40:53 crc kubenswrapper[4789]: I0202 21:40:53.218750 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-59cf4774f6-v75lt"]
Feb 02 21:40:53 crc kubenswrapper[4789]: I0202 21:40:53.236342 4789 scope.go:117] "RemoveContainer" containerID="b05e234fc983691d25347715e5ae6456b8853c259473be0f1c5126f4b8c3898a"
Feb 02 21:40:54 crc kubenswrapper[4789]: I0202 21:40:54.133391 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 02 21:40:54 crc kubenswrapper[4789]: I0202 21:40:54.212182 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" event={"ID":"bdf018b4-1451-4d37-be6e-05802b67c73e","Type":"ContainerStarted","Data":"6f53ea5f1a80f886dfd6c88f09837b2b4d54c1a0219e9a215978594e6e78e40f"}
Feb 02 21:40:54 crc kubenswrapper[4789]: I0202 21:40:54.216897 4789 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 02 21:40:54 crc kubenswrapper[4789]: I0202 21:40:54.216919 4789 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 02 21:40:54 crc kubenswrapper[4789]: I0202 21:40:54.217824 4789 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 02 21:40:54 crc kubenswrapper[4789]: I0202 21:40:54.234364 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 02 21:40:54 crc kubenswrapper[4789]: I0202 21:40:54.242858 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 02 21:40:54 crc kubenswrapper[4789]: I0202 21:40:54.360759 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 02 21:40:54 crc kubenswrapper[4789]: I0202 21:40:54.436803 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3e262bc-73ec-4c9b-adbb-9c20b7a6286b" path="/var/lib/kubelet/pods/d3e262bc-73ec-4c9b-adbb-9c20b7a6286b/volumes"
Feb 02 21:40:56 crc kubenswrapper[4789]: I0202 21:40:56.851354 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 21:41:00 crc kubenswrapper[4789]: I0202 21:41:00.292566 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f31748e-0f10-43cb-b749-614a88630363","Type":"ContainerStarted","Data":"9393e6b19ae0b4853754bb5dbaf5a35c9bddbcc5303e9ae931975eb94c67bd2c"}
Feb 02 21:41:00 crc kubenswrapper[4789]: I0202 21:41:00.292806 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7f31748e-0f10-43cb-b749-614a88630363" containerName="ceilometer-central-agent" containerID="cri-o://dc1e69cf0a4f93b3ba1a7bcd5090eb052955dc82766a22c228feaee3dbcdab81" gracePeriod=30
Feb 02 21:41:00 crc kubenswrapper[4789]: I0202 21:41:00.293192 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7f31748e-0f10-43cb-b749-614a88630363" containerName="proxy-httpd" containerID="cri-o://9393e6b19ae0b4853754bb5dbaf5a35c9bddbcc5303e9ae931975eb94c67bd2c" gracePeriod=30
Feb 02 21:41:00 crc kubenswrapper[4789]: I0202 21:41:00.293212 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7f31748e-0f10-43cb-b749-614a88630363" containerName="sg-core" containerID="cri-o://25246328236a45bf9294d1be753d19c5dfa9530408302fa590b6cf30acb63fec" gracePeriod=30
Feb 02 21:41:00 crc kubenswrapper[4789]: I0202 21:41:00.293230 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7f31748e-0f10-43cb-b749-614a88630363" containerName="ceilometer-notification-agent" containerID="cri-o://7d53377a9060f02a80c56bd43c5173cd6543c7fdf839f05c6a0dc0e23c520c28" gracePeriod=30
Feb 02 21:41:00 crc kubenswrapper[4789]: I0202 21:41:00.299082 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 02 21:41:00 crc kubenswrapper[4789]: I0202 21:41:00.299131 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-78r7j" event={"ID":"3f69804c-fec9-4fea-a392-b9a2a54b1155","Type":"ContainerStarted","Data":"c0a8ebcdf24c0da82f27897eaa37e69996ea151354df5ea83d450176e048c49d"}
Feb 02 21:41:00 crc kubenswrapper[4789]: I0202 21:41:00.325745 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.124679518 podStartE2EDuration="11.325730248s" podCreationTimestamp="2026-02-02 21:40:49 +0000 UTC" firstStartedPulling="2026-02-02 21:40:49.961953193 +0000 UTC m=+1270.256978212" lastFinishedPulling="2026-02-02 21:40:59.163003913 +0000 UTC m=+1279.458028942" observedRunningTime="2026-02-02 21:41:00.324450342 +0000 UTC m=+1280.619475381" watchObservedRunningTime="2026-02-02 21:41:00.325730248 +0000 UTC m=+1280.620755257"
Feb 02 21:41:00 crc kubenswrapper[4789]: I0202 21:41:00.351202 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-78r7j" podStartSLOduration=2.898983365 podStartE2EDuration="11.351177798s" podCreationTimestamp="2026-02-02 21:40:49 +0000 UTC" firstStartedPulling="2026-02-02 21:40:50.731141825 +0000 UTC m=+1271.026166844" lastFinishedPulling="2026-02-02 21:40:59.183336258 +0000 UTC m=+1279.478361277" observedRunningTime="2026-02-02 21:41:00.347668179 +0000 UTC m=+1280.642693238" watchObservedRunningTime="2026-02-02 21:41:00.351177798 +0000 UTC m=+1280.646202847"
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.233926 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.307904 4789 generic.go:334] "Generic (PLEG): container finished" podID="7f31748e-0f10-43cb-b749-614a88630363" containerID="9393e6b19ae0b4853754bb5dbaf5a35c9bddbcc5303e9ae931975eb94c67bd2c" exitCode=0
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.307945 4789 generic.go:334] "Generic (PLEG): container finished" podID="7f31748e-0f10-43cb-b749-614a88630363" containerID="25246328236a45bf9294d1be753d19c5dfa9530408302fa590b6cf30acb63fec" exitCode=2
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.307955 4789 generic.go:334] "Generic (PLEG): container finished" podID="7f31748e-0f10-43cb-b749-614a88630363" containerID="7d53377a9060f02a80c56bd43c5173cd6543c7fdf839f05c6a0dc0e23c520c28" exitCode=0
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.307962 4789 generic.go:334] "Generic (PLEG): container finished" podID="7f31748e-0f10-43cb-b749-614a88630363" containerID="dc1e69cf0a4f93b3ba1a7bcd5090eb052955dc82766a22c228feaee3dbcdab81" exitCode=0
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.317359 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.317457 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f31748e-0f10-43cb-b749-614a88630363","Type":"ContainerDied","Data":"9393e6b19ae0b4853754bb5dbaf5a35c9bddbcc5303e9ae931975eb94c67bd2c"}
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.317511 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f31748e-0f10-43cb-b749-614a88630363","Type":"ContainerDied","Data":"25246328236a45bf9294d1be753d19c5dfa9530408302fa590b6cf30acb63fec"}
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.317526 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f31748e-0f10-43cb-b749-614a88630363","Type":"ContainerDied","Data":"7d53377a9060f02a80c56bd43c5173cd6543c7fdf839f05c6a0dc0e23c520c28"}
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.317540 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f31748e-0f10-43cb-b749-614a88630363","Type":"ContainerDied","Data":"dc1e69cf0a4f93b3ba1a7bcd5090eb052955dc82766a22c228feaee3dbcdab81"}
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.317551 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f31748e-0f10-43cb-b749-614a88630363","Type":"ContainerDied","Data":"11d334627ab5a7fdd1b49cba0e3de4554646bb692a84b1f4104916ddcd4f8f17"}
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.317570 4789 scope.go:117] "RemoveContainer" containerID="9393e6b19ae0b4853754bb5dbaf5a35c9bddbcc5303e9ae931975eb94c67bd2c"
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.321929 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f31748e-0f10-43cb-b749-614a88630363-ceilometer-tls-certs\") pod \"7f31748e-0f10-43cb-b749-614a88630363\" (UID: \"7f31748e-0f10-43cb-b749-614a88630363\") "
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.322014 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f31748e-0f10-43cb-b749-614a88630363-scripts\") pod \"7f31748e-0f10-43cb-b749-614a88630363\" (UID: \"7f31748e-0f10-43cb-b749-614a88630363\") "
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.322072 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f31748e-0f10-43cb-b749-614a88630363-sg-core-conf-yaml\") pod \"7f31748e-0f10-43cb-b749-614a88630363\" (UID: \"7f31748e-0f10-43cb-b749-614a88630363\") "
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.322199 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rk5w4\" (UniqueName: \"kubernetes.io/projected/7f31748e-0f10-43cb-b749-614a88630363-kube-api-access-rk5w4\") pod \"7f31748e-0f10-43cb-b749-614a88630363\" (UID: \"7f31748e-0f10-43cb-b749-614a88630363\") "
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.322216 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f31748e-0f10-43cb-b749-614a88630363-combined-ca-bundle\") pod \"7f31748e-0f10-43cb-b749-614a88630363\" (UID: \"7f31748e-0f10-43cb-b749-614a88630363\") "
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.322250 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f31748e-0f10-43cb-b749-614a88630363-config-data\") pod \"7f31748e-0f10-43cb-b749-614a88630363\" (UID: \"7f31748e-0f10-43cb-b749-614a88630363\") "
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.322287 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f31748e-0f10-43cb-b749-614a88630363-log-httpd\") pod \"7f31748e-0f10-43cb-b749-614a88630363\" (UID: \"7f31748e-0f10-43cb-b749-614a88630363\") "
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.322330 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f31748e-0f10-43cb-b749-614a88630363-run-httpd\") pod \"7f31748e-0f10-43cb-b749-614a88630363\" (UID: \"7f31748e-0f10-43cb-b749-614a88630363\") "
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.323489 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f31748e-0f10-43cb-b749-614a88630363-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7f31748e-0f10-43cb-b749-614a88630363" (UID: "7f31748e-0f10-43cb-b749-614a88630363"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.323786 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f31748e-0f10-43cb-b749-614a88630363-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7f31748e-0f10-43cb-b749-614a88630363" (UID: "7f31748e-0f10-43cb-b749-614a88630363"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.323923 4789 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f31748e-0f10-43cb-b749-614a88630363-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.324227 4789 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f31748e-0f10-43cb-b749-614a88630363-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.331545 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f31748e-0f10-43cb-b749-614a88630363-scripts" (OuterVolumeSpecName: "scripts") pod "7f31748e-0f10-43cb-b749-614a88630363" (UID: "7f31748e-0f10-43cb-b749-614a88630363"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.331666 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f31748e-0f10-43cb-b749-614a88630363-kube-api-access-rk5w4" (OuterVolumeSpecName: "kube-api-access-rk5w4") pod "7f31748e-0f10-43cb-b749-614a88630363" (UID: "7f31748e-0f10-43cb-b749-614a88630363"). InnerVolumeSpecName "kube-api-access-rk5w4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.348293 4789 scope.go:117] "RemoveContainer" containerID="25246328236a45bf9294d1be753d19c5dfa9530408302fa590b6cf30acb63fec"
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.367976 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f31748e-0f10-43cb-b749-614a88630363-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7f31748e-0f10-43cb-b749-614a88630363" (UID: "7f31748e-0f10-43cb-b749-614a88630363"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.378913 4789 scope.go:117] "RemoveContainer" containerID="7d53377a9060f02a80c56bd43c5173cd6543c7fdf839f05c6a0dc0e23c520c28"
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.392025 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f31748e-0f10-43cb-b749-614a88630363-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "7f31748e-0f10-43cb-b749-614a88630363" (UID: "7f31748e-0f10-43cb-b749-614a88630363"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.400654 4789 scope.go:117] "RemoveContainer" containerID="dc1e69cf0a4f93b3ba1a7bcd5090eb052955dc82766a22c228feaee3dbcdab81"
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.408013 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f31748e-0f10-43cb-b749-614a88630363-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f31748e-0f10-43cb-b749-614a88630363" (UID: "7f31748e-0f10-43cb-b749-614a88630363"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.426333 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rk5w4\" (UniqueName: \"kubernetes.io/projected/7f31748e-0f10-43cb-b749-614a88630363-kube-api-access-rk5w4\") on node \"crc\" DevicePath \"\""
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.426359 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f31748e-0f10-43cb-b749-614a88630363-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.426368 4789 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f31748e-0f10-43cb-b749-614a88630363-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.426377 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f31748e-0f10-43cb-b749-614a88630363-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.426396 4789 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f31748e-0f10-43cb-b749-614a88630363-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.435481 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f31748e-0f10-43cb-b749-614a88630363-config-data" (OuterVolumeSpecName: "config-data") pod "7f31748e-0f10-43cb-b749-614a88630363" (UID: "7f31748e-0f10-43cb-b749-614a88630363"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.527727 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f31748e-0f10-43cb-b749-614a88630363-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.534823 4789 scope.go:117] "RemoveContainer" containerID="9393e6b19ae0b4853754bb5dbaf5a35c9bddbcc5303e9ae931975eb94c67bd2c"
Feb 02 21:41:01 crc kubenswrapper[4789]: E0202 21:41:01.535586 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9393e6b19ae0b4853754bb5dbaf5a35c9bddbcc5303e9ae931975eb94c67bd2c\": container with ID starting with 9393e6b19ae0b4853754bb5dbaf5a35c9bddbcc5303e9ae931975eb94c67bd2c not found: ID does not exist" containerID="9393e6b19ae0b4853754bb5dbaf5a35c9bddbcc5303e9ae931975eb94c67bd2c"
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.535620 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9393e6b19ae0b4853754bb5dbaf5a35c9bddbcc5303e9ae931975eb94c67bd2c"} err="failed to get container status \"9393e6b19ae0b4853754bb5dbaf5a35c9bddbcc5303e9ae931975eb94c67bd2c\": rpc error: code = NotFound desc = could not find container \"9393e6b19ae0b4853754bb5dbaf5a35c9bddbcc5303e9ae931975eb94c67bd2c\": container with ID starting with 9393e6b19ae0b4853754bb5dbaf5a35c9bddbcc5303e9ae931975eb94c67bd2c not found: ID does not exist"
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.535648 4789 scope.go:117] "RemoveContainer" containerID="25246328236a45bf9294d1be753d19c5dfa9530408302fa590b6cf30acb63fec"
Feb 02 21:41:01 crc kubenswrapper[4789]: E0202 21:41:01.535977 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25246328236a45bf9294d1be753d19c5dfa9530408302fa590b6cf30acb63fec\": container with ID starting with 25246328236a45bf9294d1be753d19c5dfa9530408302fa590b6cf30acb63fec not found: ID does not exist" containerID="25246328236a45bf9294d1be753d19c5dfa9530408302fa590b6cf30acb63fec"
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.536000 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25246328236a45bf9294d1be753d19c5dfa9530408302fa590b6cf30acb63fec"} err="failed to get container status \"25246328236a45bf9294d1be753d19c5dfa9530408302fa590b6cf30acb63fec\": rpc error: code = NotFound desc = could not find container \"25246328236a45bf9294d1be753d19c5dfa9530408302fa590b6cf30acb63fec\": container with ID starting with 25246328236a45bf9294d1be753d19c5dfa9530408302fa590b6cf30acb63fec not found: ID does not exist"
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.536014 4789 scope.go:117] "RemoveContainer" containerID="7d53377a9060f02a80c56bd43c5173cd6543c7fdf839f05c6a0dc0e23c520c28"
Feb 02 21:41:01 crc kubenswrapper[4789]: E0202 21:41:01.536340 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d53377a9060f02a80c56bd43c5173cd6543c7fdf839f05c6a0dc0e23c520c28\": container with ID starting with 7d53377a9060f02a80c56bd43c5173cd6543c7fdf839f05c6a0dc0e23c520c28 not found: ID does not exist" containerID="7d53377a9060f02a80c56bd43c5173cd6543c7fdf839f05c6a0dc0e23c520c28"
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.536378 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d53377a9060f02a80c56bd43c5173cd6543c7fdf839f05c6a0dc0e23c520c28"} err="failed to get container status \"7d53377a9060f02a80c56bd43c5173cd6543c7fdf839f05c6a0dc0e23c520c28\": rpc error: code = NotFound desc = could not find container \"7d53377a9060f02a80c56bd43c5173cd6543c7fdf839f05c6a0dc0e23c520c28\": container with ID starting with 7d53377a9060f02a80c56bd43c5173cd6543c7fdf839f05c6a0dc0e23c520c28 not found: ID does not exist"
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.536418 4789 scope.go:117] "RemoveContainer" containerID="dc1e69cf0a4f93b3ba1a7bcd5090eb052955dc82766a22c228feaee3dbcdab81"
Feb 02 21:41:01 crc kubenswrapper[4789]: E0202 21:41:01.536761 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc1e69cf0a4f93b3ba1a7bcd5090eb052955dc82766a22c228feaee3dbcdab81\": container with ID starting with dc1e69cf0a4f93b3ba1a7bcd5090eb052955dc82766a22c228feaee3dbcdab81 not found: ID does not exist" containerID="dc1e69cf0a4f93b3ba1a7bcd5090eb052955dc82766a22c228feaee3dbcdab81"
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.536784 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc1e69cf0a4f93b3ba1a7bcd5090eb052955dc82766a22c228feaee3dbcdab81"} err="failed to get container status \"dc1e69cf0a4f93b3ba1a7bcd5090eb052955dc82766a22c228feaee3dbcdab81\": rpc error: code = NotFound desc = could not find container \"dc1e69cf0a4f93b3ba1a7bcd5090eb052955dc82766a22c228feaee3dbcdab81\": container with ID starting with dc1e69cf0a4f93b3ba1a7bcd5090eb052955dc82766a22c228feaee3dbcdab81 not found: ID does not exist"
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.536799 4789 scope.go:117] "RemoveContainer" containerID="9393e6b19ae0b4853754bb5dbaf5a35c9bddbcc5303e9ae931975eb94c67bd2c"
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.537037 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9393e6b19ae0b4853754bb5dbaf5a35c9bddbcc5303e9ae931975eb94c67bd2c"} err="failed to get container status \"9393e6b19ae0b4853754bb5dbaf5a35c9bddbcc5303e9ae931975eb94c67bd2c\": rpc error: code = NotFound desc = could not find container \"9393e6b19ae0b4853754bb5dbaf5a35c9bddbcc5303e9ae931975eb94c67bd2c\": container with ID starting with 9393e6b19ae0b4853754bb5dbaf5a35c9bddbcc5303e9ae931975eb94c67bd2c not found: ID does not exist"
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.537059 4789 scope.go:117] "RemoveContainer" containerID="25246328236a45bf9294d1be753d19c5dfa9530408302fa590b6cf30acb63fec"
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.537343 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25246328236a45bf9294d1be753d19c5dfa9530408302fa590b6cf30acb63fec"} err="failed to get container status \"25246328236a45bf9294d1be753d19c5dfa9530408302fa590b6cf30acb63fec\": rpc error: code = NotFound desc = could not find container \"25246328236a45bf9294d1be753d19c5dfa9530408302fa590b6cf30acb63fec\": container with ID starting with 25246328236a45bf9294d1be753d19c5dfa9530408302fa590b6cf30acb63fec not found: ID does not exist"
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.537361 4789 scope.go:117] "RemoveContainer" containerID="7d53377a9060f02a80c56bd43c5173cd6543c7fdf839f05c6a0dc0e23c520c28"
Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.537702 4789 pod_container_deletor.go:53] "DeleteContainer returned
error" containerID={"Type":"cri-o","ID":"7d53377a9060f02a80c56bd43c5173cd6543c7fdf839f05c6a0dc0e23c520c28"} err="failed to get container status \"7d53377a9060f02a80c56bd43c5173cd6543c7fdf839f05c6a0dc0e23c520c28\": rpc error: code = NotFound desc = could not find container \"7d53377a9060f02a80c56bd43c5173cd6543c7fdf839f05c6a0dc0e23c520c28\": container with ID starting with 7d53377a9060f02a80c56bd43c5173cd6543c7fdf839f05c6a0dc0e23c520c28 not found: ID does not exist" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.537725 4789 scope.go:117] "RemoveContainer" containerID="dc1e69cf0a4f93b3ba1a7bcd5090eb052955dc82766a22c228feaee3dbcdab81" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.538133 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc1e69cf0a4f93b3ba1a7bcd5090eb052955dc82766a22c228feaee3dbcdab81"} err="failed to get container status \"dc1e69cf0a4f93b3ba1a7bcd5090eb052955dc82766a22c228feaee3dbcdab81\": rpc error: code = NotFound desc = could not find container \"dc1e69cf0a4f93b3ba1a7bcd5090eb052955dc82766a22c228feaee3dbcdab81\": container with ID starting with dc1e69cf0a4f93b3ba1a7bcd5090eb052955dc82766a22c228feaee3dbcdab81 not found: ID does not exist" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.538179 4789 scope.go:117] "RemoveContainer" containerID="9393e6b19ae0b4853754bb5dbaf5a35c9bddbcc5303e9ae931975eb94c67bd2c" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.538453 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9393e6b19ae0b4853754bb5dbaf5a35c9bddbcc5303e9ae931975eb94c67bd2c"} err="failed to get container status \"9393e6b19ae0b4853754bb5dbaf5a35c9bddbcc5303e9ae931975eb94c67bd2c\": rpc error: code = NotFound desc = could not find container \"9393e6b19ae0b4853754bb5dbaf5a35c9bddbcc5303e9ae931975eb94c67bd2c\": container with ID starting with 9393e6b19ae0b4853754bb5dbaf5a35c9bddbcc5303e9ae931975eb94c67bd2c not found: ID does not exist" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.538492 4789 scope.go:117] "RemoveContainer" containerID="25246328236a45bf9294d1be753d19c5dfa9530408302fa590b6cf30acb63fec" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.538733 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25246328236a45bf9294d1be753d19c5dfa9530408302fa590b6cf30acb63fec"} err="failed to get container status \"25246328236a45bf9294d1be753d19c5dfa9530408302fa590b6cf30acb63fec\": rpc error: code = NotFound desc = could not find container \"25246328236a45bf9294d1be753d19c5dfa9530408302fa590b6cf30acb63fec\": container with ID starting with 25246328236a45bf9294d1be753d19c5dfa9530408302fa590b6cf30acb63fec not found: ID does not exist" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.538751 4789 scope.go:117] "RemoveContainer" containerID="7d53377a9060f02a80c56bd43c5173cd6543c7fdf839f05c6a0dc0e23c520c28" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.538971 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d53377a9060f02a80c56bd43c5173cd6543c7fdf839f05c6a0dc0e23c520c28"} err="failed to get container status \"7d53377a9060f02a80c56bd43c5173cd6543c7fdf839f05c6a0dc0e23c520c28\": rpc error: code = NotFound desc = could not find container \"7d53377a9060f02a80c56bd43c5173cd6543c7fdf839f05c6a0dc0e23c520c28\": container with ID starting with 7d53377a9060f02a80c56bd43c5173cd6543c7fdf839f05c6a0dc0e23c520c28 not found: ID does not 
exist" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.539012 4789 scope.go:117] "RemoveContainer" containerID="dc1e69cf0a4f93b3ba1a7bcd5090eb052955dc82766a22c228feaee3dbcdab81" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.539258 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc1e69cf0a4f93b3ba1a7bcd5090eb052955dc82766a22c228feaee3dbcdab81"} err="failed to get container status \"dc1e69cf0a4f93b3ba1a7bcd5090eb052955dc82766a22c228feaee3dbcdab81\": rpc error: code = NotFound desc = could not find container \"dc1e69cf0a4f93b3ba1a7bcd5090eb052955dc82766a22c228feaee3dbcdab81\": container with ID starting with dc1e69cf0a4f93b3ba1a7bcd5090eb052955dc82766a22c228feaee3dbcdab81 not found: ID does not exist" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.539274 4789 scope.go:117] "RemoveContainer" containerID="9393e6b19ae0b4853754bb5dbaf5a35c9bddbcc5303e9ae931975eb94c67bd2c" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.539767 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9393e6b19ae0b4853754bb5dbaf5a35c9bddbcc5303e9ae931975eb94c67bd2c"} err="failed to get container status \"9393e6b19ae0b4853754bb5dbaf5a35c9bddbcc5303e9ae931975eb94c67bd2c\": rpc error: code = NotFound desc = could not find container \"9393e6b19ae0b4853754bb5dbaf5a35c9bddbcc5303e9ae931975eb94c67bd2c\": container with ID starting with 9393e6b19ae0b4853754bb5dbaf5a35c9bddbcc5303e9ae931975eb94c67bd2c not found: ID does not exist" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.539813 4789 scope.go:117] "RemoveContainer" containerID="25246328236a45bf9294d1be753d19c5dfa9530408302fa590b6cf30acb63fec" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.540084 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25246328236a45bf9294d1be753d19c5dfa9530408302fa590b6cf30acb63fec"} err="failed to get container status \"25246328236a45bf9294d1be753d19c5dfa9530408302fa590b6cf30acb63fec\": rpc error: code = NotFound desc = could not find container \"25246328236a45bf9294d1be753d19c5dfa9530408302fa590b6cf30acb63fec\": container with ID starting with 25246328236a45bf9294d1be753d19c5dfa9530408302fa590b6cf30acb63fec not found: ID does not exist" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.540132 4789 scope.go:117] "RemoveContainer" containerID="7d53377a9060f02a80c56bd43c5173cd6543c7fdf839f05c6a0dc0e23c520c28" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.540802 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d53377a9060f02a80c56bd43c5173cd6543c7fdf839f05c6a0dc0e23c520c28"} err="failed to get container status \"7d53377a9060f02a80c56bd43c5173cd6543c7fdf839f05c6a0dc0e23c520c28\": rpc error: code = NotFound desc = could not find container \"7d53377a9060f02a80c56bd43c5173cd6543c7fdf839f05c6a0dc0e23c520c28\": container with ID starting with 7d53377a9060f02a80c56bd43c5173cd6543c7fdf839f05c6a0dc0e23c520c28 not found: ID does not exist" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.540845 4789 scope.go:117] "RemoveContainer" containerID="dc1e69cf0a4f93b3ba1a7bcd5090eb052955dc82766a22c228feaee3dbcdab81" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.541103 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc1e69cf0a4f93b3ba1a7bcd5090eb052955dc82766a22c228feaee3dbcdab81"} err="failed to get container status 
\"dc1e69cf0a4f93b3ba1a7bcd5090eb052955dc82766a22c228feaee3dbcdab81\": rpc error: code = NotFound desc = could not find container \"dc1e69cf0a4f93b3ba1a7bcd5090eb052955dc82766a22c228feaee3dbcdab81\": container with ID starting with dc1e69cf0a4f93b3ba1a7bcd5090eb052955dc82766a22c228feaee3dbcdab81 not found: ID does not exist" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.660223 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.666777 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.681509 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 21:41:01 crc kubenswrapper[4789]: E0202 21:41:01.682200 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44bf258b-7d3e-4f0f-8a92-c71ba94c022e" containerName="neutron-httpd" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.682220 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="44bf258b-7d3e-4f0f-8a92-c71ba94c022e" containerName="neutron-httpd" Feb 02 21:41:01 crc kubenswrapper[4789]: E0202 21:41:01.682235 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3e262bc-73ec-4c9b-adbb-9c20b7a6286b" containerName="placement-api" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.682241 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3e262bc-73ec-4c9b-adbb-9c20b7a6286b" containerName="placement-api" Feb 02 21:41:01 crc kubenswrapper[4789]: E0202 21:41:01.682257 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f31748e-0f10-43cb-b749-614a88630363" containerName="ceilometer-central-agent" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.682272 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f31748e-0f10-43cb-b749-614a88630363" containerName="ceilometer-central-agent" Feb 02 21:41:01 crc kubenswrapper[4789]: E0202 21:41:01.682283 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44bf258b-7d3e-4f0f-8a92-c71ba94c022e" containerName="neutron-api" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.682289 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="44bf258b-7d3e-4f0f-8a92-c71ba94c022e" containerName="neutron-api" Feb 02 21:41:01 crc kubenswrapper[4789]: E0202 21:41:01.682298 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f31748e-0f10-43cb-b749-614a88630363" containerName="ceilometer-notification-agent" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.682303 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f31748e-0f10-43cb-b749-614a88630363" containerName="ceilometer-notification-agent" Feb 02 21:41:01 crc kubenswrapper[4789]: E0202 21:41:01.682315 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f31748e-0f10-43cb-b749-614a88630363" containerName="proxy-httpd" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.682321 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f31748e-0f10-43cb-b749-614a88630363" containerName="proxy-httpd" Feb 02 21:41:01 crc kubenswrapper[4789]: E0202 21:41:01.682332 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3e262bc-73ec-4c9b-adbb-9c20b7a6286b" containerName="placement-log" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.682338 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3e262bc-73ec-4c9b-adbb-9c20b7a6286b" 
containerName="placement-log" Feb 02 21:41:01 crc kubenswrapper[4789]: E0202 21:41:01.682350 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f31748e-0f10-43cb-b749-614a88630363" containerName="sg-core" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.682357 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f31748e-0f10-43cb-b749-614a88630363" containerName="sg-core" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.682518 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f31748e-0f10-43cb-b749-614a88630363" containerName="sg-core" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.682533 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3e262bc-73ec-4c9b-adbb-9c20b7a6286b" containerName="placement-log" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.682547 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3e262bc-73ec-4c9b-adbb-9c20b7a6286b" containerName="placement-api" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.682555 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="44bf258b-7d3e-4f0f-8a92-c71ba94c022e" containerName="neutron-api" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.682566 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f31748e-0f10-43cb-b749-614a88630363" containerName="ceilometer-central-agent" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.682596 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f31748e-0f10-43cb-b749-614a88630363" containerName="ceilometer-notification-agent" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.682608 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f31748e-0f10-43cb-b749-614a88630363" containerName="proxy-httpd" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.682619 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="44bf258b-7d3e-4f0f-8a92-c71ba94c022e" containerName="neutron-httpd" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.684170 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.686216 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.687920 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.693433 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.705421 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.732013 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-log-httpd\") pod \"ceilometer-0\" (UID: \"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a\") " pod="openstack/ceilometer-0" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.732080 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a\") " pod="openstack/ceilometer-0" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.732147 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-config-data\") pod \"ceilometer-0\" (UID: \"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a\") " pod="openstack/ceilometer-0" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.732175 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-run-httpd\") pod \"ceilometer-0\" (UID: \"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a\") " pod="openstack/ceilometer-0" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.732405 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-scripts\") pod \"ceilometer-0\" (UID: \"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a\") " pod="openstack/ceilometer-0" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.732460 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a\") " pod="openstack/ceilometer-0" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.732507 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a\") " pod="openstack/ceilometer-0" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.732555 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr7h9\" (UniqueName: 
\"kubernetes.io/projected/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-kube-api-access-tr7h9\") pod \"ceilometer-0\" (UID: \"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a\") " pod="openstack/ceilometer-0" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.833879 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a\") " pod="openstack/ceilometer-0" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.833959 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a\") " pod="openstack/ceilometer-0" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.834011 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr7h9\" (UniqueName: \"kubernetes.io/projected/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-kube-api-access-tr7h9\") pod \"ceilometer-0\" (UID: \"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a\") " pod="openstack/ceilometer-0" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.834058 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-log-httpd\") pod \"ceilometer-0\" (UID: \"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a\") " pod="openstack/ceilometer-0" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.834092 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a\") " pod="openstack/ceilometer-0" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.834526 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-log-httpd\") pod \"ceilometer-0\" (UID: \"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a\") " pod="openstack/ceilometer-0" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.834144 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-config-data\") pod \"ceilometer-0\" (UID: \"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a\") " pod="openstack/ceilometer-0" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.834982 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-run-httpd\") pod \"ceilometer-0\" (UID: \"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a\") " pod="openstack/ceilometer-0" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.835401 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-scripts\") pod \"ceilometer-0\" (UID: \"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a\") " pod="openstack/ceilometer-0" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.835709 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-run-httpd\") pod \"ceilometer-0\" (UID: \"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a\") " pod="openstack/ceilometer-0" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.839071 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a\") " pod="openstack/ceilometer-0" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.839482 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-config-data\") pod \"ceilometer-0\" (UID: \"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a\") " pod="openstack/ceilometer-0" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.839966 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a\") " pod="openstack/ceilometer-0" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.840078 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-scripts\") pod \"ceilometer-0\" (UID: \"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a\") " pod="openstack/ceilometer-0" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.852084 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr7h9\" (UniqueName: \"kubernetes.io/projected/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-kube-api-access-tr7h9\") pod \"ceilometer-0\" (UID: \"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a\") " pod="openstack/ceilometer-0" Feb 02 21:41:01 crc kubenswrapper[4789]: I0202 21:41:01.854883 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a\") " pod="openstack/ceilometer-0" Feb 02 21:41:02 crc kubenswrapper[4789]: I0202 21:41:02.035338 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 21:41:02 crc kubenswrapper[4789]: I0202 21:41:02.434303 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f31748e-0f10-43cb-b749-614a88630363" path="/var/lib/kubelet/pods/7f31748e-0f10-43cb-b749-614a88630363/volumes" Feb 02 21:41:02 crc kubenswrapper[4789]: I0202 21:41:02.551331 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 21:41:02 crc kubenswrapper[4789]: W0202 21:41:02.553774 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f4adac0_6f11_4e4b_89a1_702f2ae3bd5a.slice/crio-403e4ec6b4773db4734445fa63c63e932307854781f8dde1d06b52337ccd9b55 WatchSource:0}: Error finding container 403e4ec6b4773db4734445fa63c63e932307854781f8dde1d06b52337ccd9b55: Status 404 returned error can't find the container with id 403e4ec6b4773db4734445fa63c63e932307854781f8dde1d06b52337ccd9b55 Feb 02 21:41:03 crc kubenswrapper[4789]: I0202 21:41:03.330642 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a","Type":"ContainerStarted","Data":"b424e43858ad870d584f7acc035252d5ea4362b1c11557f71598ce628935fffe"} Feb 02 21:41:03 crc kubenswrapper[4789]: I0202 21:41:03.331120 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a","Type":"ContainerStarted","Data":"403e4ec6b4773db4734445fa63c63e932307854781f8dde1d06b52337ccd9b55"} Feb 02 21:41:04 crc kubenswrapper[4789]: I0202 21:41:04.345329 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a","Type":"ContainerStarted","Data":"fa505511f5bdb1f9454c7fb0ffd8dc6284220c7d716d063dd61660e06e6929a4"} Feb 02 21:41:05 crc kubenswrapper[4789]: I0202 21:41:05.361228 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a","Type":"ContainerStarted","Data":"d40a4908b50914e45821732601107d538a620d14b9c9a51407085b5a97b7c614"} Feb 02 21:41:05 crc kubenswrapper[4789]: I0202 21:41:05.758208 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 21:41:07 crc kubenswrapper[4789]: I0202 21:41:07.386601 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a","Type":"ContainerStarted","Data":"4c7189c4be6e0d2b75e7827d5f5c5ee84da6eaea4bbc1aa79e5145b1d90aab0b"} Feb 02 21:41:07 crc kubenswrapper[4789]: I0202 21:41:07.387300 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 21:41:07 crc kubenswrapper[4789]: I0202 21:41:07.386835 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a" containerName="proxy-httpd" containerID="cri-o://4c7189c4be6e0d2b75e7827d5f5c5ee84da6eaea4bbc1aa79e5145b1d90aab0b" gracePeriod=30 Feb 02 21:41:07 crc kubenswrapper[4789]: I0202 21:41:07.386791 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a" containerName="ceilometer-central-agent" containerID="cri-o://b424e43858ad870d584f7acc035252d5ea4362b1c11557f71598ce628935fffe" gracePeriod=30 Feb 02 21:41:07 crc kubenswrapper[4789]: I0202 
21:41:07.386862 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a" containerName="sg-core" containerID="cri-o://d40a4908b50914e45821732601107d538a620d14b9c9a51407085b5a97b7c614" gracePeriod=30 Feb 02 21:41:07 crc kubenswrapper[4789]: I0202 21:41:07.386866 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a" containerName="ceilometer-notification-agent" containerID="cri-o://fa505511f5bdb1f9454c7fb0ffd8dc6284220c7d716d063dd61660e06e6929a4" gracePeriod=30 Feb 02 21:41:07 crc kubenswrapper[4789]: I0202 21:41:07.415088 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.864796795 podStartE2EDuration="6.415070053s" podCreationTimestamp="2026-02-02 21:41:01 +0000 UTC" firstStartedPulling="2026-02-02 21:41:02.559701603 +0000 UTC m=+1282.854726662" lastFinishedPulling="2026-02-02 21:41:07.109974891 +0000 UTC m=+1287.404999920" observedRunningTime="2026-02-02 21:41:07.409919707 +0000 UTC m=+1287.704944746" watchObservedRunningTime="2026-02-02 21:41:07.415070053 +0000 UTC m=+1287.710095082" Feb 02 21:41:08 crc kubenswrapper[4789]: I0202 21:41:08.406445 4789 generic.go:334] "Generic (PLEG): container finished" podID="1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a" containerID="d40a4908b50914e45821732601107d538a620d14b9c9a51407085b5a97b7c614" exitCode=2 Feb 02 21:41:08 crc kubenswrapper[4789]: I0202 21:41:08.406962 4789 generic.go:334] "Generic (PLEG): container finished" podID="1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a" containerID="fa505511f5bdb1f9454c7fb0ffd8dc6284220c7d716d063dd61660e06e6929a4" exitCode=0 Feb 02 21:41:08 crc kubenswrapper[4789]: I0202 21:41:08.406570 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a","Type":"ContainerDied","Data":"d40a4908b50914e45821732601107d538a620d14b9c9a51407085b5a97b7c614"} Feb 02 21:41:08 crc kubenswrapper[4789]: I0202 21:41:08.407031 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a","Type":"ContainerDied","Data":"fa505511f5bdb1f9454c7fb0ffd8dc6284220c7d716d063dd61660e06e6929a4"} Feb 02 21:41:09 crc kubenswrapper[4789]: I0202 21:41:09.417108 4789 generic.go:334] "Generic (PLEG): container finished" podID="3f69804c-fec9-4fea-a392-b9a2a54b1155" containerID="c0a8ebcdf24c0da82f27897eaa37e69996ea151354df5ea83d450176e048c49d" exitCode=0 Feb 02 21:41:09 crc kubenswrapper[4789]: I0202 21:41:09.417149 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-78r7j" event={"ID":"3f69804c-fec9-4fea-a392-b9a2a54b1155","Type":"ContainerDied","Data":"c0a8ebcdf24c0da82f27897eaa37e69996ea151354df5ea83d450176e048c49d"} Feb 02 21:41:10 crc kubenswrapper[4789]: I0202 21:41:10.848920 4789 util.go:48] "No ready sandbox for pod can be found. 
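The pod_startup_latency_tracker entry above carries a small identity worth spelling out: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same interval with the image-pull window excluded, taken from the monotonic readings (the m=+... values). The logged numbers reconcile exactly:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Values from the 21:41:07.415 tracker entry for openstack/ceilometer-0.
        e2e := 6415070053 * time.Nanosecond          // podStartE2EDuration = 6.415070053s
        firstPull := 1282854726662 * time.Nanosecond // firstStartedPulling, m=+1282.854726662
        lastPull := 1287404999920 * time.Nanosecond  // lastFinishedPulling, m=+1287.404999920

        pull := lastPull - firstPull // 4.550273258s spent pulling images
        fmt.Println(e2e - pull)      // 1.864796795s == the logged podStartSLOduration
    }

The nova-cell0-conductor-0 entry at 21:41:13.515 further down is the degenerate case: both pull timestamps are the zero time (0001-01-01), no image was pulled, so podStartSLOduration equals podStartE2EDuration at 2.515496119s.
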
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-78r7j" Feb 02 21:41:10 crc kubenswrapper[4789]: I0202 21:41:10.903341 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f69804c-fec9-4fea-a392-b9a2a54b1155-scripts\") pod \"3f69804c-fec9-4fea-a392-b9a2a54b1155\" (UID: \"3f69804c-fec9-4fea-a392-b9a2a54b1155\") " Feb 02 21:41:10 crc kubenswrapper[4789]: I0202 21:41:10.903415 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f69804c-fec9-4fea-a392-b9a2a54b1155-config-data\") pod \"3f69804c-fec9-4fea-a392-b9a2a54b1155\" (UID: \"3f69804c-fec9-4fea-a392-b9a2a54b1155\") " Feb 02 21:41:10 crc kubenswrapper[4789]: I0202 21:41:10.903471 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsk74\" (UniqueName: \"kubernetes.io/projected/3f69804c-fec9-4fea-a392-b9a2a54b1155-kube-api-access-vsk74\") pod \"3f69804c-fec9-4fea-a392-b9a2a54b1155\" (UID: \"3f69804c-fec9-4fea-a392-b9a2a54b1155\") " Feb 02 21:41:10 crc kubenswrapper[4789]: I0202 21:41:10.903506 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f69804c-fec9-4fea-a392-b9a2a54b1155-combined-ca-bundle\") pod \"3f69804c-fec9-4fea-a392-b9a2a54b1155\" (UID: \"3f69804c-fec9-4fea-a392-b9a2a54b1155\") " Feb 02 21:41:10 crc kubenswrapper[4789]: I0202 21:41:10.909538 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f69804c-fec9-4fea-a392-b9a2a54b1155-kube-api-access-vsk74" (OuterVolumeSpecName: "kube-api-access-vsk74") pod "3f69804c-fec9-4fea-a392-b9a2a54b1155" (UID: "3f69804c-fec9-4fea-a392-b9a2a54b1155"). InnerVolumeSpecName "kube-api-access-vsk74". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:41:10 crc kubenswrapper[4789]: I0202 21:41:10.911010 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f69804c-fec9-4fea-a392-b9a2a54b1155-scripts" (OuterVolumeSpecName: "scripts") pod "3f69804c-fec9-4fea-a392-b9a2a54b1155" (UID: "3f69804c-fec9-4fea-a392-b9a2a54b1155"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:41:10 crc kubenswrapper[4789]: I0202 21:41:10.945298 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f69804c-fec9-4fea-a392-b9a2a54b1155-config-data" (OuterVolumeSpecName: "config-data") pod "3f69804c-fec9-4fea-a392-b9a2a54b1155" (UID: "3f69804c-fec9-4fea-a392-b9a2a54b1155"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:41:10 crc kubenswrapper[4789]: I0202 21:41:10.945343 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f69804c-fec9-4fea-a392-b9a2a54b1155-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f69804c-fec9-4fea-a392-b9a2a54b1155" (UID: "3f69804c-fec9-4fea-a392-b9a2a54b1155"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:41:11 crc kubenswrapper[4789]: I0202 21:41:11.006214 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f69804c-fec9-4fea-a392-b9a2a54b1155-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:41:11 crc kubenswrapper[4789]: I0202 21:41:11.006250 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f69804c-fec9-4fea-a392-b9a2a54b1155-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:41:11 crc kubenswrapper[4789]: I0202 21:41:11.006262 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f69804c-fec9-4fea-a392-b9a2a54b1155-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:41:11 crc kubenswrapper[4789]: I0202 21:41:11.006273 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsk74\" (UniqueName: \"kubernetes.io/projected/3f69804c-fec9-4fea-a392-b9a2a54b1155-kube-api-access-vsk74\") on node \"crc\" DevicePath \"\"" Feb 02 21:41:11 crc kubenswrapper[4789]: I0202 21:41:11.445363 4789 generic.go:334] "Generic (PLEG): container finished" podID="1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a" containerID="b424e43858ad870d584f7acc035252d5ea4362b1c11557f71598ce628935fffe" exitCode=0 Feb 02 21:41:11 crc kubenswrapper[4789]: I0202 21:41:11.445436 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a","Type":"ContainerDied","Data":"b424e43858ad870d584f7acc035252d5ea4362b1c11557f71598ce628935fffe"} Feb 02 21:41:11 crc kubenswrapper[4789]: I0202 21:41:11.447956 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-78r7j" event={"ID":"3f69804c-fec9-4fea-a392-b9a2a54b1155","Type":"ContainerDied","Data":"61ba275867de9c63b78621400743bd4337f08dd4f13a8f37e90cbdc9ea1cac65"} Feb 02 21:41:11 crc kubenswrapper[4789]: I0202 21:41:11.447981 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61ba275867de9c63b78621400743bd4337f08dd4f13a8f37e90cbdc9ea1cac65" Feb 02 21:41:11 crc kubenswrapper[4789]: I0202 21:41:11.448003 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-78r7j" Feb 02 21:41:11 crc kubenswrapper[4789]: I0202 21:41:11.542646 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 21:41:11 crc kubenswrapper[4789]: E0202 21:41:11.542994 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f69804c-fec9-4fea-a392-b9a2a54b1155" containerName="nova-cell0-conductor-db-sync" Feb 02 21:41:11 crc kubenswrapper[4789]: I0202 21:41:11.543009 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f69804c-fec9-4fea-a392-b9a2a54b1155" containerName="nova-cell0-conductor-db-sync" Feb 02 21:41:11 crc kubenswrapper[4789]: I0202 21:41:11.543188 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f69804c-fec9-4fea-a392-b9a2a54b1155" containerName="nova-cell0-conductor-db-sync" Feb 02 21:41:11 crc kubenswrapper[4789]: I0202 21:41:11.544085 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 02 21:41:11 crc kubenswrapper[4789]: I0202 21:41:11.552909 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 02 21:41:11 crc kubenswrapper[4789]: I0202 21:41:11.553454 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-lfwsj" Feb 02 21:41:11 crc kubenswrapper[4789]: I0202 21:41:11.574636 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 21:41:11 crc kubenswrapper[4789]: I0202 21:41:11.614453 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b\") " pod="openstack/nova-cell0-conductor-0" Feb 02 21:41:11 crc kubenswrapper[4789]: I0202 21:41:11.614503 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b\") " pod="openstack/nova-cell0-conductor-0" Feb 02 21:41:11 crc kubenswrapper[4789]: I0202 21:41:11.614896 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf6bm\" (UniqueName: \"kubernetes.io/projected/aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b-kube-api-access-mf6bm\") pod \"nova-cell0-conductor-0\" (UID: \"aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b\") " pod="openstack/nova-cell0-conductor-0" Feb 02 21:41:11 crc kubenswrapper[4789]: I0202 21:41:11.716657 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b\") " pod="openstack/nova-cell0-conductor-0" Feb 02 21:41:11 crc kubenswrapper[4789]: I0202 21:41:11.716725 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b\") " pod="openstack/nova-cell0-conductor-0" Feb 02 21:41:11 crc kubenswrapper[4789]: I0202 21:41:11.716875 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mf6bm\" (UniqueName: \"kubernetes.io/projected/aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b-kube-api-access-mf6bm\") pod \"nova-cell0-conductor-0\" (UID: \"aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b\") " pod="openstack/nova-cell0-conductor-0" Feb 02 21:41:11 crc kubenswrapper[4789]: I0202 21:41:11.721922 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b\") " pod="openstack/nova-cell0-conductor-0" Feb 02 21:41:11 crc kubenswrapper[4789]: I0202 21:41:11.722368 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b\") " pod="openstack/nova-cell0-conductor-0" Feb 02 21:41:11 crc kubenswrapper[4789]: I0202 21:41:11.744749 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf6bm\" (UniqueName: \"kubernetes.io/projected/aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b-kube-api-access-mf6bm\") pod \"nova-cell0-conductor-0\" (UID: \"aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b\") " pod="openstack/nova-cell0-conductor-0" Feb 02 21:41:11 crc kubenswrapper[4789]: I0202 21:41:11.876726 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 02 21:41:12 crc kubenswrapper[4789]: I0202 21:41:12.396859 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 21:41:12 crc kubenswrapper[4789]: I0202 21:41:12.464518 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b","Type":"ContainerStarted","Data":"dc9c9a8e812b02299bac4776912a78d9626ea7964e754bafbb13f798edd82c59"} Feb 02 21:41:13 crc kubenswrapper[4789]: I0202 21:41:13.480450 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b","Type":"ContainerStarted","Data":"dfd72fb016b8250b7d52cd2384cba2cc136043ef8ea07229e3afb0b578d3fbf4"} Feb 02 21:41:13 crc kubenswrapper[4789]: I0202 21:41:13.481117 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 02 21:41:13 crc kubenswrapper[4789]: I0202 21:41:13.515523 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.515496119 podStartE2EDuration="2.515496119s" podCreationTimestamp="2026-02-02 21:41:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:41:13.502145401 +0000 UTC m=+1293.797170460" watchObservedRunningTime="2026-02-02 21:41:13.515496119 +0000 UTC m=+1293.810521178" Feb 02 21:41:21 crc kubenswrapper[4789]: I0202 21:41:21.928912 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 02 21:41:22 crc kubenswrapper[4789]: I0202 21:41:22.570805 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-g4xxk"] Feb 02 21:41:22 crc kubenswrapper[4789]: I0202 21:41:22.572743 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-g4xxk" Feb 02 21:41:22 crc kubenswrapper[4789]: I0202 21:41:22.575321 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 02 21:41:22 crc kubenswrapper[4789]: I0202 21:41:22.576232 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 02 21:41:22 crc kubenswrapper[4789]: I0202 21:41:22.581441 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-g4xxk"] Feb 02 21:41:22 crc kubenswrapper[4789]: I0202 21:41:22.602312 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgr8j\" (UniqueName: \"kubernetes.io/projected/30f5a928-4b52-4eef-acb5-7748decc816e-kube-api-access-pgr8j\") pod \"nova-cell0-cell-mapping-g4xxk\" (UID: \"30f5a928-4b52-4eef-acb5-7748decc816e\") " pod="openstack/nova-cell0-cell-mapping-g4xxk" Feb 02 21:41:22 crc kubenswrapper[4789]: I0202 21:41:22.602377 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f5a928-4b52-4eef-acb5-7748decc816e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-g4xxk\" (UID: \"30f5a928-4b52-4eef-acb5-7748decc816e\") " pod="openstack/nova-cell0-cell-mapping-g4xxk" Feb 02 21:41:22 crc kubenswrapper[4789]: I0202 21:41:22.602443 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30f5a928-4b52-4eef-acb5-7748decc816e-scripts\") pod \"nova-cell0-cell-mapping-g4xxk\" (UID: \"30f5a928-4b52-4eef-acb5-7748decc816e\") " pod="openstack/nova-cell0-cell-mapping-g4xxk" Feb 02 21:41:22 crc kubenswrapper[4789]: I0202 21:41:22.602525 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30f5a928-4b52-4eef-acb5-7748decc816e-config-data\") pod \"nova-cell0-cell-mapping-g4xxk\" (UID: \"30f5a928-4b52-4eef-acb5-7748decc816e\") " pod="openstack/nova-cell0-cell-mapping-g4xxk" Feb 02 21:41:22 crc kubenswrapper[4789]: I0202 21:41:22.703558 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 21:41:22 crc kubenswrapper[4789]: I0202 21:41:22.704947 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 21:41:22 crc kubenswrapper[4789]: I0202 21:41:22.705890 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgr8j\" (UniqueName: \"kubernetes.io/projected/30f5a928-4b52-4eef-acb5-7748decc816e-kube-api-access-pgr8j\") pod \"nova-cell0-cell-mapping-g4xxk\" (UID: \"30f5a928-4b52-4eef-acb5-7748decc816e\") " pod="openstack/nova-cell0-cell-mapping-g4xxk" Feb 02 21:41:22 crc kubenswrapper[4789]: I0202 21:41:22.706076 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f5a928-4b52-4eef-acb5-7748decc816e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-g4xxk\" (UID: \"30f5a928-4b52-4eef-acb5-7748decc816e\") " pod="openstack/nova-cell0-cell-mapping-g4xxk" Feb 02 21:41:22 crc kubenswrapper[4789]: I0202 21:41:22.706290 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30f5a928-4b52-4eef-acb5-7748decc816e-scripts\") pod \"nova-cell0-cell-mapping-g4xxk\" (UID: \"30f5a928-4b52-4eef-acb5-7748decc816e\") " pod="openstack/nova-cell0-cell-mapping-g4xxk" Feb 02 21:41:22 crc kubenswrapper[4789]: I0202 21:41:22.706578 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30f5a928-4b52-4eef-acb5-7748decc816e-config-data\") pod \"nova-cell0-cell-mapping-g4xxk\" (UID: \"30f5a928-4b52-4eef-acb5-7748decc816e\") " pod="openstack/nova-cell0-cell-mapping-g4xxk" Feb 02 21:41:22 crc kubenswrapper[4789]: I0202 21:41:22.711729 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 02 21:41:22 crc kubenswrapper[4789]: I0202 21:41:22.722178 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f5a928-4b52-4eef-acb5-7748decc816e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-g4xxk\" (UID: \"30f5a928-4b52-4eef-acb5-7748decc816e\") " pod="openstack/nova-cell0-cell-mapping-g4xxk" Feb 02 21:41:22 crc kubenswrapper[4789]: I0202 21:41:22.722575 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30f5a928-4b52-4eef-acb5-7748decc816e-scripts\") pod \"nova-cell0-cell-mapping-g4xxk\" (UID: \"30f5a928-4b52-4eef-acb5-7748decc816e\") " pod="openstack/nova-cell0-cell-mapping-g4xxk" Feb 02 21:41:22 crc kubenswrapper[4789]: I0202 21:41:22.724287 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 21:41:22 crc kubenswrapper[4789]: I0202 21:41:22.731048 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30f5a928-4b52-4eef-acb5-7748decc816e-config-data\") pod \"nova-cell0-cell-mapping-g4xxk\" (UID: \"30f5a928-4b52-4eef-acb5-7748decc816e\") " pod="openstack/nova-cell0-cell-mapping-g4xxk" Feb 02 21:41:22 crc kubenswrapper[4789]: I0202 21:41:22.738142 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgr8j\" (UniqueName: \"kubernetes.io/projected/30f5a928-4b52-4eef-acb5-7748decc816e-kube-api-access-pgr8j\") pod \"nova-cell0-cell-mapping-g4xxk\" (UID: \"30f5a928-4b52-4eef-acb5-7748decc816e\") " pod="openstack/nova-cell0-cell-mapping-g4xxk" Feb 02 21:41:22 crc kubenswrapper[4789]: I0202 21:41:22.916908 4789 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6140fbc7-6f0e-43dd-a95c-50a4dc52c351-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6140fbc7-6f0e-43dd-a95c-50a4dc52c351\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 21:41:22 crc kubenswrapper[4789]: I0202 21:41:22.917022 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6140fbc7-6f0e-43dd-a95c-50a4dc52c351-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6140fbc7-6f0e-43dd-a95c-50a4dc52c351\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 21:41:22 crc kubenswrapper[4789]: I0202 21:41:22.917043 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb22x\" (UniqueName: \"kubernetes.io/projected/6140fbc7-6f0e-43dd-a95c-50a4dc52c351-kube-api-access-gb22x\") pod \"nova-cell1-novncproxy-0\" (UID: \"6140fbc7-6f0e-43dd-a95c-50a4dc52c351\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 21:41:22 crc kubenswrapper[4789]: I0202 21:41:22.925554 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 21:41:22 crc kubenswrapper[4789]: I0202 21:41:22.935599 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-g4xxk" Feb 02 21:41:22 crc kubenswrapper[4789]: I0202 21:41:22.936993 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 21:41:22 crc kubenswrapper[4789]: I0202 21:41:22.945952 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 21:41:22 crc kubenswrapper[4789]: I0202 21:41:22.947343 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 21:41:22 crc kubenswrapper[4789]: I0202 21:41:22.947754 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 21:41:22 crc kubenswrapper[4789]: I0202 21:41:22.953895 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:22.999673 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.018749 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6140fbc7-6f0e-43dd-a95c-50a4dc52c351-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6140fbc7-6f0e-43dd-a95c-50a4dc52c351\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.018867 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6140fbc7-6f0e-43dd-a95c-50a4dc52c351-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6140fbc7-6f0e-43dd-a95c-50a4dc52c351\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.018890 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb22x\" (UniqueName: \"kubernetes.io/projected/6140fbc7-6f0e-43dd-a95c-50a4dc52c351-kube-api-access-gb22x\") pod \"nova-cell1-novncproxy-0\" (UID: \"6140fbc7-6f0e-43dd-a95c-50a4dc52c351\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.034547 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6140fbc7-6f0e-43dd-a95c-50a4dc52c351-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6140fbc7-6f0e-43dd-a95c-50a4dc52c351\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.038148 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6140fbc7-6f0e-43dd-a95c-50a4dc52c351-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6140fbc7-6f0e-43dd-a95c-50a4dc52c351\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.073433 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.074873 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.080144 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.104201 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb22x\" (UniqueName: \"kubernetes.io/projected/6140fbc7-6f0e-43dd-a95c-50a4dc52c351-kube-api-access-gb22x\") pod \"nova-cell1-novncproxy-0\" (UID: \"6140fbc7-6f0e-43dd-a95c-50a4dc52c351\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.104266 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.106123 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.121557 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fg6n\" (UniqueName: \"kubernetes.io/projected/7fd2fb6e-7696-41f6-9108-4e931e4f85ec-kube-api-access-2fg6n\") pod \"nova-api-0\" (UID: \"7fd2fb6e-7696-41f6-9108-4e931e4f85ec\") " pod="openstack/nova-api-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.121607 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fd2fb6e-7696-41f6-9108-4e931e4f85ec-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7fd2fb6e-7696-41f6-9108-4e931e4f85ec\") " pod="openstack/nova-api-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.121625 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hv68\" (UniqueName: \"kubernetes.io/projected/ce7cd65a-eb48-4aee-98e0-838af2d90a8e-kube-api-access-4hv68\") pod \"nova-metadata-0\" (UID: \"ce7cd65a-eb48-4aee-98e0-838af2d90a8e\") " pod="openstack/nova-metadata-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.121653 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce7cd65a-eb48-4aee-98e0-838af2d90a8e-config-data\") pod \"nova-metadata-0\" (UID: \"ce7cd65a-eb48-4aee-98e0-838af2d90a8e\") " pod="openstack/nova-metadata-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.121666 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fd2fb6e-7696-41f6-9108-4e931e4f85ec-config-data\") pod \"nova-api-0\" (UID: \"7fd2fb6e-7696-41f6-9108-4e931e4f85ec\") " pod="openstack/nova-api-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.121725 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fd2fb6e-7696-41f6-9108-4e931e4f85ec-logs\") pod \"nova-api-0\" (UID: \"7fd2fb6e-7696-41f6-9108-4e931e4f85ec\") " pod="openstack/nova-api-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.121781 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce7cd65a-eb48-4aee-98e0-838af2d90a8e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ce7cd65a-eb48-4aee-98e0-838af2d90a8e\") " 
pod="openstack/nova-metadata-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.121821 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce7cd65a-eb48-4aee-98e0-838af2d90a8e-logs\") pod \"nova-metadata-0\" (UID: \"ce7cd65a-eb48-4aee-98e0-838af2d90a8e\") " pod="openstack/nova-metadata-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.146942 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.230468 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce7cd65a-eb48-4aee-98e0-838af2d90a8e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ce7cd65a-eb48-4aee-98e0-838af2d90a8e\") " pod="openstack/nova-metadata-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.230534 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2f42703-8d3d-4f46-9cdb-924e5d849c42-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d2f42703-8d3d-4f46-9cdb-924e5d849c42\") " pod="openstack/nova-scheduler-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.230558 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce7cd65a-eb48-4aee-98e0-838af2d90a8e-logs\") pod \"nova-metadata-0\" (UID: \"ce7cd65a-eb48-4aee-98e0-838af2d90a8e\") " pod="openstack/nova-metadata-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.230581 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fg6n\" (UniqueName: \"kubernetes.io/projected/7fd2fb6e-7696-41f6-9108-4e931e4f85ec-kube-api-access-2fg6n\") pod \"nova-api-0\" (UID: \"7fd2fb6e-7696-41f6-9108-4e931e4f85ec\") " pod="openstack/nova-api-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.230603 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fd2fb6e-7696-41f6-9108-4e931e4f85ec-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7fd2fb6e-7696-41f6-9108-4e931e4f85ec\") " pod="openstack/nova-api-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.231367 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hv68\" (UniqueName: \"kubernetes.io/projected/ce7cd65a-eb48-4aee-98e0-838af2d90a8e-kube-api-access-4hv68\") pod \"nova-metadata-0\" (UID: \"ce7cd65a-eb48-4aee-98e0-838af2d90a8e\") " pod="openstack/nova-metadata-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.231404 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce7cd65a-eb48-4aee-98e0-838af2d90a8e-config-data\") pod \"nova-metadata-0\" (UID: \"ce7cd65a-eb48-4aee-98e0-838af2d90a8e\") " pod="openstack/nova-metadata-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.231419 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fd2fb6e-7696-41f6-9108-4e931e4f85ec-config-data\") pod \"nova-api-0\" (UID: \"7fd2fb6e-7696-41f6-9108-4e931e4f85ec\") " pod="openstack/nova-api-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.231488 4789 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2f42703-8d3d-4f46-9cdb-924e5d849c42-config-data\") pod \"nova-scheduler-0\" (UID: \"d2f42703-8d3d-4f46-9cdb-924e5d849c42\") " pod="openstack/nova-scheduler-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.231516 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fd2fb6e-7696-41f6-9108-4e931e4f85ec-logs\") pod \"nova-api-0\" (UID: \"7fd2fb6e-7696-41f6-9108-4e931e4f85ec\") " pod="openstack/nova-api-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.231555 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg9jl\" (UniqueName: \"kubernetes.io/projected/d2f42703-8d3d-4f46-9cdb-924e5d849c42-kube-api-access-jg9jl\") pod \"nova-scheduler-0\" (UID: \"d2f42703-8d3d-4f46-9cdb-924e5d849c42\") " pod="openstack/nova-scheduler-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.231871 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce7cd65a-eb48-4aee-98e0-838af2d90a8e-logs\") pod \"nova-metadata-0\" (UID: \"ce7cd65a-eb48-4aee-98e0-838af2d90a8e\") " pod="openstack/nova-metadata-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.241091 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fd2fb6e-7696-41f6-9108-4e931e4f85ec-logs\") pod \"nova-api-0\" (UID: \"7fd2fb6e-7696-41f6-9108-4e931e4f85ec\") " pod="openstack/nova-api-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.245741 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fd2fb6e-7696-41f6-9108-4e931e4f85ec-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7fd2fb6e-7696-41f6-9108-4e931e4f85ec\") " pod="openstack/nova-api-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.246185 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce7cd65a-eb48-4aee-98e0-838af2d90a8e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ce7cd65a-eb48-4aee-98e0-838af2d90a8e\") " pod="openstack/nova-metadata-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.253906 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce7cd65a-eb48-4aee-98e0-838af2d90a8e-config-data\") pod \"nova-metadata-0\" (UID: \"ce7cd65a-eb48-4aee-98e0-838af2d90a8e\") " pod="openstack/nova-metadata-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.259234 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fd2fb6e-7696-41f6-9108-4e931e4f85ec-config-data\") pod \"nova-api-0\" (UID: \"7fd2fb6e-7696-41f6-9108-4e931e4f85ec\") " pod="openstack/nova-api-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.259775 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fg6n\" (UniqueName: \"kubernetes.io/projected/7fd2fb6e-7696-41f6-9108-4e931e4f85ec-kube-api-access-2fg6n\") pod \"nova-api-0\" (UID: \"7fd2fb6e-7696-41f6-9108-4e931e4f85ec\") " pod="openstack/nova-api-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.276099 4789 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-757b4f8459-c7r2c"] Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.277439 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-c7r2c" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.278218 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hv68\" (UniqueName: \"kubernetes.io/projected/ce7cd65a-eb48-4aee-98e0-838af2d90a8e-kube-api-access-4hv68\") pod \"nova-metadata-0\" (UID: \"ce7cd65a-eb48-4aee-98e0-838af2d90a8e\") " pod="openstack/nova-metadata-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.347036 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg9jl\" (UniqueName: \"kubernetes.io/projected/d2f42703-8d3d-4f46-9cdb-924e5d849c42-kube-api-access-jg9jl\") pod \"nova-scheduler-0\" (UID: \"d2f42703-8d3d-4f46-9cdb-924e5d849c42\") " pod="openstack/nova-scheduler-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.347508 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2f42703-8d3d-4f46-9cdb-924e5d849c42-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d2f42703-8d3d-4f46-9cdb-924e5d849c42\") " pod="openstack/nova-scheduler-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.347793 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2f42703-8d3d-4f46-9cdb-924e5d849c42-config-data\") pod \"nova-scheduler-0\" (UID: \"d2f42703-8d3d-4f46-9cdb-924e5d849c42\") " pod="openstack/nova-scheduler-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.353744 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2f42703-8d3d-4f46-9cdb-924e5d849c42-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d2f42703-8d3d-4f46-9cdb-924e5d849c42\") " pod="openstack/nova-scheduler-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.360461 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-c7r2c"] Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.355454 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2f42703-8d3d-4f46-9cdb-924e5d849c42-config-data\") pod \"nova-scheduler-0\" (UID: \"d2f42703-8d3d-4f46-9cdb-924e5d849c42\") " pod="openstack/nova-scheduler-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.378015 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg9jl\" (UniqueName: \"kubernetes.io/projected/d2f42703-8d3d-4f46-9cdb-924e5d849c42-kube-api-access-jg9jl\") pod \"nova-scheduler-0\" (UID: \"d2f42703-8d3d-4f46-9cdb-924e5d849c42\") " pod="openstack/nova-scheduler-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.402289 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.421783 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.449921 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b8f57c8-9467-4545-89f7-bbda22025d26-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-c7r2c\" (UID: \"6b8f57c8-9467-4545-89f7-bbda22025d26\") " pod="openstack/dnsmasq-dns-757b4f8459-c7r2c" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.450087 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf6j2\" (UniqueName: \"kubernetes.io/projected/6b8f57c8-9467-4545-89f7-bbda22025d26-kube-api-access-jf6j2\") pod \"dnsmasq-dns-757b4f8459-c7r2c\" (UID: \"6b8f57c8-9467-4545-89f7-bbda22025d26\") " pod="openstack/dnsmasq-dns-757b4f8459-c7r2c" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.450119 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b8f57c8-9467-4545-89f7-bbda22025d26-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-c7r2c\" (UID: \"6b8f57c8-9467-4545-89f7-bbda22025d26\") " pod="openstack/dnsmasq-dns-757b4f8459-c7r2c" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.450203 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b8f57c8-9467-4545-89f7-bbda22025d26-config\") pod \"dnsmasq-dns-757b4f8459-c7r2c\" (UID: \"6b8f57c8-9467-4545-89f7-bbda22025d26\") " pod="openstack/dnsmasq-dns-757b4f8459-c7r2c" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.450237 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6b8f57c8-9467-4545-89f7-bbda22025d26-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-c7r2c\" (UID: \"6b8f57c8-9467-4545-89f7-bbda22025d26\") " pod="openstack/dnsmasq-dns-757b4f8459-c7r2c" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.450337 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b8f57c8-9467-4545-89f7-bbda22025d26-dns-svc\") pod \"dnsmasq-dns-757b4f8459-c7r2c\" (UID: \"6b8f57c8-9467-4545-89f7-bbda22025d26\") " pod="openstack/dnsmasq-dns-757b4f8459-c7r2c" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.488240 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.553205 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf6j2\" (UniqueName: \"kubernetes.io/projected/6b8f57c8-9467-4545-89f7-bbda22025d26-kube-api-access-jf6j2\") pod \"dnsmasq-dns-757b4f8459-c7r2c\" (UID: \"6b8f57c8-9467-4545-89f7-bbda22025d26\") " pod="openstack/dnsmasq-dns-757b4f8459-c7r2c" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.553261 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b8f57c8-9467-4545-89f7-bbda22025d26-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-c7r2c\" (UID: \"6b8f57c8-9467-4545-89f7-bbda22025d26\") " pod="openstack/dnsmasq-dns-757b4f8459-c7r2c" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.553342 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b8f57c8-9467-4545-89f7-bbda22025d26-config\") pod \"dnsmasq-dns-757b4f8459-c7r2c\" (UID: \"6b8f57c8-9467-4545-89f7-bbda22025d26\") " pod="openstack/dnsmasq-dns-757b4f8459-c7r2c" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.553370 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6b8f57c8-9467-4545-89f7-bbda22025d26-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-c7r2c\" (UID: \"6b8f57c8-9467-4545-89f7-bbda22025d26\") " pod="openstack/dnsmasq-dns-757b4f8459-c7r2c" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.553396 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b8f57c8-9467-4545-89f7-bbda22025d26-dns-svc\") pod \"dnsmasq-dns-757b4f8459-c7r2c\" (UID: \"6b8f57c8-9467-4545-89f7-bbda22025d26\") " pod="openstack/dnsmasq-dns-757b4f8459-c7r2c" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.553457 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b8f57c8-9467-4545-89f7-bbda22025d26-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-c7r2c\" (UID: \"6b8f57c8-9467-4545-89f7-bbda22025d26\") " pod="openstack/dnsmasq-dns-757b4f8459-c7r2c" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.554370 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b8f57c8-9467-4545-89f7-bbda22025d26-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-c7r2c\" (UID: \"6b8f57c8-9467-4545-89f7-bbda22025d26\") " pod="openstack/dnsmasq-dns-757b4f8459-c7r2c" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.554408 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b8f57c8-9467-4545-89f7-bbda22025d26-config\") pod \"dnsmasq-dns-757b4f8459-c7r2c\" (UID: \"6b8f57c8-9467-4545-89f7-bbda22025d26\") " pod="openstack/dnsmasq-dns-757b4f8459-c7r2c" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.558381 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b8f57c8-9467-4545-89f7-bbda22025d26-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-c7r2c\" (UID: \"6b8f57c8-9467-4545-89f7-bbda22025d26\") " pod="openstack/dnsmasq-dns-757b4f8459-c7r2c" Feb 02 21:41:23 crc kubenswrapper[4789]: 
I0202 21:41:23.558410 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b8f57c8-9467-4545-89f7-bbda22025d26-dns-svc\") pod \"dnsmasq-dns-757b4f8459-c7r2c\" (UID: \"6b8f57c8-9467-4545-89f7-bbda22025d26\") " pod="openstack/dnsmasq-dns-757b4f8459-c7r2c" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.559516 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6b8f57c8-9467-4545-89f7-bbda22025d26-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-c7r2c\" (UID: \"6b8f57c8-9467-4545-89f7-bbda22025d26\") " pod="openstack/dnsmasq-dns-757b4f8459-c7r2c" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.574247 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf6j2\" (UniqueName: \"kubernetes.io/projected/6b8f57c8-9467-4545-89f7-bbda22025d26-kube-api-access-jf6j2\") pod \"dnsmasq-dns-757b4f8459-c7r2c\" (UID: \"6b8f57c8-9467-4545-89f7-bbda22025d26\") " pod="openstack/dnsmasq-dns-757b4f8459-c7r2c" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.645125 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-c7r2c" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.694793 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-g4xxk"] Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.781188 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.897212 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-2xz72"] Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.898827 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-2xz72" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.901379 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.901693 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.908587 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-2xz72"] Feb 02 21:41:23 crc kubenswrapper[4789]: I0202 21:41:23.957145 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 21:41:24 crc kubenswrapper[4789]: I0202 21:41:24.063865 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdjjv\" (UniqueName: \"kubernetes.io/projected/db3bfd1d-3fb9-4406-bd26-7b58e943e963-kube-api-access-xdjjv\") pod \"nova-cell1-conductor-db-sync-2xz72\" (UID: \"db3bfd1d-3fb9-4406-bd26-7b58e943e963\") " pod="openstack/nova-cell1-conductor-db-sync-2xz72" Feb 02 21:41:24 crc kubenswrapper[4789]: I0202 21:41:24.063995 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db3bfd1d-3fb9-4406-bd26-7b58e943e963-scripts\") pod \"nova-cell1-conductor-db-sync-2xz72\" (UID: \"db3bfd1d-3fb9-4406-bd26-7b58e943e963\") " pod="openstack/nova-cell1-conductor-db-sync-2xz72" Feb 02 21:41:24 crc kubenswrapper[4789]: I0202 21:41:24.064024 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db3bfd1d-3fb9-4406-bd26-7b58e943e963-config-data\") pod \"nova-cell1-conductor-db-sync-2xz72\" (UID: \"db3bfd1d-3fb9-4406-bd26-7b58e943e963\") " pod="openstack/nova-cell1-conductor-db-sync-2xz72" Feb 02 21:41:24 crc kubenswrapper[4789]: I0202 21:41:24.064052 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db3bfd1d-3fb9-4406-bd26-7b58e943e963-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-2xz72\" (UID: \"db3bfd1d-3fb9-4406-bd26-7b58e943e963\") " pod="openstack/nova-cell1-conductor-db-sync-2xz72" Feb 02 21:41:24 crc kubenswrapper[4789]: I0202 21:41:24.098150 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 21:41:24 crc kubenswrapper[4789]: I0202 21:41:24.104090 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 21:41:24 crc kubenswrapper[4789]: W0202 21:41:24.114822 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce7cd65a_eb48_4aee_98e0_838af2d90a8e.slice/crio-1f03183c33f39d0a106d6926d8d4af22a69f49162d026a018fe1b49712769715 WatchSource:0}: Error finding container 1f03183c33f39d0a106d6926d8d4af22a69f49162d026a018fe1b49712769715: Status 404 returned error can't find the container with id 1f03183c33f39d0a106d6926d8d4af22a69f49162d026a018fe1b49712769715 Feb 02 21:41:24 crc kubenswrapper[4789]: I0202 21:41:24.165895 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdjjv\" (UniqueName: \"kubernetes.io/projected/db3bfd1d-3fb9-4406-bd26-7b58e943e963-kube-api-access-xdjjv\") pod 
\"nova-cell1-conductor-db-sync-2xz72\" (UID: \"db3bfd1d-3fb9-4406-bd26-7b58e943e963\") " pod="openstack/nova-cell1-conductor-db-sync-2xz72" Feb 02 21:41:24 crc kubenswrapper[4789]: I0202 21:41:24.165994 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db3bfd1d-3fb9-4406-bd26-7b58e943e963-config-data\") pod \"nova-cell1-conductor-db-sync-2xz72\" (UID: \"db3bfd1d-3fb9-4406-bd26-7b58e943e963\") " pod="openstack/nova-cell1-conductor-db-sync-2xz72" Feb 02 21:41:24 crc kubenswrapper[4789]: I0202 21:41:24.166014 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db3bfd1d-3fb9-4406-bd26-7b58e943e963-scripts\") pod \"nova-cell1-conductor-db-sync-2xz72\" (UID: \"db3bfd1d-3fb9-4406-bd26-7b58e943e963\") " pod="openstack/nova-cell1-conductor-db-sync-2xz72" Feb 02 21:41:24 crc kubenswrapper[4789]: I0202 21:41:24.166032 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db3bfd1d-3fb9-4406-bd26-7b58e943e963-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-2xz72\" (UID: \"db3bfd1d-3fb9-4406-bd26-7b58e943e963\") " pod="openstack/nova-cell1-conductor-db-sync-2xz72" Feb 02 21:41:24 crc kubenswrapper[4789]: I0202 21:41:24.172262 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db3bfd1d-3fb9-4406-bd26-7b58e943e963-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-2xz72\" (UID: \"db3bfd1d-3fb9-4406-bd26-7b58e943e963\") " pod="openstack/nova-cell1-conductor-db-sync-2xz72" Feb 02 21:41:24 crc kubenswrapper[4789]: I0202 21:41:24.172481 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db3bfd1d-3fb9-4406-bd26-7b58e943e963-config-data\") pod \"nova-cell1-conductor-db-sync-2xz72\" (UID: \"db3bfd1d-3fb9-4406-bd26-7b58e943e963\") " pod="openstack/nova-cell1-conductor-db-sync-2xz72" Feb 02 21:41:24 crc kubenswrapper[4789]: I0202 21:41:24.173058 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db3bfd1d-3fb9-4406-bd26-7b58e943e963-scripts\") pod \"nova-cell1-conductor-db-sync-2xz72\" (UID: \"db3bfd1d-3fb9-4406-bd26-7b58e943e963\") " pod="openstack/nova-cell1-conductor-db-sync-2xz72" Feb 02 21:41:24 crc kubenswrapper[4789]: I0202 21:41:24.184356 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdjjv\" (UniqueName: \"kubernetes.io/projected/db3bfd1d-3fb9-4406-bd26-7b58e943e963-kube-api-access-xdjjv\") pod \"nova-cell1-conductor-db-sync-2xz72\" (UID: \"db3bfd1d-3fb9-4406-bd26-7b58e943e963\") " pod="openstack/nova-cell1-conductor-db-sync-2xz72" Feb 02 21:41:24 crc kubenswrapper[4789]: I0202 21:41:24.287740 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-c7r2c"] Feb 02 21:41:24 crc kubenswrapper[4789]: W0202 21:41:24.292821 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b8f57c8_9467_4545_89f7_bbda22025d26.slice/crio-0c2c7c161d7ae39b46d88b379c4d9802e3bae9f6fc8459b38df4b80491841591 WatchSource:0}: Error finding container 0c2c7c161d7ae39b46d88b379c4d9802e3bae9f6fc8459b38df4b80491841591: Status 404 returned error can't find the container with id 
Feb 02 21:41:24 crc kubenswrapper[4789]: I0202 21:41:24.314516 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-2xz72"
Feb 02 21:41:24 crc kubenswrapper[4789]: I0202 21:41:24.625302 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6140fbc7-6f0e-43dd-a95c-50a4dc52c351","Type":"ContainerStarted","Data":"46d4cb2b55cc78c303e679b0cac70ca3019da98e8727eecfa2ed96a76009aa59"}
Feb 02 21:41:24 crc kubenswrapper[4789]: I0202 21:41:24.627908 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7fd2fb6e-7696-41f6-9108-4e931e4f85ec","Type":"ContainerStarted","Data":"03486b680685e97eb11e13a66f623e6759c42bda1737b006e13fb2bc109f3a77"}
Feb 02 21:41:24 crc kubenswrapper[4789]: I0202 21:41:24.631382 4789 generic.go:334] "Generic (PLEG): container finished" podID="6b8f57c8-9467-4545-89f7-bbda22025d26" containerID="769de9c8bf165e35289b8a68dea9fb07d75ccb9362227f3a94b1889434eb4ece" exitCode=0
Feb 02 21:41:24 crc kubenswrapper[4789]: I0202 21:41:24.631455 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-c7r2c" event={"ID":"6b8f57c8-9467-4545-89f7-bbda22025d26","Type":"ContainerDied","Data":"769de9c8bf165e35289b8a68dea9fb07d75ccb9362227f3a94b1889434eb4ece"}
Feb 02 21:41:24 crc kubenswrapper[4789]: I0202 21:41:24.631519 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-c7r2c" event={"ID":"6b8f57c8-9467-4545-89f7-bbda22025d26","Type":"ContainerStarted","Data":"0c2c7c161d7ae39b46d88b379c4d9802e3bae9f6fc8459b38df4b80491841591"}
Feb 02 21:41:24 crc kubenswrapper[4789]: I0202 21:41:24.638876 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ce7cd65a-eb48-4aee-98e0-838af2d90a8e","Type":"ContainerStarted","Data":"1f03183c33f39d0a106d6926d8d4af22a69f49162d026a018fe1b49712769715"}
Feb 02 21:41:24 crc kubenswrapper[4789]: I0202 21:41:24.642458 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-g4xxk" event={"ID":"30f5a928-4b52-4eef-acb5-7748decc816e","Type":"ContainerStarted","Data":"1f900f4e81cf4a1185756414a43858602853f385c3d95d439c33014b387e9ccf"}
Feb 02 21:41:24 crc kubenswrapper[4789]: I0202 21:41:24.642503 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-g4xxk" event={"ID":"30f5a928-4b52-4eef-acb5-7748decc816e","Type":"ContainerStarted","Data":"c8eaeac1ad9f74fb7516b8e740a606f4498637ca8ca80a4f9095d9373e286171"}
Feb 02 21:41:24 crc kubenswrapper[4789]: I0202 21:41:24.645040 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d2f42703-8d3d-4f46-9cdb-924e5d849c42","Type":"ContainerStarted","Data":"f4028b935cbfdbadd501e4967b4651780d7d9a895ede4a56bca034fb3e825f25"}
Feb 02 21:41:24 crc kubenswrapper[4789]: I0202 21:41:24.671691 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-g4xxk" podStartSLOduration=2.6716766830000003 podStartE2EDuration="2.671676683s" podCreationTimestamp="2026-02-02 21:41:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:41:24.667201086 +0000 UTC m=+1304.962226115" watchObservedRunningTime="2026-02-02 21:41:24.671676683 +0000 UTC m=+1304.966701702"
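The "Generic (PLEG)" and "SyncLoop (PLEG)" lines above come from the pod lifecycle event generator, which periodically relists containers from the runtime and diffs the snapshot against the previous one, emitting one ContainerStarted or ContainerDied event per observed transition. A minimal sketch of that diff, with hypothetical types rather than the real PLEG implementation:

package main

import "fmt"

type state string

const (
	running state = "running"
	exited  state = "exited"
)

type event struct {
	podID, containerID, kind string
}

// diff compares the previous and current runtime snapshots and emits one
// event per observed container transition.
func diff(old, cur map[string]state, podID string) []event {
	var events []event
	for id, s := range cur {
		switch {
		case old[id] != running && s == running:
			events = append(events, event{podID, id, "ContainerStarted"})
		case old[id] == running && s == exited:
			events = append(events, event{podID, id, "ContainerDied"})
		}
	}
	return events
}

func main() {
	// Mirrors the dnsmasq pod above: the init container exits, the sandbox runs.
	old := map[string]state{"769de9c8": running}
	cur := map[string]state{"769de9c8": exited, "0c2c7c16": running}
	for _, e := range diff(old, cur, "openstack/dnsmasq-dns-757b4f8459-c7r2c") {
		fmt.Printf("SyncLoop (PLEG): %s %s/%s\n", e.kind, e.podID, e.containerID)
	}
}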
m=+1304.966701702" Feb 02 21:41:24 crc kubenswrapper[4789]: I0202 21:41:24.780893 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-2xz72"] Feb 02 21:41:25 crc kubenswrapper[4789]: I0202 21:41:25.658517 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-2xz72" event={"ID":"db3bfd1d-3fb9-4406-bd26-7b58e943e963","Type":"ContainerStarted","Data":"e085656982d1a651607fc032070499319d528e8519c270e5e4395a2dd48ea137"} Feb 02 21:41:25 crc kubenswrapper[4789]: I0202 21:41:25.658917 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-2xz72" event={"ID":"db3bfd1d-3fb9-4406-bd26-7b58e943e963","Type":"ContainerStarted","Data":"a775d6867909595db7e1903b8f5f4eb09355cddf02fcbe4cfe5b9a8e7a4d526c"} Feb 02 21:41:25 crc kubenswrapper[4789]: I0202 21:41:25.664158 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-c7r2c" event={"ID":"6b8f57c8-9467-4545-89f7-bbda22025d26","Type":"ContainerStarted","Data":"f71ea990fd211f90f73858215cc4f96f5720f05aa94fd89824ef0789717946ca"} Feb 02 21:41:25 crc kubenswrapper[4789]: I0202 21:41:25.664252 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-c7r2c" Feb 02 21:41:25 crc kubenswrapper[4789]: I0202 21:41:25.693403 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-2xz72" podStartSLOduration=2.6933806689999997 podStartE2EDuration="2.693380669s" podCreationTimestamp="2026-02-02 21:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:41:25.673501107 +0000 UTC m=+1305.968526146" watchObservedRunningTime="2026-02-02 21:41:25.693380669 +0000 UTC m=+1305.988405688" Feb 02 21:41:26 crc kubenswrapper[4789]: I0202 21:41:26.582843 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-c7r2c" podStartSLOduration=3.582818954 podStartE2EDuration="3.582818954s" podCreationTimestamp="2026-02-02 21:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:41:25.701753566 +0000 UTC m=+1305.996778595" watchObservedRunningTime="2026-02-02 21:41:26.582818954 +0000 UTC m=+1306.877843983" Feb 02 21:41:26 crc kubenswrapper[4789]: I0202 21:41:26.590147 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 21:41:26 crc kubenswrapper[4789]: I0202 21:41:26.601644 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 21:41:27 crc kubenswrapper[4789]: I0202 21:41:27.689996 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ce7cd65a-eb48-4aee-98e0-838af2d90a8e","Type":"ContainerStarted","Data":"8771e234cc9ab5eeb3863ed774dc6a6fb4a0426ad3b13c4f03cbd5b0c9f0c139"} Feb 02 21:41:27 crc kubenswrapper[4789]: I0202 21:41:27.690364 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ce7cd65a-eb48-4aee-98e0-838af2d90a8e","Type":"ContainerStarted","Data":"373e56e05c3a89befa7bee8d06ae512690dbbd42aa1e35bc8866500cbe0a88c8"} Feb 02 21:41:27 crc kubenswrapper[4789]: I0202 21:41:27.690179 4789 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-metadata-0" podUID="ce7cd65a-eb48-4aee-98e0-838af2d90a8e" containerName="nova-metadata-metadata" containerID="cri-o://8771e234cc9ab5eeb3863ed774dc6a6fb4a0426ad3b13c4f03cbd5b0c9f0c139" gracePeriod=30 Feb 02 21:41:27 crc kubenswrapper[4789]: I0202 21:41:27.690061 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ce7cd65a-eb48-4aee-98e0-838af2d90a8e" containerName="nova-metadata-log" containerID="cri-o://373e56e05c3a89befa7bee8d06ae512690dbbd42aa1e35bc8866500cbe0a88c8" gracePeriod=30 Feb 02 21:41:27 crc kubenswrapper[4789]: I0202 21:41:27.704067 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d2f42703-8d3d-4f46-9cdb-924e5d849c42","Type":"ContainerStarted","Data":"c6adcda4d30dd8d80348de151844b0e666f7e98189122d438d68436e5a5ba717"} Feb 02 21:41:27 crc kubenswrapper[4789]: I0202 21:41:27.709049 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6140fbc7-6f0e-43dd-a95c-50a4dc52c351","Type":"ContainerStarted","Data":"d6f2324a46251235fd022eec6e2e8619f37bc7e11c247719a5f7a99e3fc575fd"} Feb 02 21:41:27 crc kubenswrapper[4789]: I0202 21:41:27.709207 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="6140fbc7-6f0e-43dd-a95c-50a4dc52c351" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://d6f2324a46251235fd022eec6e2e8619f37bc7e11c247719a5f7a99e3fc575fd" gracePeriod=30 Feb 02 21:41:27 crc kubenswrapper[4789]: I0202 21:41:27.714815 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7fd2fb6e-7696-41f6-9108-4e931e4f85ec","Type":"ContainerStarted","Data":"f1fefd77b479e6c16b32102559688054ee9aeae02a12dcd6ade72c915582d3ca"} Feb 02 21:41:27 crc kubenswrapper[4789]: I0202 21:41:27.715039 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7fd2fb6e-7696-41f6-9108-4e931e4f85ec","Type":"ContainerStarted","Data":"4702917ff0138df6eadd26c7899c76d739c77baac82a5b4ae2d09c3b400326ec"} Feb 02 21:41:27 crc kubenswrapper[4789]: I0202 21:41:27.729224 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.202343857 podStartE2EDuration="5.729203018s" podCreationTimestamp="2026-02-02 21:41:22 +0000 UTC" firstStartedPulling="2026-02-02 21:41:24.11770487 +0000 UTC m=+1304.412729879" lastFinishedPulling="2026-02-02 21:41:26.644564021 +0000 UTC m=+1306.939589040" observedRunningTime="2026-02-02 21:41:27.714027449 +0000 UTC m=+1308.009052468" watchObservedRunningTime="2026-02-02 21:41:27.729203018 +0000 UTC m=+1308.024228037" Feb 02 21:41:27 crc kubenswrapper[4789]: I0202 21:41:27.738731 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.214720358 podStartE2EDuration="5.738709887s" podCreationTimestamp="2026-02-02 21:41:22 +0000 UTC" firstStartedPulling="2026-02-02 21:41:24.116592699 +0000 UTC m=+1304.411617718" lastFinishedPulling="2026-02-02 21:41:26.640582228 +0000 UTC m=+1306.935607247" observedRunningTime="2026-02-02 21:41:27.738253694 +0000 UTC m=+1308.033278713" watchObservedRunningTime="2026-02-02 21:41:27.738709887 +0000 UTC m=+1308.033734916" Feb 02 21:41:27 crc kubenswrapper[4789]: I0202 21:41:27.774557 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" 
Feb 02 21:41:27 crc kubenswrapper[4789]: I0202 21:41:27.779661 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.085814691 podStartE2EDuration="5.779648796s" podCreationTimestamp="2026-02-02 21:41:22 +0000 UTC" firstStartedPulling="2026-02-02 21:41:23.948286737 +0000 UTC m=+1304.243311756" lastFinishedPulling="2026-02-02 21:41:26.642120842 +0000 UTC m=+1306.937145861" observedRunningTime="2026-02-02 21:41:27.767246635 +0000 UTC m=+1308.062271654" watchObservedRunningTime="2026-02-02 21:41:27.779648796 +0000 UTC m=+1308.074673815"
Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.106749 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.280711 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.360223 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce7cd65a-eb48-4aee-98e0-838af2d90a8e-logs\") pod \"ce7cd65a-eb48-4aee-98e0-838af2d90a8e\" (UID: \"ce7cd65a-eb48-4aee-98e0-838af2d90a8e\") "
Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.360368 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hv68\" (UniqueName: \"kubernetes.io/projected/ce7cd65a-eb48-4aee-98e0-838af2d90a8e-kube-api-access-4hv68\") pod \"ce7cd65a-eb48-4aee-98e0-838af2d90a8e\" (UID: \"ce7cd65a-eb48-4aee-98e0-838af2d90a8e\") "
Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.360410 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce7cd65a-eb48-4aee-98e0-838af2d90a8e-config-data\") pod \"ce7cd65a-eb48-4aee-98e0-838af2d90a8e\" (UID: \"ce7cd65a-eb48-4aee-98e0-838af2d90a8e\") "
Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.360465 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce7cd65a-eb48-4aee-98e0-838af2d90a8e-combined-ca-bundle\") pod \"ce7cd65a-eb48-4aee-98e0-838af2d90a8e\" (UID: \"ce7cd65a-eb48-4aee-98e0-838af2d90a8e\") "
Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.361835 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce7cd65a-eb48-4aee-98e0-838af2d90a8e-logs" (OuterVolumeSpecName: "logs") pod "ce7cd65a-eb48-4aee-98e0-838af2d90a8e" (UID: "ce7cd65a-eb48-4aee-98e0-838af2d90a8e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.365977 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce7cd65a-eb48-4aee-98e0-838af2d90a8e-kube-api-access-4hv68" (OuterVolumeSpecName: "kube-api-access-4hv68") pod "ce7cd65a-eb48-4aee-98e0-838af2d90a8e" (UID: "ce7cd65a-eb48-4aee-98e0-838af2d90a8e"). InnerVolumeSpecName "kube-api-access-4hv68". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.386836 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce7cd65a-eb48-4aee-98e0-838af2d90a8e-config-data" (OuterVolumeSpecName: "config-data") pod "ce7cd65a-eb48-4aee-98e0-838af2d90a8e" (UID: "ce7cd65a-eb48-4aee-98e0-838af2d90a8e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.393151 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce7cd65a-eb48-4aee-98e0-838af2d90a8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce7cd65a-eb48-4aee-98e0-838af2d90a8e" (UID: "ce7cd65a-eb48-4aee-98e0-838af2d90a8e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.463608 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce7cd65a-eb48-4aee-98e0-838af2d90a8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.463642 4789 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce7cd65a-eb48-4aee-98e0-838af2d90a8e-logs\") on node \"crc\" DevicePath \"\""
Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.463654 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hv68\" (UniqueName: \"kubernetes.io/projected/ce7cd65a-eb48-4aee-98e0-838af2d90a8e-kube-api-access-4hv68\") on node \"crc\" DevicePath \"\""
Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.463664 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce7cd65a-eb48-4aee-98e0-838af2d90a8e-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.489270 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.735805 4789 generic.go:334] "Generic (PLEG): container finished" podID="ce7cd65a-eb48-4aee-98e0-838af2d90a8e" containerID="8771e234cc9ab5eeb3863ed774dc6a6fb4a0426ad3b13c4f03cbd5b0c9f0c139" exitCode=0
Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.735843 4789 generic.go:334] "Generic (PLEG): container finished" podID="ce7cd65a-eb48-4aee-98e0-838af2d90a8e" containerID="373e56e05c3a89befa7bee8d06ae512690dbbd42aa1e35bc8866500cbe0a88c8" exitCode=143
Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.736867 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.737395 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ce7cd65a-eb48-4aee-98e0-838af2d90a8e","Type":"ContainerDied","Data":"8771e234cc9ab5eeb3863ed774dc6a6fb4a0426ad3b13c4f03cbd5b0c9f0c139"}
Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.737428 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ce7cd65a-eb48-4aee-98e0-838af2d90a8e","Type":"ContainerDied","Data":"373e56e05c3a89befa7bee8d06ae512690dbbd42aa1e35bc8866500cbe0a88c8"}
Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.737439 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ce7cd65a-eb48-4aee-98e0-838af2d90a8e","Type":"ContainerDied","Data":"1f03183c33f39d0a106d6926d8d4af22a69f49162d026a018fe1b49712769715"}
Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.737472 4789 scope.go:117] "RemoveContainer" containerID="8771e234cc9ab5eeb3863ed774dc6a6fb4a0426ad3b13c4f03cbd5b0c9f0c139"
Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.774158 4789 scope.go:117] "RemoveContainer" containerID="373e56e05c3a89befa7bee8d06ae512690dbbd42aa1e35bc8866500cbe0a88c8"
Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.786881 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.800662 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.800829 4789 scope.go:117] "RemoveContainer" containerID="8771e234cc9ab5eeb3863ed774dc6a6fb4a0426ad3b13c4f03cbd5b0c9f0c139"
Feb 02 21:41:28 crc kubenswrapper[4789]: E0202 21:41:28.804133 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8771e234cc9ab5eeb3863ed774dc6a6fb4a0426ad3b13c4f03cbd5b0c9f0c139\": container with ID starting with 8771e234cc9ab5eeb3863ed774dc6a6fb4a0426ad3b13c4f03cbd5b0c9f0c139 not found: ID does not exist" containerID="8771e234cc9ab5eeb3863ed774dc6a6fb4a0426ad3b13c4f03cbd5b0c9f0c139"
Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.804174 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8771e234cc9ab5eeb3863ed774dc6a6fb4a0426ad3b13c4f03cbd5b0c9f0c139"} err="failed to get container status \"8771e234cc9ab5eeb3863ed774dc6a6fb4a0426ad3b13c4f03cbd5b0c9f0c139\": rpc error: code = NotFound desc = could not find container \"8771e234cc9ab5eeb3863ed774dc6a6fb4a0426ad3b13c4f03cbd5b0c9f0c139\": container with ID starting with 8771e234cc9ab5eeb3863ed774dc6a6fb4a0426ad3b13c4f03cbd5b0c9f0c139 not found: ID does not exist"
Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.804199 4789 scope.go:117] "RemoveContainer" containerID="373e56e05c3a89befa7bee8d06ae512690dbbd42aa1e35bc8866500cbe0a88c8"
Feb 02 21:41:28 crc kubenswrapper[4789]: E0202 21:41:28.806003 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"373e56e05c3a89befa7bee8d06ae512690dbbd42aa1e35bc8866500cbe0a88c8\": container with ID starting with 373e56e05c3a89befa7bee8d06ae512690dbbd42aa1e35bc8866500cbe0a88c8 not found: ID does not exist" containerID="373e56e05c3a89befa7bee8d06ae512690dbbd42aa1e35bc8866500cbe0a88c8"
Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.806042 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"373e56e05c3a89befa7bee8d06ae512690dbbd42aa1e35bc8866500cbe0a88c8"} err="failed to get container status \"373e56e05c3a89befa7bee8d06ae512690dbbd42aa1e35bc8866500cbe0a88c8\": rpc error: code = NotFound desc = could not find container \"373e56e05c3a89befa7bee8d06ae512690dbbd42aa1e35bc8866500cbe0a88c8\": container with ID starting with 373e56e05c3a89befa7bee8d06ae512690dbbd42aa1e35bc8866500cbe0a88c8 not found: ID does not exist"
Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.806067 4789 scope.go:117] "RemoveContainer" containerID="8771e234cc9ab5eeb3863ed774dc6a6fb4a0426ad3b13c4f03cbd5b0c9f0c139"
Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.811462 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 02 21:41:28 crc kubenswrapper[4789]: E0202 21:41:28.812030 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce7cd65a-eb48-4aee-98e0-838af2d90a8e" containerName="nova-metadata-metadata"
Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.812045 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce7cd65a-eb48-4aee-98e0-838af2d90a8e" containerName="nova-metadata-metadata"
Feb 02 21:41:28 crc kubenswrapper[4789]: E0202 21:41:28.812063 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce7cd65a-eb48-4aee-98e0-838af2d90a8e" containerName="nova-metadata-log"
Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.812069 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce7cd65a-eb48-4aee-98e0-838af2d90a8e" containerName="nova-metadata-log"
Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.812246 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce7cd65a-eb48-4aee-98e0-838af2d90a8e" containerName="nova-metadata-metadata"
Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.812258 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce7cd65a-eb48-4aee-98e0-838af2d90a8e" containerName="nova-metadata-log"
Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.813417 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
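The RemoveStaleState and "Deleted CPUSet assignment" lines show the CPU and memory managers garbage-collecting per-container resource assignments left behind by the deleted nova-metadata-0 pod before admitting its replacement. A minimal sketch of that cleanup over a map of assignments (a hypothetical representation, not the managers' real state types):

package main

import "fmt"

type key struct{ podUID, container string }

// removeStaleState drops every assignment whose container is no longer in
// the active set. Deleting the visited key while ranging is safe in Go.
func removeStaleState(assignments map[key]string, active map[key]bool) {
	for k := range assignments {
		if !active[k] {
			fmt.Printf("RemoveStaleState: removing container %s/%s\n", k.podUID, k.container)
			delete(assignments, k)
		}
	}
}

func main() {
	a := map[key]string{
		{"ce7cd65a", "nova-metadata-metadata"}: "cpuset 0-3",
		{"ce7cd65a", "nova-metadata-log"}:      "cpuset 0-3",
	}
	removeStaleState(a, map[key]bool{}) // the old pod is gone entirely
	fmt.Println("remaining assignments:", len(a))
}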
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.816838 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.817210 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.821435 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8771e234cc9ab5eeb3863ed774dc6a6fb4a0426ad3b13c4f03cbd5b0c9f0c139"} err="failed to get container status \"8771e234cc9ab5eeb3863ed774dc6a6fb4a0426ad3b13c4f03cbd5b0c9f0c139\": rpc error: code = NotFound desc = could not find container \"8771e234cc9ab5eeb3863ed774dc6a6fb4a0426ad3b13c4f03cbd5b0c9f0c139\": container with ID starting with 8771e234cc9ab5eeb3863ed774dc6a6fb4a0426ad3b13c4f03cbd5b0c9f0c139 not found: ID does not exist" Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.821476 4789 scope.go:117] "RemoveContainer" containerID="373e56e05c3a89befa7bee8d06ae512690dbbd42aa1e35bc8866500cbe0a88c8" Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.821853 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"373e56e05c3a89befa7bee8d06ae512690dbbd42aa1e35bc8866500cbe0a88c8"} err="failed to get container status \"373e56e05c3a89befa7bee8d06ae512690dbbd42aa1e35bc8866500cbe0a88c8\": rpc error: code = NotFound desc = could not find container \"373e56e05c3a89befa7bee8d06ae512690dbbd42aa1e35bc8866500cbe0a88c8\": container with ID starting with 373e56e05c3a89befa7bee8d06ae512690dbbd42aa1e35bc8866500cbe0a88c8 not found: ID does not exist" Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.822545 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.987207 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6bd238e-ea07-468e-ade8-4c40c90d4429-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f6bd238e-ea07-468e-ade8-4c40c90d4429\") " pod="openstack/nova-metadata-0" Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.987555 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6bd238e-ea07-468e-ade8-4c40c90d4429-config-data\") pod \"nova-metadata-0\" (UID: \"f6bd238e-ea07-468e-ade8-4c40c90d4429\") " pod="openstack/nova-metadata-0" Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.987685 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6bd238e-ea07-468e-ade8-4c40c90d4429-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f6bd238e-ea07-468e-ade8-4c40c90d4429\") " pod="openstack/nova-metadata-0" Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.988009 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6bd238e-ea07-468e-ade8-4c40c90d4429-logs\") pod \"nova-metadata-0\" (UID: \"f6bd238e-ea07-468e-ade8-4c40c90d4429\") " pod="openstack/nova-metadata-0" Feb 02 21:41:28 crc kubenswrapper[4789]: I0202 21:41:28.988125 4789 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lphrn\" (UniqueName: \"kubernetes.io/projected/f6bd238e-ea07-468e-ade8-4c40c90d4429-kube-api-access-lphrn\") pod \"nova-metadata-0\" (UID: \"f6bd238e-ea07-468e-ade8-4c40c90d4429\") " pod="openstack/nova-metadata-0" Feb 02 21:41:29 crc kubenswrapper[4789]: I0202 21:41:29.090282 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6bd238e-ea07-468e-ade8-4c40c90d4429-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f6bd238e-ea07-468e-ade8-4c40c90d4429\") " pod="openstack/nova-metadata-0" Feb 02 21:41:29 crc kubenswrapper[4789]: I0202 21:41:29.090402 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6bd238e-ea07-468e-ade8-4c40c90d4429-logs\") pod \"nova-metadata-0\" (UID: \"f6bd238e-ea07-468e-ade8-4c40c90d4429\") " pod="openstack/nova-metadata-0" Feb 02 21:41:29 crc kubenswrapper[4789]: I0202 21:41:29.090445 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lphrn\" (UniqueName: \"kubernetes.io/projected/f6bd238e-ea07-468e-ade8-4c40c90d4429-kube-api-access-lphrn\") pod \"nova-metadata-0\" (UID: \"f6bd238e-ea07-468e-ade8-4c40c90d4429\") " pod="openstack/nova-metadata-0" Feb 02 21:41:29 crc kubenswrapper[4789]: I0202 21:41:29.090522 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6bd238e-ea07-468e-ade8-4c40c90d4429-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f6bd238e-ea07-468e-ade8-4c40c90d4429\") " pod="openstack/nova-metadata-0" Feb 02 21:41:29 crc kubenswrapper[4789]: I0202 21:41:29.090608 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6bd238e-ea07-468e-ade8-4c40c90d4429-config-data\") pod \"nova-metadata-0\" (UID: \"f6bd238e-ea07-468e-ade8-4c40c90d4429\") " pod="openstack/nova-metadata-0" Feb 02 21:41:29 crc kubenswrapper[4789]: I0202 21:41:29.091027 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6bd238e-ea07-468e-ade8-4c40c90d4429-logs\") pod \"nova-metadata-0\" (UID: \"f6bd238e-ea07-468e-ade8-4c40c90d4429\") " pod="openstack/nova-metadata-0" Feb 02 21:41:29 crc kubenswrapper[4789]: I0202 21:41:29.095027 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6bd238e-ea07-468e-ade8-4c40c90d4429-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f6bd238e-ea07-468e-ade8-4c40c90d4429\") " pod="openstack/nova-metadata-0" Feb 02 21:41:29 crc kubenswrapper[4789]: I0202 21:41:29.107218 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6bd238e-ea07-468e-ade8-4c40c90d4429-config-data\") pod \"nova-metadata-0\" (UID: \"f6bd238e-ea07-468e-ade8-4c40c90d4429\") " pod="openstack/nova-metadata-0" Feb 02 21:41:29 crc kubenswrapper[4789]: I0202 21:41:29.107307 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6bd238e-ea07-468e-ade8-4c40c90d4429-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f6bd238e-ea07-468e-ade8-4c40c90d4429\") " pod="openstack/nova-metadata-0" Feb 02 21:41:29 crc 
kubenswrapper[4789]: I0202 21:41:29.113325 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lphrn\" (UniqueName: \"kubernetes.io/projected/f6bd238e-ea07-468e-ade8-4c40c90d4429-kube-api-access-lphrn\") pod \"nova-metadata-0\" (UID: \"f6bd238e-ea07-468e-ade8-4c40c90d4429\") " pod="openstack/nova-metadata-0" Feb 02 21:41:29 crc kubenswrapper[4789]: I0202 21:41:29.141339 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 21:41:29 crc kubenswrapper[4789]: I0202 21:41:29.621472 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 21:41:29 crc kubenswrapper[4789]: I0202 21:41:29.744386 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f6bd238e-ea07-468e-ade8-4c40c90d4429","Type":"ContainerStarted","Data":"6848f31639f74cdec01b55dcc75631ae6e030da6e82480254d47e9d4345fe75c"} Feb 02 21:41:30 crc kubenswrapper[4789]: I0202 21:41:30.447224 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce7cd65a-eb48-4aee-98e0-838af2d90a8e" path="/var/lib/kubelet/pods/ce7cd65a-eb48-4aee-98e0-838af2d90a8e/volumes" Feb 02 21:41:30 crc kubenswrapper[4789]: I0202 21:41:30.761842 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f6bd238e-ea07-468e-ade8-4c40c90d4429","Type":"ContainerStarted","Data":"6d616adbdddc82e73064d7459ef8f39f39c7d2542ccd22858addfd8a7cb38696"} Feb 02 21:41:30 crc kubenswrapper[4789]: I0202 21:41:30.761926 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f6bd238e-ea07-468e-ade8-4c40c90d4429","Type":"ContainerStarted","Data":"268ac6ed236165ea35540ea979db2d467ad6f7692c477b39e0aa58df1c316e3b"} Feb 02 21:41:30 crc kubenswrapper[4789]: I0202 21:41:30.789200 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.7891717419999997 podStartE2EDuration="2.789171742s" podCreationTimestamp="2026-02-02 21:41:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:41:30.78732675 +0000 UTC m=+1311.082351839" watchObservedRunningTime="2026-02-02 21:41:30.789171742 +0000 UTC m=+1311.084196801" Feb 02 21:41:31 crc kubenswrapper[4789]: I0202 21:41:31.803936 4789 generic.go:334] "Generic (PLEG): container finished" podID="30f5a928-4b52-4eef-acb5-7748decc816e" containerID="1f900f4e81cf4a1185756414a43858602853f385c3d95d439c33014b387e9ccf" exitCode=0 Feb 02 21:41:31 crc kubenswrapper[4789]: I0202 21:41:31.804123 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-g4xxk" event={"ID":"30f5a928-4b52-4eef-acb5-7748decc816e","Type":"ContainerDied","Data":"1f900f4e81cf4a1185756414a43858602853f385c3d95d439c33014b387e9ccf"} Feb 02 21:41:32 crc kubenswrapper[4789]: I0202 21:41:32.047853 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 02 21:41:32 crc kubenswrapper[4789]: I0202 21:41:32.821150 4789 generic.go:334] "Generic (PLEG): container finished" podID="db3bfd1d-3fb9-4406-bd26-7b58e943e963" containerID="e085656982d1a651607fc032070499319d528e8519c270e5e4395a2dd48ea137" exitCode=0 Feb 02 21:41:32 crc kubenswrapper[4789]: I0202 
21:41:32.821252 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-2xz72" event={"ID":"db3bfd1d-3fb9-4406-bd26-7b58e943e963","Type":"ContainerDied","Data":"e085656982d1a651607fc032070499319d528e8519c270e5e4395a2dd48ea137"} Feb 02 21:41:33 crc kubenswrapper[4789]: I0202 21:41:33.283152 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-g4xxk" Feb 02 21:41:33 crc kubenswrapper[4789]: I0202 21:41:33.378857 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30f5a928-4b52-4eef-acb5-7748decc816e-config-data\") pod \"30f5a928-4b52-4eef-acb5-7748decc816e\" (UID: \"30f5a928-4b52-4eef-acb5-7748decc816e\") " Feb 02 21:41:33 crc kubenswrapper[4789]: I0202 21:41:33.379850 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f5a928-4b52-4eef-acb5-7748decc816e-combined-ca-bundle\") pod \"30f5a928-4b52-4eef-acb5-7748decc816e\" (UID: \"30f5a928-4b52-4eef-acb5-7748decc816e\") " Feb 02 21:41:33 crc kubenswrapper[4789]: I0202 21:41:33.379995 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgr8j\" (UniqueName: \"kubernetes.io/projected/30f5a928-4b52-4eef-acb5-7748decc816e-kube-api-access-pgr8j\") pod \"30f5a928-4b52-4eef-acb5-7748decc816e\" (UID: \"30f5a928-4b52-4eef-acb5-7748decc816e\") " Feb 02 21:41:33 crc kubenswrapper[4789]: I0202 21:41:33.380099 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30f5a928-4b52-4eef-acb5-7748decc816e-scripts\") pod \"30f5a928-4b52-4eef-acb5-7748decc816e\" (UID: \"30f5a928-4b52-4eef-acb5-7748decc816e\") " Feb 02 21:41:33 crc kubenswrapper[4789]: I0202 21:41:33.385047 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30f5a928-4b52-4eef-acb5-7748decc816e-kube-api-access-pgr8j" (OuterVolumeSpecName: "kube-api-access-pgr8j") pod "30f5a928-4b52-4eef-acb5-7748decc816e" (UID: "30f5a928-4b52-4eef-acb5-7748decc816e"). InnerVolumeSpecName "kube-api-access-pgr8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:41:33 crc kubenswrapper[4789]: I0202 21:41:33.387743 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30f5a928-4b52-4eef-acb5-7748decc816e-scripts" (OuterVolumeSpecName: "scripts") pod "30f5a928-4b52-4eef-acb5-7748decc816e" (UID: "30f5a928-4b52-4eef-acb5-7748decc816e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:41:33 crc kubenswrapper[4789]: I0202 21:41:33.403505 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 21:41:33 crc kubenswrapper[4789]: I0202 21:41:33.403549 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 21:41:33 crc kubenswrapper[4789]: I0202 21:41:33.410889 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30f5a928-4b52-4eef-acb5-7748decc816e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30f5a928-4b52-4eef-acb5-7748decc816e" (UID: "30f5a928-4b52-4eef-acb5-7748decc816e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:41:33 crc kubenswrapper[4789]: I0202 21:41:33.413904 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30f5a928-4b52-4eef-acb5-7748decc816e-config-data" (OuterVolumeSpecName: "config-data") pod "30f5a928-4b52-4eef-acb5-7748decc816e" (UID: "30f5a928-4b52-4eef-acb5-7748decc816e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:41:33 crc kubenswrapper[4789]: I0202 21:41:33.483493 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgr8j\" (UniqueName: \"kubernetes.io/projected/30f5a928-4b52-4eef-acb5-7748decc816e-kube-api-access-pgr8j\") on node \"crc\" DevicePath \"\"" Feb 02 21:41:33 crc kubenswrapper[4789]: I0202 21:41:33.483747 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30f5a928-4b52-4eef-acb5-7748decc816e-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:41:33 crc kubenswrapper[4789]: I0202 21:41:33.483999 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30f5a928-4b52-4eef-acb5-7748decc816e-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:41:33 crc kubenswrapper[4789]: I0202 21:41:33.484124 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f5a928-4b52-4eef-acb5-7748decc816e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:41:33 crc kubenswrapper[4789]: I0202 21:41:33.488963 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 02 21:41:33 crc kubenswrapper[4789]: I0202 21:41:33.517287 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 02 21:41:33 crc kubenswrapper[4789]: I0202 21:41:33.647827 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-c7r2c" Feb 02 21:41:33 crc kubenswrapper[4789]: I0202 21:41:33.721778 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-lg4td"] Feb 02 21:41:33 crc kubenswrapper[4789]: I0202 21:41:33.722031 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-lg4td" podUID="1f3d2a87-d6d0-4d7f-8686-90e1731ff80e" containerName="dnsmasq-dns" containerID="cri-o://f03611fc3b4f64a18d8ad989aa70fe1f61fd98739fb82be5104c8aa9c9ec863c" gracePeriod=10 Feb 02 21:41:33 crc kubenswrapper[4789]: I0202 21:41:33.833532 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-g4xxk" Feb 02 21:41:33 crc kubenswrapper[4789]: I0202 21:41:33.833708 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-g4xxk" event={"ID":"30f5a928-4b52-4eef-acb5-7748decc816e","Type":"ContainerDied","Data":"c8eaeac1ad9f74fb7516b8e740a606f4498637ca8ca80a4f9095d9373e286171"} Feb 02 21:41:33 crc kubenswrapper[4789]: I0202 21:41:33.833860 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8eaeac1ad9f74fb7516b8e740a606f4498637ca8ca80a4f9095d9373e286171" Feb 02 21:41:33 crc kubenswrapper[4789]: I0202 21:41:33.896745 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.018661 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.019033 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7fd2fb6e-7696-41f6-9108-4e931e4f85ec" containerName="nova-api-log" containerID="cri-o://4702917ff0138df6eadd26c7899c76d739c77baac82a5b4ae2d09c3b400326ec" gracePeriod=30 Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.019386 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7fd2fb6e-7696-41f6-9108-4e931e4f85ec" containerName="nova-api-api" containerID="cri-o://f1fefd77b479e6c16b32102559688054ee9aeae02a12dcd6ade72c915582d3ca" gracePeriod=30 Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.025382 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7fd2fb6e-7696-41f6-9108-4e931e4f85ec" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": EOF" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.025665 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7fd2fb6e-7696-41f6-9108-4e931e4f85ec" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": EOF" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.035043 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.035258 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f6bd238e-ea07-468e-ade8-4c40c90d4429" containerName="nova-metadata-log" containerID="cri-o://268ac6ed236165ea35540ea979db2d467ad6f7692c477b39e0aa58df1c316e3b" gracePeriod=30 Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.035372 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f6bd238e-ea07-468e-ade8-4c40c90d4429" containerName="nova-metadata-metadata" containerID="cri-o://6d616adbdddc82e73064d7459ef8f39f39c7d2542ccd22858addfd8a7cb38696" gracePeriod=30 Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.172852 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.173110 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.515311 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-lg4td" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.519144 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-2xz72" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.621642 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f3d2a87-d6d0-4d7f-8686-90e1731ff80e-ovsdbserver-nb\") pod \"1f3d2a87-d6d0-4d7f-8686-90e1731ff80e\" (UID: \"1f3d2a87-d6d0-4d7f-8686-90e1731ff80e\") " Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.621694 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db3bfd1d-3fb9-4406-bd26-7b58e943e963-combined-ca-bundle\") pod \"db3bfd1d-3fb9-4406-bd26-7b58e943e963\" (UID: \"db3bfd1d-3fb9-4406-bd26-7b58e943e963\") " Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.621730 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdjjv\" (UniqueName: \"kubernetes.io/projected/db3bfd1d-3fb9-4406-bd26-7b58e943e963-kube-api-access-xdjjv\") pod \"db3bfd1d-3fb9-4406-bd26-7b58e943e963\" (UID: \"db3bfd1d-3fb9-4406-bd26-7b58e943e963\") " Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.621777 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6gp5\" (UniqueName: \"kubernetes.io/projected/1f3d2a87-d6d0-4d7f-8686-90e1731ff80e-kube-api-access-b6gp5\") pod \"1f3d2a87-d6d0-4d7f-8686-90e1731ff80e\" (UID: \"1f3d2a87-d6d0-4d7f-8686-90e1731ff80e\") " Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.621815 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f3d2a87-d6d0-4d7f-8686-90e1731ff80e-config\") pod \"1f3d2a87-d6d0-4d7f-8686-90e1731ff80e\" (UID: \"1f3d2a87-d6d0-4d7f-8686-90e1731ff80e\") " Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.621885 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db3bfd1d-3fb9-4406-bd26-7b58e943e963-config-data\") pod \"db3bfd1d-3fb9-4406-bd26-7b58e943e963\" (UID: \"db3bfd1d-3fb9-4406-bd26-7b58e943e963\") " Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.621934 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db3bfd1d-3fb9-4406-bd26-7b58e943e963-scripts\") pod \"db3bfd1d-3fb9-4406-bd26-7b58e943e963\" (UID: \"db3bfd1d-3fb9-4406-bd26-7b58e943e963\") " Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.621981 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f3d2a87-d6d0-4d7f-8686-90e1731ff80e-ovsdbserver-sb\") pod \"1f3d2a87-d6d0-4d7f-8686-90e1731ff80e\" (UID: \"1f3d2a87-d6d0-4d7f-8686-90e1731ff80e\") " Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.622009 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f3d2a87-d6d0-4d7f-8686-90e1731ff80e-dns-svc\") pod \"1f3d2a87-d6d0-4d7f-8686-90e1731ff80e\" (UID: \"1f3d2a87-d6d0-4d7f-8686-90e1731ff80e\") " Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.622037 4789 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1f3d2a87-d6d0-4d7f-8686-90e1731ff80e-dns-swift-storage-0\") pod \"1f3d2a87-d6d0-4d7f-8686-90e1731ff80e\" (UID: \"1f3d2a87-d6d0-4d7f-8686-90e1731ff80e\") " Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.627804 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db3bfd1d-3fb9-4406-bd26-7b58e943e963-kube-api-access-xdjjv" (OuterVolumeSpecName: "kube-api-access-xdjjv") pod "db3bfd1d-3fb9-4406-bd26-7b58e943e963" (UID: "db3bfd1d-3fb9-4406-bd26-7b58e943e963"). InnerVolumeSpecName "kube-api-access-xdjjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.640957 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f3d2a87-d6d0-4d7f-8686-90e1731ff80e-kube-api-access-b6gp5" (OuterVolumeSpecName: "kube-api-access-b6gp5") pod "1f3d2a87-d6d0-4d7f-8686-90e1731ff80e" (UID: "1f3d2a87-d6d0-4d7f-8686-90e1731ff80e"). InnerVolumeSpecName "kube-api-access-b6gp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.644632 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db3bfd1d-3fb9-4406-bd26-7b58e943e963-scripts" (OuterVolumeSpecName: "scripts") pod "db3bfd1d-3fb9-4406-bd26-7b58e943e963" (UID: "db3bfd1d-3fb9-4406-bd26-7b58e943e963"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.657533 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db3bfd1d-3fb9-4406-bd26-7b58e943e963-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db3bfd1d-3fb9-4406-bd26-7b58e943e963" (UID: "db3bfd1d-3fb9-4406-bd26-7b58e943e963"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.678592 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f3d2a87-d6d0-4d7f-8686-90e1731ff80e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1f3d2a87-d6d0-4d7f-8686-90e1731ff80e" (UID: "1f3d2a87-d6d0-4d7f-8686-90e1731ff80e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.692113 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f3d2a87-d6d0-4d7f-8686-90e1731ff80e-config" (OuterVolumeSpecName: "config") pod "1f3d2a87-d6d0-4d7f-8686-90e1731ff80e" (UID: "1f3d2a87-d6d0-4d7f-8686-90e1731ff80e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.706666 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f3d2a87-d6d0-4d7f-8686-90e1731ff80e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1f3d2a87-d6d0-4d7f-8686-90e1731ff80e" (UID: "1f3d2a87-d6d0-4d7f-8686-90e1731ff80e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.710757 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db3bfd1d-3fb9-4406-bd26-7b58e943e963-config-data" (OuterVolumeSpecName: "config-data") pod "db3bfd1d-3fb9-4406-bd26-7b58e943e963" (UID: "db3bfd1d-3fb9-4406-bd26-7b58e943e963"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.710771 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f3d2a87-d6d0-4d7f-8686-90e1731ff80e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1f3d2a87-d6d0-4d7f-8686-90e1731ff80e" (UID: "1f3d2a87-d6d0-4d7f-8686-90e1731ff80e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.714111 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f3d2a87-d6d0-4d7f-8686-90e1731ff80e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1f3d2a87-d6d0-4d7f-8686-90e1731ff80e" (UID: "1f3d2a87-d6d0-4d7f-8686-90e1731ff80e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.724538 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db3bfd1d-3fb9-4406-bd26-7b58e943e963-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.724768 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db3bfd1d-3fb9-4406-bd26-7b58e943e963-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.724835 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f3d2a87-d6d0-4d7f-8686-90e1731ff80e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.724892 4789 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f3d2a87-d6d0-4d7f-8686-90e1731ff80e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.724954 4789 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1f3d2a87-d6d0-4d7f-8686-90e1731ff80e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.725010 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f3d2a87-d6d0-4d7f-8686-90e1731ff80e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.725068 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db3bfd1d-3fb9-4406-bd26-7b58e943e963-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.725109 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.725130 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdjjv\" (UniqueName: 
\"kubernetes.io/projected/db3bfd1d-3fb9-4406-bd26-7b58e943e963-kube-api-access-xdjjv\") on node \"crc\" DevicePath \"\"" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.725240 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6gp5\" (UniqueName: \"kubernetes.io/projected/1f3d2a87-d6d0-4d7f-8686-90e1731ff80e-kube-api-access-b6gp5\") on node \"crc\" DevicePath \"\"" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.725313 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f3d2a87-d6d0-4d7f-8686-90e1731ff80e-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.728860 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.827645 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6bd238e-ea07-468e-ade8-4c40c90d4429-config-data\") pod \"f6bd238e-ea07-468e-ade8-4c40c90d4429\" (UID: \"f6bd238e-ea07-468e-ade8-4c40c90d4429\") " Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.827720 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6bd238e-ea07-468e-ade8-4c40c90d4429-logs\") pod \"f6bd238e-ea07-468e-ade8-4c40c90d4429\" (UID: \"f6bd238e-ea07-468e-ade8-4c40c90d4429\") " Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.827784 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6bd238e-ea07-468e-ade8-4c40c90d4429-nova-metadata-tls-certs\") pod \"f6bd238e-ea07-468e-ade8-4c40c90d4429\" (UID: \"f6bd238e-ea07-468e-ade8-4c40c90d4429\") " Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.827815 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lphrn\" (UniqueName: \"kubernetes.io/projected/f6bd238e-ea07-468e-ade8-4c40c90d4429-kube-api-access-lphrn\") pod \"f6bd238e-ea07-468e-ade8-4c40c90d4429\" (UID: \"f6bd238e-ea07-468e-ade8-4c40c90d4429\") " Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.827865 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6bd238e-ea07-468e-ade8-4c40c90d4429-combined-ca-bundle\") pod \"f6bd238e-ea07-468e-ade8-4c40c90d4429\" (UID: \"f6bd238e-ea07-468e-ade8-4c40c90d4429\") " Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.828539 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6bd238e-ea07-468e-ade8-4c40c90d4429-logs" (OuterVolumeSpecName: "logs") pod "f6bd238e-ea07-468e-ade8-4c40c90d4429" (UID: "f6bd238e-ea07-468e-ade8-4c40c90d4429"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.835549 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6bd238e-ea07-468e-ade8-4c40c90d4429-kube-api-access-lphrn" (OuterVolumeSpecName: "kube-api-access-lphrn") pod "f6bd238e-ea07-468e-ade8-4c40c90d4429" (UID: "f6bd238e-ea07-468e-ade8-4c40c90d4429"). InnerVolumeSpecName "kube-api-access-lphrn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.852284 4789 generic.go:334] "Generic (PLEG): container finished" podID="1f3d2a87-d6d0-4d7f-8686-90e1731ff80e" containerID="f03611fc3b4f64a18d8ad989aa70fe1f61fd98739fb82be5104c8aa9c9ec863c" exitCode=0 Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.852440 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-lg4td" event={"ID":"1f3d2a87-d6d0-4d7f-8686-90e1731ff80e","Type":"ContainerDied","Data":"f03611fc3b4f64a18d8ad989aa70fe1f61fd98739fb82be5104c8aa9c9ec863c"} Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.852528 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-lg4td" event={"ID":"1f3d2a87-d6d0-4d7f-8686-90e1731ff80e","Type":"ContainerDied","Data":"f75dec15fb3d61d94dbdefaeca2dae47c26f933e298782cc28ebff9d650dafdc"} Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.852566 4789 scope.go:117] "RemoveContainer" containerID="f03611fc3b4f64a18d8ad989aa70fe1f61fd98739fb82be5104c8aa9c9ec863c" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.852770 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-lg4td" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.853823 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6bd238e-ea07-468e-ade8-4c40c90d4429-config-data" (OuterVolumeSpecName: "config-data") pod "f6bd238e-ea07-468e-ade8-4c40c90d4429" (UID: "f6bd238e-ea07-468e-ade8-4c40c90d4429"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.855554 4789 generic.go:334] "Generic (PLEG): container finished" podID="f6bd238e-ea07-468e-ade8-4c40c90d4429" containerID="6d616adbdddc82e73064d7459ef8f39f39c7d2542ccd22858addfd8a7cb38696" exitCode=0 Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.855638 4789 generic.go:334] "Generic (PLEG): container finished" podID="f6bd238e-ea07-468e-ade8-4c40c90d4429" containerID="268ac6ed236165ea35540ea979db2d467ad6f7692c477b39e0aa58df1c316e3b" exitCode=143 Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.855695 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f6bd238e-ea07-468e-ade8-4c40c90d4429","Type":"ContainerDied","Data":"6d616adbdddc82e73064d7459ef8f39f39c7d2542ccd22858addfd8a7cb38696"} Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.855727 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f6bd238e-ea07-468e-ade8-4c40c90d4429","Type":"ContainerDied","Data":"268ac6ed236165ea35540ea979db2d467ad6f7692c477b39e0aa58df1c316e3b"} Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.855742 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f6bd238e-ea07-468e-ade8-4c40c90d4429","Type":"ContainerDied","Data":"6848f31639f74cdec01b55dcc75631ae6e030da6e82480254d47e9d4345fe75c"} Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.855802 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.860111 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-2xz72" event={"ID":"db3bfd1d-3fb9-4406-bd26-7b58e943e963","Type":"ContainerDied","Data":"a775d6867909595db7e1903b8f5f4eb09355cddf02fcbe4cfe5b9a8e7a4d526c"} Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.860143 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a775d6867909595db7e1903b8f5f4eb09355cddf02fcbe4cfe5b9a8e7a4d526c" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.860218 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-2xz72" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.869008 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6bd238e-ea07-468e-ade8-4c40c90d4429-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f6bd238e-ea07-468e-ade8-4c40c90d4429" (UID: "f6bd238e-ea07-468e-ade8-4c40c90d4429"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.879968 4789 generic.go:334] "Generic (PLEG): container finished" podID="7fd2fb6e-7696-41f6-9108-4e931e4f85ec" containerID="4702917ff0138df6eadd26c7899c76d739c77baac82a5b4ae2d09c3b400326ec" exitCode=143 Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.880032 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7fd2fb6e-7696-41f6-9108-4e931e4f85ec","Type":"ContainerDied","Data":"4702917ff0138df6eadd26c7899c76d739c77baac82a5b4ae2d09c3b400326ec"} Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.894768 4789 scope.go:117] "RemoveContainer" containerID="8c409f67673a79ef50f62a1df7708d4692141395f8dba4a50218b03da7b468fd" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.897378 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6bd238e-ea07-468e-ade8-4c40c90d4429-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "f6bd238e-ea07-468e-ade8-4c40c90d4429" (UID: "f6bd238e-ea07-468e-ade8-4c40c90d4429"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.912310 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-lg4td"] Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.928271 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-lg4td"] Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.929501 4789 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6bd238e-ea07-468e-ade8-4c40c90d4429-logs\") on node \"crc\" DevicePath \"\"" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.929524 4789 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6bd238e-ea07-468e-ade8-4c40c90d4429-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.929538 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lphrn\" (UniqueName: \"kubernetes.io/projected/f6bd238e-ea07-468e-ade8-4c40c90d4429-kube-api-access-lphrn\") on node \"crc\" DevicePath \"\"" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.929551 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6bd238e-ea07-468e-ade8-4c40c90d4429-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.929561 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6bd238e-ea07-468e-ade8-4c40c90d4429-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.937634 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 21:41:34 crc kubenswrapper[4789]: E0202 21:41:34.937965 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f3d2a87-d6d0-4d7f-8686-90e1731ff80e" containerName="dnsmasq-dns" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.937980 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f3d2a87-d6d0-4d7f-8686-90e1731ff80e" containerName="dnsmasq-dns" Feb 02 21:41:34 crc kubenswrapper[4789]: E0202 21:41:34.937992 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6bd238e-ea07-468e-ade8-4c40c90d4429" containerName="nova-metadata-log" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.938000 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6bd238e-ea07-468e-ade8-4c40c90d4429" containerName="nova-metadata-log" Feb 02 21:41:34 crc kubenswrapper[4789]: E0202 21:41:34.938016 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30f5a928-4b52-4eef-acb5-7748decc816e" containerName="nova-manage" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.938022 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="30f5a928-4b52-4eef-acb5-7748decc816e" containerName="nova-manage" Feb 02 21:41:34 crc kubenswrapper[4789]: E0202 21:41:34.938037 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6bd238e-ea07-468e-ade8-4c40c90d4429" containerName="nova-metadata-metadata" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.938042 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6bd238e-ea07-468e-ade8-4c40c90d4429" containerName="nova-metadata-metadata" Feb 02 21:41:34 crc kubenswrapper[4789]: E0202 21:41:34.938058 4789 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f3d2a87-d6d0-4d7f-8686-90e1731ff80e" containerName="init" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.938063 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f3d2a87-d6d0-4d7f-8686-90e1731ff80e" containerName="init" Feb 02 21:41:34 crc kubenswrapper[4789]: E0202 21:41:34.938078 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db3bfd1d-3fb9-4406-bd26-7b58e943e963" containerName="nova-cell1-conductor-db-sync" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.938084 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="db3bfd1d-3fb9-4406-bd26-7b58e943e963" containerName="nova-cell1-conductor-db-sync" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.938242 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="db3bfd1d-3fb9-4406-bd26-7b58e943e963" containerName="nova-cell1-conductor-db-sync" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.938260 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6bd238e-ea07-468e-ade8-4c40c90d4429" containerName="nova-metadata-log" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.938267 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6bd238e-ea07-468e-ade8-4c40c90d4429" containerName="nova-metadata-metadata" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.938324 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="30f5a928-4b52-4eef-acb5-7748decc816e" containerName="nova-manage" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.938336 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f3d2a87-d6d0-4d7f-8686-90e1731ff80e" containerName="dnsmasq-dns" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.939092 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.940049 4789 scope.go:117] "RemoveContainer" containerID="f03611fc3b4f64a18d8ad989aa70fe1f61fd98739fb82be5104c8aa9c9ec863c" Feb 02 21:41:34 crc kubenswrapper[4789]: E0202 21:41:34.941452 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f03611fc3b4f64a18d8ad989aa70fe1f61fd98739fb82be5104c8aa9c9ec863c\": container with ID starting with f03611fc3b4f64a18d8ad989aa70fe1f61fd98739fb82be5104c8aa9c9ec863c not found: ID does not exist" containerID="f03611fc3b4f64a18d8ad989aa70fe1f61fd98739fb82be5104c8aa9c9ec863c" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.941490 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f03611fc3b4f64a18d8ad989aa70fe1f61fd98739fb82be5104c8aa9c9ec863c"} err="failed to get container status \"f03611fc3b4f64a18d8ad989aa70fe1f61fd98739fb82be5104c8aa9c9ec863c\": rpc error: code = NotFound desc = could not find container \"f03611fc3b4f64a18d8ad989aa70fe1f61fd98739fb82be5104c8aa9c9ec863c\": container with ID starting with f03611fc3b4f64a18d8ad989aa70fe1f61fd98739fb82be5104c8aa9c9ec863c not found: ID does not exist" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.941518 4789 scope.go:117] "RemoveContainer" containerID="8c409f67673a79ef50f62a1df7708d4692141395f8dba4a50218b03da7b468fd" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.941781 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 02 21:41:34 crc kubenswrapper[4789]: E0202 21:41:34.942231 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c409f67673a79ef50f62a1df7708d4692141395f8dba4a50218b03da7b468fd\": container with ID starting with 8c409f67673a79ef50f62a1df7708d4692141395f8dba4a50218b03da7b468fd not found: ID does not exist" containerID="8c409f67673a79ef50f62a1df7708d4692141395f8dba4a50218b03da7b468fd" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.942253 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c409f67673a79ef50f62a1df7708d4692141395f8dba4a50218b03da7b468fd"} err="failed to get container status \"8c409f67673a79ef50f62a1df7708d4692141395f8dba4a50218b03da7b468fd\": rpc error: code = NotFound desc = could not find container \"8c409f67673a79ef50f62a1df7708d4692141395f8dba4a50218b03da7b468fd\": container with ID starting with 8c409f67673a79ef50f62a1df7708d4692141395f8dba4a50218b03da7b468fd not found: ID does not exist" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.942264 4789 scope.go:117] "RemoveContainer" containerID="6d616adbdddc82e73064d7459ef8f39f39c7d2542ccd22858addfd8a7cb38696" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.950529 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.974154 4789 scope.go:117] "RemoveContainer" containerID="268ac6ed236165ea35540ea979db2d467ad6f7692c477b39e0aa58df1c316e3b" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.991269 4789 scope.go:117] "RemoveContainer" containerID="6d616adbdddc82e73064d7459ef8f39f39c7d2542ccd22858addfd8a7cb38696" Feb 02 21:41:34 crc kubenswrapper[4789]: E0202 21:41:34.991796 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"6d616adbdddc82e73064d7459ef8f39f39c7d2542ccd22858addfd8a7cb38696\": container with ID starting with 6d616adbdddc82e73064d7459ef8f39f39c7d2542ccd22858addfd8a7cb38696 not found: ID does not exist" containerID="6d616adbdddc82e73064d7459ef8f39f39c7d2542ccd22858addfd8a7cb38696" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.991837 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d616adbdddc82e73064d7459ef8f39f39c7d2542ccd22858addfd8a7cb38696"} err="failed to get container status \"6d616adbdddc82e73064d7459ef8f39f39c7d2542ccd22858addfd8a7cb38696\": rpc error: code = NotFound desc = could not find container \"6d616adbdddc82e73064d7459ef8f39f39c7d2542ccd22858addfd8a7cb38696\": container with ID starting with 6d616adbdddc82e73064d7459ef8f39f39c7d2542ccd22858addfd8a7cb38696 not found: ID does not exist" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.991866 4789 scope.go:117] "RemoveContainer" containerID="268ac6ed236165ea35540ea979db2d467ad6f7692c477b39e0aa58df1c316e3b" Feb 02 21:41:34 crc kubenswrapper[4789]: E0202 21:41:34.992145 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"268ac6ed236165ea35540ea979db2d467ad6f7692c477b39e0aa58df1c316e3b\": container with ID starting with 268ac6ed236165ea35540ea979db2d467ad6f7692c477b39e0aa58df1c316e3b not found: ID does not exist" containerID="268ac6ed236165ea35540ea979db2d467ad6f7692c477b39e0aa58df1c316e3b" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.992164 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"268ac6ed236165ea35540ea979db2d467ad6f7692c477b39e0aa58df1c316e3b"} err="failed to get container status \"268ac6ed236165ea35540ea979db2d467ad6f7692c477b39e0aa58df1c316e3b\": rpc error: code = NotFound desc = could not find container \"268ac6ed236165ea35540ea979db2d467ad6f7692c477b39e0aa58df1c316e3b\": container with ID starting with 268ac6ed236165ea35540ea979db2d467ad6f7692c477b39e0aa58df1c316e3b not found: ID does not exist" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.992178 4789 scope.go:117] "RemoveContainer" containerID="6d616adbdddc82e73064d7459ef8f39f39c7d2542ccd22858addfd8a7cb38696" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.992357 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d616adbdddc82e73064d7459ef8f39f39c7d2542ccd22858addfd8a7cb38696"} err="failed to get container status \"6d616adbdddc82e73064d7459ef8f39f39c7d2542ccd22858addfd8a7cb38696\": rpc error: code = NotFound desc = could not find container \"6d616adbdddc82e73064d7459ef8f39f39c7d2542ccd22858addfd8a7cb38696\": container with ID starting with 6d616adbdddc82e73064d7459ef8f39f39c7d2542ccd22858addfd8a7cb38696 not found: ID does not exist" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.992376 4789 scope.go:117] "RemoveContainer" containerID="268ac6ed236165ea35540ea979db2d467ad6f7692c477b39e0aa58df1c316e3b" Feb 02 21:41:34 crc kubenswrapper[4789]: I0202 21:41:34.992569 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"268ac6ed236165ea35540ea979db2d467ad6f7692c477b39e0aa58df1c316e3b"} err="failed to get container status \"268ac6ed236165ea35540ea979db2d467ad6f7692c477b39e0aa58df1c316e3b\": rpc error: code = NotFound desc = could not find container 
\"268ac6ed236165ea35540ea979db2d467ad6f7692c477b39e0aa58df1c316e3b\": container with ID starting with 268ac6ed236165ea35540ea979db2d467ad6f7692c477b39e0aa58df1c316e3b not found: ID does not exist" Feb 02 21:41:35 crc kubenswrapper[4789]: I0202 21:41:35.135271 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/399d9417-2065-4e92-89c5-a04dbeaf2cca-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"399d9417-2065-4e92-89c5-a04dbeaf2cca\") " pod="openstack/nova-cell1-conductor-0" Feb 02 21:41:35 crc kubenswrapper[4789]: I0202 21:41:35.135432 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6695\" (UniqueName: \"kubernetes.io/projected/399d9417-2065-4e92-89c5-a04dbeaf2cca-kube-api-access-b6695\") pod \"nova-cell1-conductor-0\" (UID: \"399d9417-2065-4e92-89c5-a04dbeaf2cca\") " pod="openstack/nova-cell1-conductor-0" Feb 02 21:41:35 crc kubenswrapper[4789]: I0202 21:41:35.135467 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/399d9417-2065-4e92-89c5-a04dbeaf2cca-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"399d9417-2065-4e92-89c5-a04dbeaf2cca\") " pod="openstack/nova-cell1-conductor-0" Feb 02 21:41:35 crc kubenswrapper[4789]: I0202 21:41:35.184866 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 21:41:35 crc kubenswrapper[4789]: I0202 21:41:35.194705 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 21:41:35 crc kubenswrapper[4789]: I0202 21:41:35.206905 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 21:41:35 crc kubenswrapper[4789]: I0202 21:41:35.208267 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 21:41:35 crc kubenswrapper[4789]: I0202 21:41:35.210411 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 02 21:41:35 crc kubenswrapper[4789]: I0202 21:41:35.210633 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 21:41:35 crc kubenswrapper[4789]: I0202 21:41:35.225249 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 21:41:35 crc kubenswrapper[4789]: I0202 21:41:35.237049 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/399d9417-2065-4e92-89c5-a04dbeaf2cca-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"399d9417-2065-4e92-89c5-a04dbeaf2cca\") " pod="openstack/nova-cell1-conductor-0" Feb 02 21:41:35 crc kubenswrapper[4789]: I0202 21:41:35.237209 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6695\" (UniqueName: \"kubernetes.io/projected/399d9417-2065-4e92-89c5-a04dbeaf2cca-kube-api-access-b6695\") pod \"nova-cell1-conductor-0\" (UID: \"399d9417-2065-4e92-89c5-a04dbeaf2cca\") " pod="openstack/nova-cell1-conductor-0" Feb 02 21:41:35 crc kubenswrapper[4789]: I0202 21:41:35.237238 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/399d9417-2065-4e92-89c5-a04dbeaf2cca-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"399d9417-2065-4e92-89c5-a04dbeaf2cca\") " pod="openstack/nova-cell1-conductor-0" Feb 02 21:41:35 crc kubenswrapper[4789]: I0202 21:41:35.247519 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/399d9417-2065-4e92-89c5-a04dbeaf2cca-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"399d9417-2065-4e92-89c5-a04dbeaf2cca\") " pod="openstack/nova-cell1-conductor-0" Feb 02 21:41:35 crc kubenswrapper[4789]: I0202 21:41:35.247793 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/399d9417-2065-4e92-89c5-a04dbeaf2cca-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"399d9417-2065-4e92-89c5-a04dbeaf2cca\") " pod="openstack/nova-cell1-conductor-0" Feb 02 21:41:35 crc kubenswrapper[4789]: I0202 21:41:35.268117 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6695\" (UniqueName: \"kubernetes.io/projected/399d9417-2065-4e92-89c5-a04dbeaf2cca-kube-api-access-b6695\") pod \"nova-cell1-conductor-0\" (UID: \"399d9417-2065-4e92-89c5-a04dbeaf2cca\") " pod="openstack/nova-cell1-conductor-0" Feb 02 21:41:35 crc kubenswrapper[4789]: I0202 21:41:35.339444 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4n89\" (UniqueName: \"kubernetes.io/projected/2eed9773-f2f1-4d61-8b88-c0eb30620612-kube-api-access-m4n89\") pod \"nova-metadata-0\" (UID: \"2eed9773-f2f1-4d61-8b88-c0eb30620612\") " pod="openstack/nova-metadata-0" Feb 02 21:41:35 crc kubenswrapper[4789]: I0202 21:41:35.339509 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eed9773-f2f1-4d61-8b88-c0eb30620612-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"2eed9773-f2f1-4d61-8b88-c0eb30620612\") " pod="openstack/nova-metadata-0" Feb 02 21:41:35 crc kubenswrapper[4789]: I0202 21:41:35.339539 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2eed9773-f2f1-4d61-8b88-c0eb30620612-config-data\") pod \"nova-metadata-0\" (UID: \"2eed9773-f2f1-4d61-8b88-c0eb30620612\") " pod="openstack/nova-metadata-0" Feb 02 21:41:35 crc kubenswrapper[4789]: I0202 21:41:35.339714 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eed9773-f2f1-4d61-8b88-c0eb30620612-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2eed9773-f2f1-4d61-8b88-c0eb30620612\") " pod="openstack/nova-metadata-0" Feb 02 21:41:35 crc kubenswrapper[4789]: I0202 21:41:35.339743 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2eed9773-f2f1-4d61-8b88-c0eb30620612-logs\") pod \"nova-metadata-0\" (UID: \"2eed9773-f2f1-4d61-8b88-c0eb30620612\") " pod="openstack/nova-metadata-0" Feb 02 21:41:35 crc kubenswrapper[4789]: I0202 21:41:35.441769 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eed9773-f2f1-4d61-8b88-c0eb30620612-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2eed9773-f2f1-4d61-8b88-c0eb30620612\") " pod="openstack/nova-metadata-0" Feb 02 21:41:35 crc kubenswrapper[4789]: I0202 21:41:35.441809 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2eed9773-f2f1-4d61-8b88-c0eb30620612-logs\") pod \"nova-metadata-0\" (UID: \"2eed9773-f2f1-4d61-8b88-c0eb30620612\") " pod="openstack/nova-metadata-0" Feb 02 21:41:35 crc kubenswrapper[4789]: I0202 21:41:35.441913 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4n89\" (UniqueName: \"kubernetes.io/projected/2eed9773-f2f1-4d61-8b88-c0eb30620612-kube-api-access-m4n89\") pod \"nova-metadata-0\" (UID: \"2eed9773-f2f1-4d61-8b88-c0eb30620612\") " pod="openstack/nova-metadata-0" Feb 02 21:41:35 crc kubenswrapper[4789]: I0202 21:41:35.441954 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eed9773-f2f1-4d61-8b88-c0eb30620612-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2eed9773-f2f1-4d61-8b88-c0eb30620612\") " pod="openstack/nova-metadata-0" Feb 02 21:41:35 crc kubenswrapper[4789]: I0202 21:41:35.441994 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2eed9773-f2f1-4d61-8b88-c0eb30620612-config-data\") pod \"nova-metadata-0\" (UID: \"2eed9773-f2f1-4d61-8b88-c0eb30620612\") " pod="openstack/nova-metadata-0" Feb 02 21:41:35 crc kubenswrapper[4789]: I0202 21:41:35.442557 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2eed9773-f2f1-4d61-8b88-c0eb30620612-logs\") pod \"nova-metadata-0\" (UID: \"2eed9773-f2f1-4d61-8b88-c0eb30620612\") " pod="openstack/nova-metadata-0" Feb 02 21:41:35 crc kubenswrapper[4789]: I0202 21:41:35.446138 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2eed9773-f2f1-4d61-8b88-c0eb30620612-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2eed9773-f2f1-4d61-8b88-c0eb30620612\") " pod="openstack/nova-metadata-0" Feb 02 21:41:35 crc kubenswrapper[4789]: I0202 21:41:35.454796 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2eed9773-f2f1-4d61-8b88-c0eb30620612-config-data\") pod \"nova-metadata-0\" (UID: \"2eed9773-f2f1-4d61-8b88-c0eb30620612\") " pod="openstack/nova-metadata-0" Feb 02 21:41:35 crc kubenswrapper[4789]: I0202 21:41:35.457834 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4n89\" (UniqueName: \"kubernetes.io/projected/2eed9773-f2f1-4d61-8b88-c0eb30620612-kube-api-access-m4n89\") pod \"nova-metadata-0\" (UID: \"2eed9773-f2f1-4d61-8b88-c0eb30620612\") " pod="openstack/nova-metadata-0" Feb 02 21:41:35 crc kubenswrapper[4789]: I0202 21:41:35.459296 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eed9773-f2f1-4d61-8b88-c0eb30620612-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2eed9773-f2f1-4d61-8b88-c0eb30620612\") " pod="openstack/nova-metadata-0" Feb 02 21:41:35 crc kubenswrapper[4789]: I0202 21:41:35.562382 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 02 21:41:35 crc kubenswrapper[4789]: I0202 21:41:35.619484 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 21:41:35 crc kubenswrapper[4789]: I0202 21:41:35.899008 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="d2f42703-8d3d-4f46-9cdb-924e5d849c42" containerName="nova-scheduler-scheduler" containerID="cri-o://c6adcda4d30dd8d80348de151844b0e666f7e98189122d438d68436e5a5ba717" gracePeriod=30 Feb 02 21:41:36 crc kubenswrapper[4789]: I0202 21:41:36.009146 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 21:41:36 crc kubenswrapper[4789]: W0202 21:41:36.012341 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod399d9417_2065_4e92_89c5_a04dbeaf2cca.slice/crio-aa4caa858ae8b9bebea327895e4749f90535e00fc76b86bda685bc300c0b326a WatchSource:0}: Error finding container aa4caa858ae8b9bebea327895e4749f90535e00fc76b86bda685bc300c0b326a: Status 404 returned error can't find the container with id aa4caa858ae8b9bebea327895e4749f90535e00fc76b86bda685bc300c0b326a Feb 02 21:41:36 crc kubenswrapper[4789]: I0202 21:41:36.190295 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 21:41:36 crc kubenswrapper[4789]: I0202 21:41:36.430968 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f3d2a87-d6d0-4d7f-8686-90e1731ff80e" path="/var/lib/kubelet/pods/1f3d2a87-d6d0-4d7f-8686-90e1731ff80e/volumes" Feb 02 21:41:36 crc kubenswrapper[4789]: I0202 21:41:36.431984 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6bd238e-ea07-468e-ade8-4c40c90d4429" path="/var/lib/kubelet/pods/f6bd238e-ea07-468e-ade8-4c40c90d4429/volumes" Feb 02 21:41:36 crc kubenswrapper[4789]: I0202 21:41:36.911931 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"2eed9773-f2f1-4d61-8b88-c0eb30620612","Type":"ContainerStarted","Data":"1e4d57a0c906192712dd83cd3490316f6b4df1a328f976508a9336ce6fa60b36"} Feb 02 21:41:36 crc kubenswrapper[4789]: I0202 21:41:36.911987 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2eed9773-f2f1-4d61-8b88-c0eb30620612","Type":"ContainerStarted","Data":"01a4dbf7d2f219eb93cb798d6f034d38d23e5a5e159195b783019d1d6b5662fe"} Feb 02 21:41:36 crc kubenswrapper[4789]: I0202 21:41:36.912003 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2eed9773-f2f1-4d61-8b88-c0eb30620612","Type":"ContainerStarted","Data":"1db476ce12680e953f5b4ccca8dcd2ebb48ad4f49c924be7a0af04c0463916d2"} Feb 02 21:41:36 crc kubenswrapper[4789]: I0202 21:41:36.913775 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"399d9417-2065-4e92-89c5-a04dbeaf2cca","Type":"ContainerStarted","Data":"e98adb233ea0f16d2b2f46eddae689b1bb397a9ba532ccb91e5ece02b9397f95"} Feb 02 21:41:36 crc kubenswrapper[4789]: I0202 21:41:36.913826 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"399d9417-2065-4e92-89c5-a04dbeaf2cca","Type":"ContainerStarted","Data":"aa4caa858ae8b9bebea327895e4749f90535e00fc76b86bda685bc300c0b326a"} Feb 02 21:41:36 crc kubenswrapper[4789]: I0202 21:41:36.913931 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 02 21:41:36 crc kubenswrapper[4789]: I0202 21:41:36.934443 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.934425136 podStartE2EDuration="1.934425136s" podCreationTimestamp="2026-02-02 21:41:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:41:36.925807452 +0000 UTC m=+1317.220832481" watchObservedRunningTime="2026-02-02 21:41:36.934425136 +0000 UTC m=+1317.229450145" Feb 02 21:41:36 crc kubenswrapper[4789]: I0202 21:41:36.944884 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.944866101 podStartE2EDuration="2.944866101s" podCreationTimestamp="2026-02-02 21:41:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:41:36.943920554 +0000 UTC m=+1317.238945573" watchObservedRunningTime="2026-02-02 21:41:36.944866101 +0000 UTC m=+1317.239891120" Feb 02 21:41:37 crc kubenswrapper[4789]: I0202 21:41:37.833107 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 21:41:37 crc kubenswrapper[4789]: I0202 21:41:37.926071 4789 generic.go:334] "Generic (PLEG): container finished" podID="1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a" containerID="4c7189c4be6e0d2b75e7827d5f5c5ee84da6eaea4bbc1aa79e5145b1d90aab0b" exitCode=137 Feb 02 21:41:37 crc kubenswrapper[4789]: I0202 21:41:37.926179 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a","Type":"ContainerDied","Data":"4c7189c4be6e0d2b75e7827d5f5c5ee84da6eaea4bbc1aa79e5145b1d90aab0b"} Feb 02 21:41:37 crc kubenswrapper[4789]: I0202 21:41:37.926214 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 21:41:37 crc kubenswrapper[4789]: I0202 21:41:37.926246 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a","Type":"ContainerDied","Data":"403e4ec6b4773db4734445fa63c63e932307854781f8dde1d06b52337ccd9b55"} Feb 02 21:41:37 crc kubenswrapper[4789]: I0202 21:41:37.926262 4789 scope.go:117] "RemoveContainer" containerID="4c7189c4be6e0d2b75e7827d5f5c5ee84da6eaea4bbc1aa79e5145b1d90aab0b" Feb 02 21:41:37 crc kubenswrapper[4789]: I0202 21:41:37.946200 4789 scope.go:117] "RemoveContainer" containerID="d40a4908b50914e45821732601107d538a620d14b9c9a51407085b5a97b7c614" Feb 02 21:41:37 crc kubenswrapper[4789]: I0202 21:41:37.973754 4789 scope.go:117] "RemoveContainer" containerID="fa505511f5bdb1f9454c7fb0ffd8dc6284220c7d716d063dd61660e06e6929a4" Feb 02 21:41:37 crc kubenswrapper[4789]: I0202 21:41:37.998195 4789 scope.go:117] "RemoveContainer" containerID="b424e43858ad870d584f7acc035252d5ea4362b1c11557f71598ce628935fffe" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.008556 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-combined-ca-bundle\") pod \"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a\" (UID: \"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a\") " Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.008658 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-scripts\") pod \"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a\" (UID: \"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a\") " Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.008693 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-sg-core-conf-yaml\") pod \"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a\" (UID: \"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a\") " Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.008788 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-log-httpd\") pod \"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a\" (UID: \"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a\") " Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.008811 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-config-data\") pod \"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a\" (UID: \"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a\") " Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.008841 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tr7h9\" (UniqueName: \"kubernetes.io/projected/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-kube-api-access-tr7h9\") pod \"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a\" (UID: \"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a\") " Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.008879 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-run-httpd\") pod \"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a\" (UID: \"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a\") " Feb 02 21:41:38 crc 
kubenswrapper[4789]: I0202 21:41:38.008936 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-ceilometer-tls-certs\") pod \"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a\" (UID: \"1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a\") " Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.009691 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a" (UID: "1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.009982 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a" (UID: "1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.010441 4789 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.010456 4789 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.014920 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-kube-api-access-tr7h9" (OuterVolumeSpecName: "kube-api-access-tr7h9") pod "1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a" (UID: "1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a"). InnerVolumeSpecName "kube-api-access-tr7h9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.015065 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-scripts" (OuterVolumeSpecName: "scripts") pod "1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a" (UID: "1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.021821 4789 scope.go:117] "RemoveContainer" containerID="4c7189c4be6e0d2b75e7827d5f5c5ee84da6eaea4bbc1aa79e5145b1d90aab0b" Feb 02 21:41:38 crc kubenswrapper[4789]: E0202 21:41:38.022300 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c7189c4be6e0d2b75e7827d5f5c5ee84da6eaea4bbc1aa79e5145b1d90aab0b\": container with ID starting with 4c7189c4be6e0d2b75e7827d5f5c5ee84da6eaea4bbc1aa79e5145b1d90aab0b not found: ID does not exist" containerID="4c7189c4be6e0d2b75e7827d5f5c5ee84da6eaea4bbc1aa79e5145b1d90aab0b" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.022337 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c7189c4be6e0d2b75e7827d5f5c5ee84da6eaea4bbc1aa79e5145b1d90aab0b"} err="failed to get container status \"4c7189c4be6e0d2b75e7827d5f5c5ee84da6eaea4bbc1aa79e5145b1d90aab0b\": rpc error: code = NotFound desc = could not find container \"4c7189c4be6e0d2b75e7827d5f5c5ee84da6eaea4bbc1aa79e5145b1d90aab0b\": container with ID starting with 4c7189c4be6e0d2b75e7827d5f5c5ee84da6eaea4bbc1aa79e5145b1d90aab0b not found: ID does not exist" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.022363 4789 scope.go:117] "RemoveContainer" containerID="d40a4908b50914e45821732601107d538a620d14b9c9a51407085b5a97b7c614" Feb 02 21:41:38 crc kubenswrapper[4789]: E0202 21:41:38.022673 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d40a4908b50914e45821732601107d538a620d14b9c9a51407085b5a97b7c614\": container with ID starting with d40a4908b50914e45821732601107d538a620d14b9c9a51407085b5a97b7c614 not found: ID does not exist" containerID="d40a4908b50914e45821732601107d538a620d14b9c9a51407085b5a97b7c614" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.022700 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d40a4908b50914e45821732601107d538a620d14b9c9a51407085b5a97b7c614"} err="failed to get container status \"d40a4908b50914e45821732601107d538a620d14b9c9a51407085b5a97b7c614\": rpc error: code = NotFound desc = could not find container \"d40a4908b50914e45821732601107d538a620d14b9c9a51407085b5a97b7c614\": container with ID starting with d40a4908b50914e45821732601107d538a620d14b9c9a51407085b5a97b7c614 not found: ID does not exist" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.022732 4789 scope.go:117] "RemoveContainer" containerID="fa505511f5bdb1f9454c7fb0ffd8dc6284220c7d716d063dd61660e06e6929a4" Feb 02 21:41:38 crc kubenswrapper[4789]: E0202 21:41:38.023113 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa505511f5bdb1f9454c7fb0ffd8dc6284220c7d716d063dd61660e06e6929a4\": container with ID starting with fa505511f5bdb1f9454c7fb0ffd8dc6284220c7d716d063dd61660e06e6929a4 not found: ID does not exist" containerID="fa505511f5bdb1f9454c7fb0ffd8dc6284220c7d716d063dd61660e06e6929a4" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.023141 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa505511f5bdb1f9454c7fb0ffd8dc6284220c7d716d063dd61660e06e6929a4"} err="failed to get container status \"fa505511f5bdb1f9454c7fb0ffd8dc6284220c7d716d063dd61660e06e6929a4\": rpc error: code = NotFound desc = could not 
find container \"fa505511f5bdb1f9454c7fb0ffd8dc6284220c7d716d063dd61660e06e6929a4\": container with ID starting with fa505511f5bdb1f9454c7fb0ffd8dc6284220c7d716d063dd61660e06e6929a4 not found: ID does not exist" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.023159 4789 scope.go:117] "RemoveContainer" containerID="b424e43858ad870d584f7acc035252d5ea4362b1c11557f71598ce628935fffe" Feb 02 21:41:38 crc kubenswrapper[4789]: E0202 21:41:38.023410 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b424e43858ad870d584f7acc035252d5ea4362b1c11557f71598ce628935fffe\": container with ID starting with b424e43858ad870d584f7acc035252d5ea4362b1c11557f71598ce628935fffe not found: ID does not exist" containerID="b424e43858ad870d584f7acc035252d5ea4362b1c11557f71598ce628935fffe" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.023439 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b424e43858ad870d584f7acc035252d5ea4362b1c11557f71598ce628935fffe"} err="failed to get container status \"b424e43858ad870d584f7acc035252d5ea4362b1c11557f71598ce628935fffe\": rpc error: code = NotFound desc = could not find container \"b424e43858ad870d584f7acc035252d5ea4362b1c11557f71598ce628935fffe\": container with ID starting with b424e43858ad870d584f7acc035252d5ea4362b1c11557f71598ce628935fffe not found: ID does not exist" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.041891 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a" (UID: "1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.083288 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a" (UID: "1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.108767 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-config-data" (OuterVolumeSpecName: "config-data") pod "1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a" (UID: "1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.111859 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.111882 4789 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.111891 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.111900 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tr7h9\" (UniqueName: \"kubernetes.io/projected/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-kube-api-access-tr7h9\") on node \"crc\" DevicePath \"\"" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.111910 4789 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.119405 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a" (UID: "1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.216573 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.337088 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.349352 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.374752 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 21:41:38 crc kubenswrapper[4789]: E0202 21:41:38.375217 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a" containerName="ceilometer-central-agent" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.375236 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a" containerName="ceilometer-central-agent" Feb 02 21:41:38 crc kubenswrapper[4789]: E0202 21:41:38.375260 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a" containerName="ceilometer-notification-agent" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.375267 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a" containerName="ceilometer-notification-agent" Feb 02 21:41:38 crc kubenswrapper[4789]: E0202 21:41:38.375280 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a" containerName="sg-core" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.375285 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a" containerName="sg-core" Feb 02 21:41:38 crc kubenswrapper[4789]: E0202 21:41:38.375295 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a" containerName="proxy-httpd" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.375300 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a" containerName="proxy-httpd" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.375460 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a" containerName="proxy-httpd" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.375477 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a" containerName="sg-core" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.375493 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a" containerName="ceilometer-notification-agent" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.375505 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a" containerName="ceilometer-central-agent" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.377431 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.379096 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.379725 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.379962 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.385218 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.431790 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a" path="/var/lib/kubelet/pods/1f4adac0-6f11-4e4b-89a1-702f2ae3bd5a/volumes" Feb 02 21:41:38 crc kubenswrapper[4789]: E0202 21:41:38.490111 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c6adcda4d30dd8d80348de151844b0e666f7e98189122d438d68436e5a5ba717 is running failed: container process not found" containerID="c6adcda4d30dd8d80348de151844b0e666f7e98189122d438d68436e5a5ba717" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 21:41:38 crc kubenswrapper[4789]: E0202 21:41:38.492756 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c6adcda4d30dd8d80348de151844b0e666f7e98189122d438d68436e5a5ba717 is running failed: container process not found" containerID="c6adcda4d30dd8d80348de151844b0e666f7e98189122d438d68436e5a5ba717" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 21:41:38 crc kubenswrapper[4789]: E0202 21:41:38.493029 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c6adcda4d30dd8d80348de151844b0e666f7e98189122d438d68436e5a5ba717 is running failed: container process not found" containerID="c6adcda4d30dd8d80348de151844b0e666f7e98189122d438d68436e5a5ba717" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 21:41:38 crc kubenswrapper[4789]: E0202 21:41:38.493069 4789 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c6adcda4d30dd8d80348de151844b0e666f7e98189122d438d68436e5a5ba717 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="d2f42703-8d3d-4f46-9cdb-924e5d849c42" containerName="nova-scheduler-scheduler" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.529771 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5438d2ed-8212-4b05-9074-5b888caf6884-config-data\") pod \"ceilometer-0\" (UID: \"5438d2ed-8212-4b05-9074-5b888caf6884\") " pod="openstack/ceilometer-0" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.529807 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5438d2ed-8212-4b05-9074-5b888caf6884-log-httpd\") pod \"ceilometer-0\" (UID: \"5438d2ed-8212-4b05-9074-5b888caf6884\") " pod="openstack/ceilometer-0" Feb 02 21:41:38 crc 
kubenswrapper[4789]: I0202 21:41:38.529831 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd8ff\" (UniqueName: \"kubernetes.io/projected/5438d2ed-8212-4b05-9074-5b888caf6884-kube-api-access-xd8ff\") pod \"ceilometer-0\" (UID: \"5438d2ed-8212-4b05-9074-5b888caf6884\") " pod="openstack/ceilometer-0" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.529857 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5438d2ed-8212-4b05-9074-5b888caf6884-run-httpd\") pod \"ceilometer-0\" (UID: \"5438d2ed-8212-4b05-9074-5b888caf6884\") " pod="openstack/ceilometer-0" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.529940 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5438d2ed-8212-4b05-9074-5b888caf6884-scripts\") pod \"ceilometer-0\" (UID: \"5438d2ed-8212-4b05-9074-5b888caf6884\") " pod="openstack/ceilometer-0" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.530995 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5438d2ed-8212-4b05-9074-5b888caf6884-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5438d2ed-8212-4b05-9074-5b888caf6884\") " pod="openstack/ceilometer-0" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.531100 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5438d2ed-8212-4b05-9074-5b888caf6884-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5438d2ed-8212-4b05-9074-5b888caf6884\") " pod="openstack/ceilometer-0" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.531188 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5438d2ed-8212-4b05-9074-5b888caf6884-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5438d2ed-8212-4b05-9074-5b888caf6884\") " pod="openstack/ceilometer-0" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.625290 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.632504 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5438d2ed-8212-4b05-9074-5b888caf6884-scripts\") pod \"ceilometer-0\" (UID: \"5438d2ed-8212-4b05-9074-5b888caf6884\") " pod="openstack/ceilometer-0" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.632760 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5438d2ed-8212-4b05-9074-5b888caf6884-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5438d2ed-8212-4b05-9074-5b888caf6884\") " pod="openstack/ceilometer-0" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.632854 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5438d2ed-8212-4b05-9074-5b888caf6884-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5438d2ed-8212-4b05-9074-5b888caf6884\") " pod="openstack/ceilometer-0" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.632936 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5438d2ed-8212-4b05-9074-5b888caf6884-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5438d2ed-8212-4b05-9074-5b888caf6884\") " pod="openstack/ceilometer-0" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.633101 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5438d2ed-8212-4b05-9074-5b888caf6884-config-data\") pod \"ceilometer-0\" (UID: \"5438d2ed-8212-4b05-9074-5b888caf6884\") " pod="openstack/ceilometer-0" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.633203 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5438d2ed-8212-4b05-9074-5b888caf6884-log-httpd\") pod \"ceilometer-0\" (UID: \"5438d2ed-8212-4b05-9074-5b888caf6884\") " pod="openstack/ceilometer-0" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.633303 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd8ff\" (UniqueName: \"kubernetes.io/projected/5438d2ed-8212-4b05-9074-5b888caf6884-kube-api-access-xd8ff\") pod \"ceilometer-0\" (UID: \"5438d2ed-8212-4b05-9074-5b888caf6884\") " pod="openstack/ceilometer-0" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.633401 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5438d2ed-8212-4b05-9074-5b888caf6884-run-httpd\") pod \"ceilometer-0\" (UID: \"5438d2ed-8212-4b05-9074-5b888caf6884\") " pod="openstack/ceilometer-0" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.633912 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5438d2ed-8212-4b05-9074-5b888caf6884-run-httpd\") pod \"ceilometer-0\" (UID: \"5438d2ed-8212-4b05-9074-5b888caf6884\") " pod="openstack/ceilometer-0" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.635048 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5438d2ed-8212-4b05-9074-5b888caf6884-log-httpd\") pod \"ceilometer-0\" (UID: \"5438d2ed-8212-4b05-9074-5b888caf6884\") " 
pod="openstack/ceilometer-0" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.638240 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5438d2ed-8212-4b05-9074-5b888caf6884-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5438d2ed-8212-4b05-9074-5b888caf6884\") " pod="openstack/ceilometer-0" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.641082 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5438d2ed-8212-4b05-9074-5b888caf6884-scripts\") pod \"ceilometer-0\" (UID: \"5438d2ed-8212-4b05-9074-5b888caf6884\") " pod="openstack/ceilometer-0" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.685876 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5438d2ed-8212-4b05-9074-5b888caf6884-config-data\") pod \"ceilometer-0\" (UID: \"5438d2ed-8212-4b05-9074-5b888caf6884\") " pod="openstack/ceilometer-0" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.685922 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5438d2ed-8212-4b05-9074-5b888caf6884-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5438d2ed-8212-4b05-9074-5b888caf6884\") " pod="openstack/ceilometer-0" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.688118 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5438d2ed-8212-4b05-9074-5b888caf6884-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5438d2ed-8212-4b05-9074-5b888caf6884\") " pod="openstack/ceilometer-0" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.714325 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd8ff\" (UniqueName: \"kubernetes.io/projected/5438d2ed-8212-4b05-9074-5b888caf6884-kube-api-access-xd8ff\") pod \"ceilometer-0\" (UID: \"5438d2ed-8212-4b05-9074-5b888caf6884\") " pod="openstack/ceilometer-0" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.734709 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jg9jl\" (UniqueName: \"kubernetes.io/projected/d2f42703-8d3d-4f46-9cdb-924e5d849c42-kube-api-access-jg9jl\") pod \"d2f42703-8d3d-4f46-9cdb-924e5d849c42\" (UID: \"d2f42703-8d3d-4f46-9cdb-924e5d849c42\") " Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.735044 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2f42703-8d3d-4f46-9cdb-924e5d849c42-combined-ca-bundle\") pod \"d2f42703-8d3d-4f46-9cdb-924e5d849c42\" (UID: \"d2f42703-8d3d-4f46-9cdb-924e5d849c42\") " Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.735456 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2f42703-8d3d-4f46-9cdb-924e5d849c42-config-data\") pod \"d2f42703-8d3d-4f46-9cdb-924e5d849c42\" (UID: \"d2f42703-8d3d-4f46-9cdb-924e5d849c42\") " Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.738388 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2f42703-8d3d-4f46-9cdb-924e5d849c42-kube-api-access-jg9jl" (OuterVolumeSpecName: "kube-api-access-jg9jl") pod "d2f42703-8d3d-4f46-9cdb-924e5d849c42" (UID: 
"d2f42703-8d3d-4f46-9cdb-924e5d849c42"). InnerVolumeSpecName "kube-api-access-jg9jl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.759926 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2f42703-8d3d-4f46-9cdb-924e5d849c42-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d2f42703-8d3d-4f46-9cdb-924e5d849c42" (UID: "d2f42703-8d3d-4f46-9cdb-924e5d849c42"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.760377 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2f42703-8d3d-4f46-9cdb-924e5d849c42-config-data" (OuterVolumeSpecName: "config-data") pod "d2f42703-8d3d-4f46-9cdb-924e5d849c42" (UID: "d2f42703-8d3d-4f46-9cdb-924e5d849c42"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.839811 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2f42703-8d3d-4f46-9cdb-924e5d849c42-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.839846 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jg9jl\" (UniqueName: \"kubernetes.io/projected/d2f42703-8d3d-4f46-9cdb-924e5d849c42-kube-api-access-jg9jl\") on node \"crc\" DevicePath \"\"" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.839860 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2f42703-8d3d-4f46-9cdb-924e5d849c42-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.936722 4789 generic.go:334] "Generic (PLEG): container finished" podID="d2f42703-8d3d-4f46-9cdb-924e5d849c42" containerID="c6adcda4d30dd8d80348de151844b0e666f7e98189122d438d68436e5a5ba717" exitCode=0 Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.936767 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d2f42703-8d3d-4f46-9cdb-924e5d849c42","Type":"ContainerDied","Data":"c6adcda4d30dd8d80348de151844b0e666f7e98189122d438d68436e5a5ba717"} Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.936793 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d2f42703-8d3d-4f46-9cdb-924e5d849c42","Type":"ContainerDied","Data":"f4028b935cbfdbadd501e4967b4651780d7d9a895ede4a56bca034fb3e825f25"} Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.936814 4789 scope.go:117] "RemoveContainer" containerID="c6adcda4d30dd8d80348de151844b0e666f7e98189122d438d68436e5a5ba717" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.936929 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.956860 4789 scope.go:117] "RemoveContainer" containerID="c6adcda4d30dd8d80348de151844b0e666f7e98189122d438d68436e5a5ba717" Feb 02 21:41:38 crc kubenswrapper[4789]: E0202 21:41:38.957347 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6adcda4d30dd8d80348de151844b0e666f7e98189122d438d68436e5a5ba717\": container with ID starting with c6adcda4d30dd8d80348de151844b0e666f7e98189122d438d68436e5a5ba717 not found: ID does not exist" containerID="c6adcda4d30dd8d80348de151844b0e666f7e98189122d438d68436e5a5ba717" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.957391 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6adcda4d30dd8d80348de151844b0e666f7e98189122d438d68436e5a5ba717"} err="failed to get container status \"c6adcda4d30dd8d80348de151844b0e666f7e98189122d438d68436e5a5ba717\": rpc error: code = NotFound desc = could not find container \"c6adcda4d30dd8d80348de151844b0e666f7e98189122d438d68436e5a5ba717\": container with ID starting with c6adcda4d30dd8d80348de151844b0e666f7e98189122d438d68436e5a5ba717 not found: ID does not exist" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.971918 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.981676 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.989709 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 21:41:38 crc kubenswrapper[4789]: E0202 21:41:38.990380 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2f42703-8d3d-4f46-9cdb-924e5d849c42" containerName="nova-scheduler-scheduler" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.990415 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2f42703-8d3d-4f46-9cdb-924e5d849c42" containerName="nova-scheduler-scheduler" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.990749 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2f42703-8d3d-4f46-9cdb-924e5d849c42" containerName="nova-scheduler-scheduler" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.991813 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 21:41:38 crc kubenswrapper[4789]: I0202 21:41:38.996863 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 21:41:39 crc kubenswrapper[4789]: I0202 21:41:39.000228 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 02 21:41:39 crc kubenswrapper[4789]: I0202 21:41:39.007818 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 21:41:39 crc kubenswrapper[4789]: I0202 21:41:39.144915 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6-config-data\") pod \"nova-scheduler-0\" (UID: \"731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6\") " pod="openstack/nova-scheduler-0" Feb 02 21:41:39 crc kubenswrapper[4789]: I0202 21:41:39.145196 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-545j4\" (UniqueName: \"kubernetes.io/projected/731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6-kube-api-access-545j4\") pod \"nova-scheduler-0\" (UID: \"731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6\") " pod="openstack/nova-scheduler-0" Feb 02 21:41:39 crc kubenswrapper[4789]: I0202 21:41:39.145223 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6\") " pod="openstack/nova-scheduler-0" Feb 02 21:41:39 crc kubenswrapper[4789]: I0202 21:41:39.246443 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6-config-data\") pod \"nova-scheduler-0\" (UID: \"731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6\") " pod="openstack/nova-scheduler-0" Feb 02 21:41:39 crc kubenswrapper[4789]: I0202 21:41:39.246496 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-545j4\" (UniqueName: \"kubernetes.io/projected/731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6-kube-api-access-545j4\") pod \"nova-scheduler-0\" (UID: \"731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6\") " pod="openstack/nova-scheduler-0" Feb 02 21:41:39 crc kubenswrapper[4789]: I0202 21:41:39.246527 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6\") " pod="openstack/nova-scheduler-0" Feb 02 21:41:39 crc kubenswrapper[4789]: I0202 21:41:39.263352 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6\") " pod="openstack/nova-scheduler-0" Feb 02 21:41:39 crc kubenswrapper[4789]: I0202 21:41:39.264495 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6-config-data\") pod \"nova-scheduler-0\" (UID: \"731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6\") " pod="openstack/nova-scheduler-0" Feb 02 21:41:39 crc kubenswrapper[4789]: I0202 21:41:39.267028 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-545j4\" (UniqueName: \"kubernetes.io/projected/731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6-kube-api-access-545j4\") pod \"nova-scheduler-0\" (UID: \"731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6\") " pod="openstack/nova-scheduler-0" Feb 02 21:41:39 crc kubenswrapper[4789]: I0202 21:41:39.313978 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 21:41:39 crc kubenswrapper[4789]: I0202 21:41:39.467075 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 21:41:39 crc kubenswrapper[4789]: W0202 21:41:39.508228 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5438d2ed_8212_4b05_9074_5b888caf6884.slice/crio-8b923ecdd3c8ee43f79cfdf54d07db86433100876e1fba369cff3f7fefe5f86e WatchSource:0}: Error finding container 8b923ecdd3c8ee43f79cfdf54d07db86433100876e1fba369cff3f7fefe5f86e: Status 404 returned error can't find the container with id 8b923ecdd3c8ee43f79cfdf54d07db86433100876e1fba369cff3f7fefe5f86e Feb 02 21:41:39 crc kubenswrapper[4789]: I0202 21:41:39.802926 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 21:41:39 crc kubenswrapper[4789]: W0202 21:41:39.811771 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod731fdb9f_c2ed_41a9_a94e_9a2e1e9bd1a6.slice/crio-61da846cae4f3215cc186b7033b344b77c8cbacb74b49107957b6a90d7a82cd1 WatchSource:0}: Error finding container 61da846cae4f3215cc186b7033b344b77c8cbacb74b49107957b6a90d7a82cd1: Status 404 returned error can't find the container with id 61da846cae4f3215cc186b7033b344b77c8cbacb74b49107957b6a90d7a82cd1 Feb 02 21:41:39 crc kubenswrapper[4789]: I0202 21:41:39.868940 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 21:41:39 crc kubenswrapper[4789]: I0202 21:41:39.958929 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6","Type":"ContainerStarted","Data":"61da846cae4f3215cc186b7033b344b77c8cbacb74b49107957b6a90d7a82cd1"} Feb 02 21:41:39 crc kubenswrapper[4789]: I0202 21:41:39.961488 4789 generic.go:334] "Generic (PLEG): container finished" podID="7fd2fb6e-7696-41f6-9108-4e931e4f85ec" containerID="f1fefd77b479e6c16b32102559688054ee9aeae02a12dcd6ade72c915582d3ca" exitCode=0 Feb 02 21:41:39 crc kubenswrapper[4789]: I0202 21:41:39.961553 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7fd2fb6e-7696-41f6-9108-4e931e4f85ec","Type":"ContainerDied","Data":"f1fefd77b479e6c16b32102559688054ee9aeae02a12dcd6ade72c915582d3ca"} Feb 02 21:41:39 crc kubenswrapper[4789]: I0202 21:41:39.961605 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7fd2fb6e-7696-41f6-9108-4e931e4f85ec","Type":"ContainerDied","Data":"03486b680685e97eb11e13a66f623e6759c42bda1737b006e13fb2bc109f3a77"} Feb 02 21:41:39 crc kubenswrapper[4789]: I0202 21:41:39.961628 4789 scope.go:117] "RemoveContainer" containerID="f1fefd77b479e6c16b32102559688054ee9aeae02a12dcd6ade72c915582d3ca" Feb 02 21:41:39 crc kubenswrapper[4789]: I0202 21:41:39.961753 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 21:41:39 crc kubenswrapper[4789]: I0202 21:41:39.964393 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5438d2ed-8212-4b05-9074-5b888caf6884","Type":"ContainerStarted","Data":"8b923ecdd3c8ee43f79cfdf54d07db86433100876e1fba369cff3f7fefe5f86e"} Feb 02 21:41:39 crc kubenswrapper[4789]: I0202 21:41:39.967543 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fd2fb6e-7696-41f6-9108-4e931e4f85ec-logs\") pod \"7fd2fb6e-7696-41f6-9108-4e931e4f85ec\" (UID: \"7fd2fb6e-7696-41f6-9108-4e931e4f85ec\") " Feb 02 21:41:39 crc kubenswrapper[4789]: I0202 21:41:39.968098 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fd2fb6e-7696-41f6-9108-4e931e4f85ec-logs" (OuterVolumeSpecName: "logs") pod "7fd2fb6e-7696-41f6-9108-4e931e4f85ec" (UID: "7fd2fb6e-7696-41f6-9108-4e931e4f85ec"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:41:39 crc kubenswrapper[4789]: I0202 21:41:39.968924 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fd2fb6e-7696-41f6-9108-4e931e4f85ec-combined-ca-bundle\") pod \"7fd2fb6e-7696-41f6-9108-4e931e4f85ec\" (UID: \"7fd2fb6e-7696-41f6-9108-4e931e4f85ec\") " Feb 02 21:41:39 crc kubenswrapper[4789]: I0202 21:41:39.969031 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fg6n\" (UniqueName: \"kubernetes.io/projected/7fd2fb6e-7696-41f6-9108-4e931e4f85ec-kube-api-access-2fg6n\") pod \"7fd2fb6e-7696-41f6-9108-4e931e4f85ec\" (UID: \"7fd2fb6e-7696-41f6-9108-4e931e4f85ec\") " Feb 02 21:41:39 crc kubenswrapper[4789]: I0202 21:41:39.969087 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fd2fb6e-7696-41f6-9108-4e931e4f85ec-config-data\") pod \"7fd2fb6e-7696-41f6-9108-4e931e4f85ec\" (UID: \"7fd2fb6e-7696-41f6-9108-4e931e4f85ec\") " Feb 02 21:41:39 crc kubenswrapper[4789]: I0202 21:41:39.969635 4789 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fd2fb6e-7696-41f6-9108-4e931e4f85ec-logs\") on node \"crc\" DevicePath \"\"" Feb 02 21:41:39 crc kubenswrapper[4789]: I0202 21:41:39.972493 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fd2fb6e-7696-41f6-9108-4e931e4f85ec-kube-api-access-2fg6n" (OuterVolumeSpecName: "kube-api-access-2fg6n") pod "7fd2fb6e-7696-41f6-9108-4e931e4f85ec" (UID: "7fd2fb6e-7696-41f6-9108-4e931e4f85ec"). InnerVolumeSpecName "kube-api-access-2fg6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:41:39 crc kubenswrapper[4789]: I0202 21:41:39.984080 4789 scope.go:117] "RemoveContainer" containerID="4702917ff0138df6eadd26c7899c76d739c77baac82a5b4ae2d09c3b400326ec" Feb 02 21:41:40 crc kubenswrapper[4789]: I0202 21:41:40.002813 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fd2fb6e-7696-41f6-9108-4e931e4f85ec-config-data" (OuterVolumeSpecName: "config-data") pod "7fd2fb6e-7696-41f6-9108-4e931e4f85ec" (UID: "7fd2fb6e-7696-41f6-9108-4e931e4f85ec"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:41:40 crc kubenswrapper[4789]: I0202 21:41:40.009981 4789 scope.go:117] "RemoveContainer" containerID="f1fefd77b479e6c16b32102559688054ee9aeae02a12dcd6ade72c915582d3ca" Feb 02 21:41:40 crc kubenswrapper[4789]: E0202 21:41:40.010570 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1fefd77b479e6c16b32102559688054ee9aeae02a12dcd6ade72c915582d3ca\": container with ID starting with f1fefd77b479e6c16b32102559688054ee9aeae02a12dcd6ade72c915582d3ca not found: ID does not exist" containerID="f1fefd77b479e6c16b32102559688054ee9aeae02a12dcd6ade72c915582d3ca" Feb 02 21:41:40 crc kubenswrapper[4789]: I0202 21:41:40.010625 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1fefd77b479e6c16b32102559688054ee9aeae02a12dcd6ade72c915582d3ca"} err="failed to get container status \"f1fefd77b479e6c16b32102559688054ee9aeae02a12dcd6ade72c915582d3ca\": rpc error: code = NotFound desc = could not find container \"f1fefd77b479e6c16b32102559688054ee9aeae02a12dcd6ade72c915582d3ca\": container with ID starting with f1fefd77b479e6c16b32102559688054ee9aeae02a12dcd6ade72c915582d3ca not found: ID does not exist" Feb 02 21:41:40 crc kubenswrapper[4789]: I0202 21:41:40.010651 4789 scope.go:117] "RemoveContainer" containerID="4702917ff0138df6eadd26c7899c76d739c77baac82a5b4ae2d09c3b400326ec" Feb 02 21:41:40 crc kubenswrapper[4789]: E0202 21:41:40.010910 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4702917ff0138df6eadd26c7899c76d739c77baac82a5b4ae2d09c3b400326ec\": container with ID starting with 4702917ff0138df6eadd26c7899c76d739c77baac82a5b4ae2d09c3b400326ec not found: ID does not exist" containerID="4702917ff0138df6eadd26c7899c76d739c77baac82a5b4ae2d09c3b400326ec" Feb 02 21:41:40 crc kubenswrapper[4789]: I0202 21:41:40.010934 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4702917ff0138df6eadd26c7899c76d739c77baac82a5b4ae2d09c3b400326ec"} err="failed to get container status \"4702917ff0138df6eadd26c7899c76d739c77baac82a5b4ae2d09c3b400326ec\": rpc error: code = NotFound desc = could not find container \"4702917ff0138df6eadd26c7899c76d739c77baac82a5b4ae2d09c3b400326ec\": container with ID starting with 4702917ff0138df6eadd26c7899c76d739c77baac82a5b4ae2d09c3b400326ec not found: ID does not exist" Feb 02 21:41:40 crc kubenswrapper[4789]: I0202 21:41:40.019798 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fd2fb6e-7696-41f6-9108-4e931e4f85ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7fd2fb6e-7696-41f6-9108-4e931e4f85ec" (UID: "7fd2fb6e-7696-41f6-9108-4e931e4f85ec"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:41:40 crc kubenswrapper[4789]: I0202 21:41:40.070623 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fd2fb6e-7696-41f6-9108-4e931e4f85ec-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:41:40 crc kubenswrapper[4789]: I0202 21:41:40.070658 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fd2fb6e-7696-41f6-9108-4e931e4f85ec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:41:40 crc kubenswrapper[4789]: I0202 21:41:40.070672 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fg6n\" (UniqueName: \"kubernetes.io/projected/7fd2fb6e-7696-41f6-9108-4e931e4f85ec-kube-api-access-2fg6n\") on node \"crc\" DevicePath \"\"" Feb 02 21:41:40 crc kubenswrapper[4789]: I0202 21:41:40.302041 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 21:41:40 crc kubenswrapper[4789]: I0202 21:41:40.308837 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 02 21:41:40 crc kubenswrapper[4789]: I0202 21:41:40.327594 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 21:41:40 crc kubenswrapper[4789]: E0202 21:41:40.328058 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fd2fb6e-7696-41f6-9108-4e931e4f85ec" containerName="nova-api-log" Feb 02 21:41:40 crc kubenswrapper[4789]: I0202 21:41:40.328076 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fd2fb6e-7696-41f6-9108-4e931e4f85ec" containerName="nova-api-log" Feb 02 21:41:40 crc kubenswrapper[4789]: E0202 21:41:40.328093 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fd2fb6e-7696-41f6-9108-4e931e4f85ec" containerName="nova-api-api" Feb 02 21:41:40 crc kubenswrapper[4789]: I0202 21:41:40.328101 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fd2fb6e-7696-41f6-9108-4e931e4f85ec" containerName="nova-api-api" Feb 02 21:41:40 crc kubenswrapper[4789]: I0202 21:41:40.328276 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fd2fb6e-7696-41f6-9108-4e931e4f85ec" containerName="nova-api-api" Feb 02 21:41:40 crc kubenswrapper[4789]: I0202 21:41:40.328295 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fd2fb6e-7696-41f6-9108-4e931e4f85ec" containerName="nova-api-log" Feb 02 21:41:40 crc kubenswrapper[4789]: I0202 21:41:40.329333 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 21:41:40 crc kubenswrapper[4789]: I0202 21:41:40.332607 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 21:41:40 crc kubenswrapper[4789]: I0202 21:41:40.347799 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 21:41:40 crc kubenswrapper[4789]: I0202 21:41:40.389459 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7098ef7-f9ae-4cd8-8264-9cfec2c20343-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a7098ef7-f9ae-4cd8-8264-9cfec2c20343\") " pod="openstack/nova-api-0" Feb 02 21:41:40 crc kubenswrapper[4789]: I0202 21:41:40.389517 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7098ef7-f9ae-4cd8-8264-9cfec2c20343-config-data\") pod \"nova-api-0\" (UID: \"a7098ef7-f9ae-4cd8-8264-9cfec2c20343\") " pod="openstack/nova-api-0" Feb 02 21:41:40 crc kubenswrapper[4789]: I0202 21:41:40.389578 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7098ef7-f9ae-4cd8-8264-9cfec2c20343-logs\") pod \"nova-api-0\" (UID: \"a7098ef7-f9ae-4cd8-8264-9cfec2c20343\") " pod="openstack/nova-api-0" Feb 02 21:41:40 crc kubenswrapper[4789]: I0202 21:41:40.389651 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c729j\" (UniqueName: \"kubernetes.io/projected/a7098ef7-f9ae-4cd8-8264-9cfec2c20343-kube-api-access-c729j\") pod \"nova-api-0\" (UID: \"a7098ef7-f9ae-4cd8-8264-9cfec2c20343\") " pod="openstack/nova-api-0" Feb 02 21:41:40 crc kubenswrapper[4789]: I0202 21:41:40.475137 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fd2fb6e-7696-41f6-9108-4e931e4f85ec" path="/var/lib/kubelet/pods/7fd2fb6e-7696-41f6-9108-4e931e4f85ec/volumes" Feb 02 21:41:40 crc kubenswrapper[4789]: I0202 21:41:40.475734 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2f42703-8d3d-4f46-9cdb-924e5d849c42" path="/var/lib/kubelet/pods/d2f42703-8d3d-4f46-9cdb-924e5d849c42/volumes" Feb 02 21:41:40 crc kubenswrapper[4789]: I0202 21:41:40.497588 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7098ef7-f9ae-4cd8-8264-9cfec2c20343-logs\") pod \"nova-api-0\" (UID: \"a7098ef7-f9ae-4cd8-8264-9cfec2c20343\") " pod="openstack/nova-api-0" Feb 02 21:41:40 crc kubenswrapper[4789]: I0202 21:41:40.497815 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c729j\" (UniqueName: \"kubernetes.io/projected/a7098ef7-f9ae-4cd8-8264-9cfec2c20343-kube-api-access-c729j\") pod \"nova-api-0\" (UID: \"a7098ef7-f9ae-4cd8-8264-9cfec2c20343\") " pod="openstack/nova-api-0" Feb 02 21:41:40 crc kubenswrapper[4789]: I0202 21:41:40.498014 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7098ef7-f9ae-4cd8-8264-9cfec2c20343-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a7098ef7-f9ae-4cd8-8264-9cfec2c20343\") " pod="openstack/nova-api-0" Feb 02 21:41:40 crc kubenswrapper[4789]: I0202 21:41:40.498092 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/a7098ef7-f9ae-4cd8-8264-9cfec2c20343-config-data\") pod \"nova-api-0\" (UID: \"a7098ef7-f9ae-4cd8-8264-9cfec2c20343\") " pod="openstack/nova-api-0" Feb 02 21:41:40 crc kubenswrapper[4789]: I0202 21:41:40.498659 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7098ef7-f9ae-4cd8-8264-9cfec2c20343-logs\") pod \"nova-api-0\" (UID: \"a7098ef7-f9ae-4cd8-8264-9cfec2c20343\") " pod="openstack/nova-api-0" Feb 02 21:41:40 crc kubenswrapper[4789]: I0202 21:41:40.503135 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7098ef7-f9ae-4cd8-8264-9cfec2c20343-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a7098ef7-f9ae-4cd8-8264-9cfec2c20343\") " pod="openstack/nova-api-0" Feb 02 21:41:40 crc kubenswrapper[4789]: I0202 21:41:40.520437 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 21:41:40 crc kubenswrapper[4789]: I0202 21:41:40.525434 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c729j\" (UniqueName: \"kubernetes.io/projected/a7098ef7-f9ae-4cd8-8264-9cfec2c20343-kube-api-access-c729j\") pod \"nova-api-0\" (UID: \"a7098ef7-f9ae-4cd8-8264-9cfec2c20343\") " pod="openstack/nova-api-0" Feb 02 21:41:40 crc kubenswrapper[4789]: I0202 21:41:40.530741 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7098ef7-f9ae-4cd8-8264-9cfec2c20343-config-data\") pod \"nova-api-0\" (UID: \"a7098ef7-f9ae-4cd8-8264-9cfec2c20343\") " pod="openstack/nova-api-0" Feb 02 21:41:40 crc kubenswrapper[4789]: I0202 21:41:40.621899 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 21:41:40 crc kubenswrapper[4789]: I0202 21:41:40.622145 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 21:41:40 crc kubenswrapper[4789]: I0202 21:41:40.644526 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 21:41:40 crc kubenswrapper[4789]: I0202 21:41:40.976542 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5438d2ed-8212-4b05-9074-5b888caf6884","Type":"ContainerStarted","Data":"16bfd6f01f555276b1f6797054bf250f16a6582842aa25a0c73e10c02f3adab4"} Feb 02 21:41:40 crc kubenswrapper[4789]: I0202 21:41:40.977187 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5438d2ed-8212-4b05-9074-5b888caf6884","Type":"ContainerStarted","Data":"febcc628d8993fb3aa84fc28d8a1fab2eb763c23a62dd03cb1f67f0d559ad3cd"} Feb 02 21:41:40 crc kubenswrapper[4789]: I0202 21:41:40.978798 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6","Type":"ContainerStarted","Data":"bbf3961f4b9988969194fdfa516605852c0d8137222ae22baf8d415f3a2897d2"} Feb 02 21:41:41 crc kubenswrapper[4789]: I0202 21:41:41.002391 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.002369377 podStartE2EDuration="3.002369377s" podCreationTimestamp="2026-02-02 21:41:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:41:40.99785619 +0000 UTC m=+1321.292881209" watchObservedRunningTime="2026-02-02 21:41:41.002369377 +0000 UTC m=+1321.297394396" Feb 02 21:41:41 crc kubenswrapper[4789]: I0202 21:41:41.164720 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 21:41:41 crc kubenswrapper[4789]: I0202 21:41:41.989231 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5438d2ed-8212-4b05-9074-5b888caf6884","Type":"ContainerStarted","Data":"bd42a71f2bc7bab8bb2ee79375643b36fe789cefa12607470aa8708245e0a03b"} Feb 02 21:41:41 crc kubenswrapper[4789]: I0202 21:41:41.990872 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a7098ef7-f9ae-4cd8-8264-9cfec2c20343","Type":"ContainerStarted","Data":"2858d5548504cad9374d6af9e634a391a1a87c5d7ace7a37454a5740d5e0255d"} Feb 02 21:41:41 crc kubenswrapper[4789]: I0202 21:41:41.990918 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a7098ef7-f9ae-4cd8-8264-9cfec2c20343","Type":"ContainerStarted","Data":"9b3893ed83f44b2989913747db245e393592b03a247ee00dcedef8f1523a6e34"} Feb 02 21:41:41 crc kubenswrapper[4789]: I0202 21:41:41.990928 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a7098ef7-f9ae-4cd8-8264-9cfec2c20343","Type":"ContainerStarted","Data":"2f8df1c5cb268f99c420e7a2d002325138a9cc8310775a053079aafedc2a20f7"} Feb 02 21:41:42 crc kubenswrapper[4789]: I0202 21:41:42.017094 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.017074486 podStartE2EDuration="2.017074486s" podCreationTimestamp="2026-02-02 21:41:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:41:42.0097781 +0000 UTC m=+1322.304803119" watchObservedRunningTime="2026-02-02 21:41:42.017074486 +0000 UTC m=+1322.312099515" Feb 02 21:41:44 crc kubenswrapper[4789]: I0202 21:41:44.224830 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"5438d2ed-8212-4b05-9074-5b888caf6884","Type":"ContainerStarted","Data":"feadf05deec69fec401ad5dd588e4ad129121bed274e977bc029d1f08c15932d"} Feb 02 21:41:44 crc kubenswrapper[4789]: I0202 21:41:44.225309 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 21:41:44 crc kubenswrapper[4789]: I0202 21:41:44.248982 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.9566523710000001 podStartE2EDuration="6.248966271s" podCreationTimestamp="2026-02-02 21:41:38 +0000 UTC" firstStartedPulling="2026-02-02 21:41:39.510439967 +0000 UTC m=+1319.805464986" lastFinishedPulling="2026-02-02 21:41:43.802753827 +0000 UTC m=+1324.097778886" observedRunningTime="2026-02-02 21:41:44.247907391 +0000 UTC m=+1324.542932420" watchObservedRunningTime="2026-02-02 21:41:44.248966271 +0000 UTC m=+1324.543991290" Feb 02 21:41:44 crc kubenswrapper[4789]: I0202 21:41:44.315101 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 02 21:41:45 crc kubenswrapper[4789]: I0202 21:41:45.611326 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 02 21:41:45 crc kubenswrapper[4789]: I0202 21:41:45.621956 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 02 21:41:45 crc kubenswrapper[4789]: I0202 21:41:45.623976 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 02 21:41:46 crc kubenswrapper[4789]: I0202 21:41:46.634813 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2eed9773-f2f1-4d61-8b88-c0eb30620612" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 21:41:46 crc kubenswrapper[4789]: I0202 21:41:46.635192 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2eed9773-f2f1-4d61-8b88-c0eb30620612" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 21:41:49 crc kubenswrapper[4789]: I0202 21:41:49.314810 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 02 21:41:49 crc kubenswrapper[4789]: I0202 21:41:49.364092 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 02 21:41:50 crc kubenswrapper[4789]: I0202 21:41:50.332037 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 02 21:41:50 crc kubenswrapper[4789]: I0202 21:41:50.645925 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 21:41:50 crc kubenswrapper[4789]: I0202 21:41:50.646267 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 21:41:51 crc kubenswrapper[4789]: I0202 21:41:51.727809 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a7098ef7-f9ae-4cd8-8264-9cfec2c20343" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.198:8774/\": context deadline exceeded (Client.Timeout exceeded while 
awaiting headers)" Feb 02 21:41:51 crc kubenswrapper[4789]: I0202 21:41:51.727838 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a7098ef7-f9ae-4cd8-8264-9cfec2c20343" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.198:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 21:41:55 crc kubenswrapper[4789]: I0202 21:41:55.628169 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 02 21:41:55 crc kubenswrapper[4789]: I0202 21:41:55.630741 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 02 21:41:55 crc kubenswrapper[4789]: I0202 21:41:55.640317 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 02 21:41:56 crc kubenswrapper[4789]: I0202 21:41:56.355026 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.200657 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.297574 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6140fbc7-6f0e-43dd-a95c-50a4dc52c351-combined-ca-bundle\") pod \"6140fbc7-6f0e-43dd-a95c-50a4dc52c351\" (UID: \"6140fbc7-6f0e-43dd-a95c-50a4dc52c351\") " Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.297865 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6140fbc7-6f0e-43dd-a95c-50a4dc52c351-config-data\") pod \"6140fbc7-6f0e-43dd-a95c-50a4dc52c351\" (UID: \"6140fbc7-6f0e-43dd-a95c-50a4dc52c351\") " Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.297961 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gb22x\" (UniqueName: \"kubernetes.io/projected/6140fbc7-6f0e-43dd-a95c-50a4dc52c351-kube-api-access-gb22x\") pod \"6140fbc7-6f0e-43dd-a95c-50a4dc52c351\" (UID: \"6140fbc7-6f0e-43dd-a95c-50a4dc52c351\") " Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.303788 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6140fbc7-6f0e-43dd-a95c-50a4dc52c351-kube-api-access-gb22x" (OuterVolumeSpecName: "kube-api-access-gb22x") pod "6140fbc7-6f0e-43dd-a95c-50a4dc52c351" (UID: "6140fbc7-6f0e-43dd-a95c-50a4dc52c351"). InnerVolumeSpecName "kube-api-access-gb22x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.324655 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6140fbc7-6f0e-43dd-a95c-50a4dc52c351-config-data" (OuterVolumeSpecName: "config-data") pod "6140fbc7-6f0e-43dd-a95c-50a4dc52c351" (UID: "6140fbc7-6f0e-43dd-a95c-50a4dc52c351"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.347843 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6140fbc7-6f0e-43dd-a95c-50a4dc52c351-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6140fbc7-6f0e-43dd-a95c-50a4dc52c351" (UID: "6140fbc7-6f0e-43dd-a95c-50a4dc52c351"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.365165 4789 generic.go:334] "Generic (PLEG): container finished" podID="6140fbc7-6f0e-43dd-a95c-50a4dc52c351" containerID="d6f2324a46251235fd022eec6e2e8619f37bc7e11c247719a5f7a99e3fc575fd" exitCode=137 Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.365485 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6140fbc7-6f0e-43dd-a95c-50a4dc52c351","Type":"ContainerDied","Data":"d6f2324a46251235fd022eec6e2e8619f37bc7e11c247719a5f7a99e3fc575fd"} Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.365467 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.365655 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6140fbc7-6f0e-43dd-a95c-50a4dc52c351","Type":"ContainerDied","Data":"46d4cb2b55cc78c303e679b0cac70ca3019da98e8727eecfa2ed96a76009aa59"} Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.365722 4789 scope.go:117] "RemoveContainer" containerID="d6f2324a46251235fd022eec6e2e8619f37bc7e11c247719a5f7a99e3fc575fd" Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.400276 4789 scope.go:117] "RemoveContainer" containerID="d6f2324a46251235fd022eec6e2e8619f37bc7e11c247719a5f7a99e3fc575fd" Feb 02 21:41:58 crc kubenswrapper[4789]: E0202 21:41:58.405896 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6f2324a46251235fd022eec6e2e8619f37bc7e11c247719a5f7a99e3fc575fd\": container with ID starting with d6f2324a46251235fd022eec6e2e8619f37bc7e11c247719a5f7a99e3fc575fd not found: ID does not exist" containerID="d6f2324a46251235fd022eec6e2e8619f37bc7e11c247719a5f7a99e3fc575fd" Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.406455 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6f2324a46251235fd022eec6e2e8619f37bc7e11c247719a5f7a99e3fc575fd"} err="failed to get container status \"d6f2324a46251235fd022eec6e2e8619f37bc7e11c247719a5f7a99e3fc575fd\": rpc error: code = NotFound desc = could not find container \"d6f2324a46251235fd022eec6e2e8619f37bc7e11c247719a5f7a99e3fc575fd\": container with ID starting with d6f2324a46251235fd022eec6e2e8619f37bc7e11c247719a5f7a99e3fc575fd not found: ID does not exist" Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.409109 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6140fbc7-6f0e-43dd-a95c-50a4dc52c351-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.409244 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6140fbc7-6f0e-43dd-a95c-50a4dc52c351-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 
21:41:58.409327 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gb22x\" (UniqueName: \"kubernetes.io/projected/6140fbc7-6f0e-43dd-a95c-50a4dc52c351-kube-api-access-gb22x\") on node \"crc\" DevicePath \"\"" Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.438431 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.444776 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.457965 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 21:41:58 crc kubenswrapper[4789]: E0202 21:41:58.458780 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6140fbc7-6f0e-43dd-a95c-50a4dc52c351" containerName="nova-cell1-novncproxy-novncproxy" Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.458814 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="6140fbc7-6f0e-43dd-a95c-50a4dc52c351" containerName="nova-cell1-novncproxy-novncproxy" Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.459116 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="6140fbc7-6f0e-43dd-a95c-50a4dc52c351" containerName="nova-cell1-novncproxy-novncproxy" Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.460094 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.466807 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.467115 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.468932 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.484025 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.614014 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9d0bd72-572d-4b90-b747-f37b490b3e4a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f9d0bd72-572d-4b90-b747-f37b490b3e4a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.614192 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hshqq\" (UniqueName: \"kubernetes.io/projected/f9d0bd72-572d-4b90-b747-f37b490b3e4a-kube-api-access-hshqq\") pod \"nova-cell1-novncproxy-0\" (UID: \"f9d0bd72-572d-4b90-b747-f37b490b3e4a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.614242 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9d0bd72-572d-4b90-b747-f37b490b3e4a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f9d0bd72-572d-4b90-b747-f37b490b3e4a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.614303 4789 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9d0bd72-572d-4b90-b747-f37b490b3e4a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f9d0bd72-572d-4b90-b747-f37b490b3e4a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.614486 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9d0bd72-572d-4b90-b747-f37b490b3e4a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f9d0bd72-572d-4b90-b747-f37b490b3e4a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.717179 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9d0bd72-572d-4b90-b747-f37b490b3e4a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f9d0bd72-572d-4b90-b747-f37b490b3e4a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.717310 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hshqq\" (UniqueName: \"kubernetes.io/projected/f9d0bd72-572d-4b90-b747-f37b490b3e4a-kube-api-access-hshqq\") pod \"nova-cell1-novncproxy-0\" (UID: \"f9d0bd72-572d-4b90-b747-f37b490b3e4a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.717357 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9d0bd72-572d-4b90-b747-f37b490b3e4a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f9d0bd72-572d-4b90-b747-f37b490b3e4a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.717391 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9d0bd72-572d-4b90-b747-f37b490b3e4a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f9d0bd72-572d-4b90-b747-f37b490b3e4a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.717495 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9d0bd72-572d-4b90-b747-f37b490b3e4a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f9d0bd72-572d-4b90-b747-f37b490b3e4a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.722257 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9d0bd72-572d-4b90-b747-f37b490b3e4a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f9d0bd72-572d-4b90-b747-f37b490b3e4a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.723491 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9d0bd72-572d-4b90-b747-f37b490b3e4a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f9d0bd72-572d-4b90-b747-f37b490b3e4a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.723666 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9d0bd72-572d-4b90-b747-f37b490b3e4a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f9d0bd72-572d-4b90-b747-f37b490b3e4a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.723573 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9d0bd72-572d-4b90-b747-f37b490b3e4a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f9d0bd72-572d-4b90-b747-f37b490b3e4a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.739388 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hshqq\" (UniqueName: \"kubernetes.io/projected/f9d0bd72-572d-4b90-b747-f37b490b3e4a-kube-api-access-hshqq\") pod \"nova-cell1-novncproxy-0\" (UID: \"f9d0bd72-572d-4b90-b747-f37b490b3e4a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 21:41:58 crc kubenswrapper[4789]: I0202 21:41:58.783711 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 21:41:59 crc kubenswrapper[4789]: I0202 21:41:59.103311 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 21:41:59 crc kubenswrapper[4789]: I0202 21:41:59.379391 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f9d0bd72-572d-4b90-b747-f37b490b3e4a","Type":"ContainerStarted","Data":"3526b18b1cdfe6a5a1f276e6a5cb128388504d6bdbc6ed184e29a01812ab1266"} Feb 02 21:41:59 crc kubenswrapper[4789]: I0202 21:41:59.379436 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f9d0bd72-572d-4b90-b747-f37b490b3e4a","Type":"ContainerStarted","Data":"d22e9493ffa3fed25bfc87497d839cb4f9f443c3335599f33d8e713eb4b9ecaf"} Feb 02 21:42:00 crc kubenswrapper[4789]: I0202 21:42:00.437435 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6140fbc7-6f0e-43dd-a95c-50a4dc52c351" path="/var/lib/kubelet/pods/6140fbc7-6f0e-43dd-a95c-50a4dc52c351/volumes" Feb 02 21:42:00 crc kubenswrapper[4789]: I0202 21:42:00.648010 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 02 21:42:00 crc kubenswrapper[4789]: I0202 21:42:00.649341 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 02 21:42:00 crc kubenswrapper[4789]: I0202 21:42:00.649795 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 02 21:42:00 crc kubenswrapper[4789]: I0202 21:42:00.651829 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 02 21:42:00 crc kubenswrapper[4789]: I0202 21:42:00.671974 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.671958417 podStartE2EDuration="2.671958417s" podCreationTimestamp="2026-02-02 21:41:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:41:59.400261828 +0000 UTC m=+1339.695286847" watchObservedRunningTime="2026-02-02 21:42:00.671958417 +0000 UTC m=+1340.966983436" Feb 02 21:42:01 crc kubenswrapper[4789]: I0202 21:42:01.399188 4789 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-api-0" Feb 02 21:42:01 crc kubenswrapper[4789]: I0202 21:42:01.402354 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 02 21:42:01 crc kubenswrapper[4789]: I0202 21:42:01.722458 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-r4rd5"] Feb 02 21:42:01 crc kubenswrapper[4789]: I0202 21:42:01.723876 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-r4rd5" Feb 02 21:42:01 crc kubenswrapper[4789]: I0202 21:42:01.735017 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-r4rd5"] Feb 02 21:42:01 crc kubenswrapper[4789]: I0202 21:42:01.892389 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7cd6e7ff-266d-4288-9df4-dc22dbe5f19d-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-r4rd5\" (UID: \"7cd6e7ff-266d-4288-9df4-dc22dbe5f19d\") " pod="openstack/dnsmasq-dns-89c5cd4d5-r4rd5" Feb 02 21:42:01 crc kubenswrapper[4789]: I0202 21:42:01.892439 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7cd6e7ff-266d-4288-9df4-dc22dbe5f19d-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-r4rd5\" (UID: \"7cd6e7ff-266d-4288-9df4-dc22dbe5f19d\") " pod="openstack/dnsmasq-dns-89c5cd4d5-r4rd5" Feb 02 21:42:01 crc kubenswrapper[4789]: I0202 21:42:01.892510 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7cd6e7ff-266d-4288-9df4-dc22dbe5f19d-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-r4rd5\" (UID: \"7cd6e7ff-266d-4288-9df4-dc22dbe5f19d\") " pod="openstack/dnsmasq-dns-89c5cd4d5-r4rd5" Feb 02 21:42:01 crc kubenswrapper[4789]: I0202 21:42:01.892651 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5v59\" (UniqueName: \"kubernetes.io/projected/7cd6e7ff-266d-4288-9df4-dc22dbe5f19d-kube-api-access-t5v59\") pod \"dnsmasq-dns-89c5cd4d5-r4rd5\" (UID: \"7cd6e7ff-266d-4288-9df4-dc22dbe5f19d\") " pod="openstack/dnsmasq-dns-89c5cd4d5-r4rd5" Feb 02 21:42:01 crc kubenswrapper[4789]: I0202 21:42:01.892737 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7cd6e7ff-266d-4288-9df4-dc22dbe5f19d-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-r4rd5\" (UID: \"7cd6e7ff-266d-4288-9df4-dc22dbe5f19d\") " pod="openstack/dnsmasq-dns-89c5cd4d5-r4rd5" Feb 02 21:42:01 crc kubenswrapper[4789]: I0202 21:42:01.892788 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cd6e7ff-266d-4288-9df4-dc22dbe5f19d-config\") pod \"dnsmasq-dns-89c5cd4d5-r4rd5\" (UID: \"7cd6e7ff-266d-4288-9df4-dc22dbe5f19d\") " pod="openstack/dnsmasq-dns-89c5cd4d5-r4rd5" Feb 02 21:42:01 crc kubenswrapper[4789]: I0202 21:42:01.994792 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5v59\" (UniqueName: \"kubernetes.io/projected/7cd6e7ff-266d-4288-9df4-dc22dbe5f19d-kube-api-access-t5v59\") pod \"dnsmasq-dns-89c5cd4d5-r4rd5\" (UID: \"7cd6e7ff-266d-4288-9df4-dc22dbe5f19d\") " 
pod="openstack/dnsmasq-dns-89c5cd4d5-r4rd5" Feb 02 21:42:01 crc kubenswrapper[4789]: I0202 21:42:01.994874 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7cd6e7ff-266d-4288-9df4-dc22dbe5f19d-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-r4rd5\" (UID: \"7cd6e7ff-266d-4288-9df4-dc22dbe5f19d\") " pod="openstack/dnsmasq-dns-89c5cd4d5-r4rd5" Feb 02 21:42:01 crc kubenswrapper[4789]: I0202 21:42:01.994912 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cd6e7ff-266d-4288-9df4-dc22dbe5f19d-config\") pod \"dnsmasq-dns-89c5cd4d5-r4rd5\" (UID: \"7cd6e7ff-266d-4288-9df4-dc22dbe5f19d\") " pod="openstack/dnsmasq-dns-89c5cd4d5-r4rd5" Feb 02 21:42:01 crc kubenswrapper[4789]: I0202 21:42:01.994969 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7cd6e7ff-266d-4288-9df4-dc22dbe5f19d-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-r4rd5\" (UID: \"7cd6e7ff-266d-4288-9df4-dc22dbe5f19d\") " pod="openstack/dnsmasq-dns-89c5cd4d5-r4rd5" Feb 02 21:42:01 crc kubenswrapper[4789]: I0202 21:42:01.995590 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7cd6e7ff-266d-4288-9df4-dc22dbe5f19d-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-r4rd5\" (UID: \"7cd6e7ff-266d-4288-9df4-dc22dbe5f19d\") " pod="openstack/dnsmasq-dns-89c5cd4d5-r4rd5" Feb 02 21:42:01 crc kubenswrapper[4789]: I0202 21:42:01.995637 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7cd6e7ff-266d-4288-9df4-dc22dbe5f19d-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-r4rd5\" (UID: \"7cd6e7ff-266d-4288-9df4-dc22dbe5f19d\") " pod="openstack/dnsmasq-dns-89c5cd4d5-r4rd5" Feb 02 21:42:01 crc kubenswrapper[4789]: I0202 21:42:01.995959 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7cd6e7ff-266d-4288-9df4-dc22dbe5f19d-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-r4rd5\" (UID: \"7cd6e7ff-266d-4288-9df4-dc22dbe5f19d\") " pod="openstack/dnsmasq-dns-89c5cd4d5-r4rd5" Feb 02 21:42:01 crc kubenswrapper[4789]: I0202 21:42:01.996044 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7cd6e7ff-266d-4288-9df4-dc22dbe5f19d-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-r4rd5\" (UID: \"7cd6e7ff-266d-4288-9df4-dc22dbe5f19d\") " pod="openstack/dnsmasq-dns-89c5cd4d5-r4rd5" Feb 02 21:42:01 crc kubenswrapper[4789]: I0202 21:42:01.996720 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cd6e7ff-266d-4288-9df4-dc22dbe5f19d-config\") pod \"dnsmasq-dns-89c5cd4d5-r4rd5\" (UID: \"7cd6e7ff-266d-4288-9df4-dc22dbe5f19d\") " pod="openstack/dnsmasq-dns-89c5cd4d5-r4rd5" Feb 02 21:42:01 crc kubenswrapper[4789]: I0202 21:42:01.996989 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7cd6e7ff-266d-4288-9df4-dc22dbe5f19d-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-r4rd5\" (UID: \"7cd6e7ff-266d-4288-9df4-dc22dbe5f19d\") " pod="openstack/dnsmasq-dns-89c5cd4d5-r4rd5" Feb 02 21:42:02 crc kubenswrapper[4789]: I0202 21:42:02.000397 4789 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7cd6e7ff-266d-4288-9df4-dc22dbe5f19d-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-r4rd5\" (UID: \"7cd6e7ff-266d-4288-9df4-dc22dbe5f19d\") " pod="openstack/dnsmasq-dns-89c5cd4d5-r4rd5" Feb 02 21:42:02 crc kubenswrapper[4789]: I0202 21:42:02.020847 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5v59\" (UniqueName: \"kubernetes.io/projected/7cd6e7ff-266d-4288-9df4-dc22dbe5f19d-kube-api-access-t5v59\") pod \"dnsmasq-dns-89c5cd4d5-r4rd5\" (UID: \"7cd6e7ff-266d-4288-9df4-dc22dbe5f19d\") " pod="openstack/dnsmasq-dns-89c5cd4d5-r4rd5" Feb 02 21:42:02 crc kubenswrapper[4789]: I0202 21:42:02.041157 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-r4rd5" Feb 02 21:42:02 crc kubenswrapper[4789]: I0202 21:42:02.493610 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-r4rd5"] Feb 02 21:42:02 crc kubenswrapper[4789]: W0202 21:42:02.497840 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cd6e7ff_266d_4288_9df4_dc22dbe5f19d.slice/crio-1d12f57b2b89440a76b1e76b886dbb4a7b35501e6e9a79dd7c13a09511cf73f4 WatchSource:0}: Error finding container 1d12f57b2b89440a76b1e76b886dbb4a7b35501e6e9a79dd7c13a09511cf73f4: Status 404 returned error can't find the container with id 1d12f57b2b89440a76b1e76b886dbb4a7b35501e6e9a79dd7c13a09511cf73f4 Feb 02 21:42:03 crc kubenswrapper[4789]: I0202 21:42:03.416569 4789 generic.go:334] "Generic (PLEG): container finished" podID="7cd6e7ff-266d-4288-9df4-dc22dbe5f19d" containerID="44f9eeaf0ad5b799a8a00f9424dda2c0646881c42b4eba10f3cf88542d1f3cbe" exitCode=0 Feb 02 21:42:03 crc kubenswrapper[4789]: I0202 21:42:03.416654 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-r4rd5" event={"ID":"7cd6e7ff-266d-4288-9df4-dc22dbe5f19d","Type":"ContainerDied","Data":"44f9eeaf0ad5b799a8a00f9424dda2c0646881c42b4eba10f3cf88542d1f3cbe"} Feb 02 21:42:03 crc kubenswrapper[4789]: I0202 21:42:03.416920 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-r4rd5" event={"ID":"7cd6e7ff-266d-4288-9df4-dc22dbe5f19d","Type":"ContainerStarted","Data":"1d12f57b2b89440a76b1e76b886dbb4a7b35501e6e9a79dd7c13a09511cf73f4"} Feb 02 21:42:03 crc kubenswrapper[4789]: I0202 21:42:03.716083 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 21:42:03 crc kubenswrapper[4789]: I0202 21:42:03.716691 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5438d2ed-8212-4b05-9074-5b888caf6884" containerName="ceilometer-central-agent" containerID="cri-o://febcc628d8993fb3aa84fc28d8a1fab2eb763c23a62dd03cb1f67f0d559ad3cd" gracePeriod=30 Feb 02 21:42:03 crc kubenswrapper[4789]: I0202 21:42:03.716838 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5438d2ed-8212-4b05-9074-5b888caf6884" containerName="ceilometer-notification-agent" containerID="cri-o://16bfd6f01f555276b1f6797054bf250f16a6582842aa25a0c73e10c02f3adab4" gracePeriod=30 Feb 02 21:42:03 crc kubenswrapper[4789]: I0202 21:42:03.716932 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5438d2ed-8212-4b05-9074-5b888caf6884" 
containerName="sg-core" containerID="cri-o://bd42a71f2bc7bab8bb2ee79375643b36fe789cefa12607470aa8708245e0a03b" gracePeriod=30 Feb 02 21:42:03 crc kubenswrapper[4789]: I0202 21:42:03.716821 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5438d2ed-8212-4b05-9074-5b888caf6884" containerName="proxy-httpd" containerID="cri-o://feadf05deec69fec401ad5dd588e4ad129121bed274e977bc029d1f08c15932d" gracePeriod=30 Feb 02 21:42:03 crc kubenswrapper[4789]: I0202 21:42:03.724260 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="5438d2ed-8212-4b05-9074-5b888caf6884" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.196:3000/\": EOF" Feb 02 21:42:03 crc kubenswrapper[4789]: I0202 21:42:03.784523 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 02 21:42:04 crc kubenswrapper[4789]: I0202 21:42:04.414807 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 21:42:04 crc kubenswrapper[4789]: I0202 21:42:04.432066 4789 generic.go:334] "Generic (PLEG): container finished" podID="5438d2ed-8212-4b05-9074-5b888caf6884" containerID="feadf05deec69fec401ad5dd588e4ad129121bed274e977bc029d1f08c15932d" exitCode=0 Feb 02 21:42:04 crc kubenswrapper[4789]: I0202 21:42:04.432096 4789 generic.go:334] "Generic (PLEG): container finished" podID="5438d2ed-8212-4b05-9074-5b888caf6884" containerID="bd42a71f2bc7bab8bb2ee79375643b36fe789cefa12607470aa8708245e0a03b" exitCode=2 Feb 02 21:42:04 crc kubenswrapper[4789]: I0202 21:42:04.432104 4789 generic.go:334] "Generic (PLEG): container finished" podID="5438d2ed-8212-4b05-9074-5b888caf6884" containerID="febcc628d8993fb3aa84fc28d8a1fab2eb763c23a62dd03cb1f67f0d559ad3cd" exitCode=0 Feb 02 21:42:04 crc kubenswrapper[4789]: I0202 21:42:04.432145 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5438d2ed-8212-4b05-9074-5b888caf6884","Type":"ContainerDied","Data":"feadf05deec69fec401ad5dd588e4ad129121bed274e977bc029d1f08c15932d"} Feb 02 21:42:04 crc kubenswrapper[4789]: I0202 21:42:04.432166 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5438d2ed-8212-4b05-9074-5b888caf6884","Type":"ContainerDied","Data":"bd42a71f2bc7bab8bb2ee79375643b36fe789cefa12607470aa8708245e0a03b"} Feb 02 21:42:04 crc kubenswrapper[4789]: I0202 21:42:04.432176 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5438d2ed-8212-4b05-9074-5b888caf6884","Type":"ContainerDied","Data":"febcc628d8993fb3aa84fc28d8a1fab2eb763c23a62dd03cb1f67f0d559ad3cd"} Feb 02 21:42:04 crc kubenswrapper[4789]: I0202 21:42:04.434154 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-r4rd5" event={"ID":"7cd6e7ff-266d-4288-9df4-dc22dbe5f19d","Type":"ContainerStarted","Data":"b576e41ea89fdc8a7019e121b2fb6790b3127f8d4fea54dd5986dfa02c0ad849"} Feb 02 21:42:04 crc kubenswrapper[4789]: I0202 21:42:04.434205 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a7098ef7-f9ae-4cd8-8264-9cfec2c20343" containerName="nova-api-log" containerID="cri-o://9b3893ed83f44b2989913747db245e393592b03a247ee00dcedef8f1523a6e34" gracePeriod=30 Feb 02 21:42:04 crc kubenswrapper[4789]: I0202 21:42:04.434310 4789 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-api-0" podUID="a7098ef7-f9ae-4cd8-8264-9cfec2c20343" containerName="nova-api-api" containerID="cri-o://2858d5548504cad9374d6af9e634a391a1a87c5d7ace7a37454a5740d5e0255d" gracePeriod=30 Feb 02 21:42:04 crc kubenswrapper[4789]: I0202 21:42:04.477933 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-r4rd5" podStartSLOduration=3.477912756 podStartE2EDuration="3.477912756s" podCreationTimestamp="2026-02-02 21:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:42:04.470935899 +0000 UTC m=+1344.765960918" watchObservedRunningTime="2026-02-02 21:42:04.477912756 +0000 UTC m=+1344.772937775" Feb 02 21:42:05 crc kubenswrapper[4789]: I0202 21:42:05.446232 4789 generic.go:334] "Generic (PLEG): container finished" podID="a7098ef7-f9ae-4cd8-8264-9cfec2c20343" containerID="9b3893ed83f44b2989913747db245e393592b03a247ee00dcedef8f1523a6e34" exitCode=143 Feb 02 21:42:05 crc kubenswrapper[4789]: I0202 21:42:05.446318 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a7098ef7-f9ae-4cd8-8264-9cfec2c20343","Type":"ContainerDied","Data":"9b3893ed83f44b2989913747db245e393592b03a247ee00dcedef8f1523a6e34"} Feb 02 21:42:05 crc kubenswrapper[4789]: I0202 21:42:05.446809 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-r4rd5" Feb 02 21:42:06 crc kubenswrapper[4789]: I0202 21:42:06.462380 4789 generic.go:334] "Generic (PLEG): container finished" podID="5438d2ed-8212-4b05-9074-5b888caf6884" containerID="16bfd6f01f555276b1f6797054bf250f16a6582842aa25a0c73e10c02f3adab4" exitCode=0 Feb 02 21:42:06 crc kubenswrapper[4789]: I0202 21:42:06.462460 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5438d2ed-8212-4b05-9074-5b888caf6884","Type":"ContainerDied","Data":"16bfd6f01f555276b1f6797054bf250f16a6582842aa25a0c73e10c02f3adab4"} Feb 02 21:42:06 crc kubenswrapper[4789]: I0202 21:42:06.788242 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 21:42:06 crc kubenswrapper[4789]: I0202 21:42:06.881625 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xd8ff\" (UniqueName: \"kubernetes.io/projected/5438d2ed-8212-4b05-9074-5b888caf6884-kube-api-access-xd8ff\") pod \"5438d2ed-8212-4b05-9074-5b888caf6884\" (UID: \"5438d2ed-8212-4b05-9074-5b888caf6884\") " Feb 02 21:42:06 crc kubenswrapper[4789]: I0202 21:42:06.881834 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5438d2ed-8212-4b05-9074-5b888caf6884-sg-core-conf-yaml\") pod \"5438d2ed-8212-4b05-9074-5b888caf6884\" (UID: \"5438d2ed-8212-4b05-9074-5b888caf6884\") " Feb 02 21:42:06 crc kubenswrapper[4789]: I0202 21:42:06.881972 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5438d2ed-8212-4b05-9074-5b888caf6884-scripts\") pod \"5438d2ed-8212-4b05-9074-5b888caf6884\" (UID: \"5438d2ed-8212-4b05-9074-5b888caf6884\") " Feb 02 21:42:06 crc kubenswrapper[4789]: I0202 21:42:06.882103 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5438d2ed-8212-4b05-9074-5b888caf6884-log-httpd\") pod \"5438d2ed-8212-4b05-9074-5b888caf6884\" (UID: \"5438d2ed-8212-4b05-9074-5b888caf6884\") " Feb 02 21:42:06 crc kubenswrapper[4789]: I0202 21:42:06.882175 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5438d2ed-8212-4b05-9074-5b888caf6884-run-httpd\") pod \"5438d2ed-8212-4b05-9074-5b888caf6884\" (UID: \"5438d2ed-8212-4b05-9074-5b888caf6884\") " Feb 02 21:42:06 crc kubenswrapper[4789]: I0202 21:42:06.882268 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5438d2ed-8212-4b05-9074-5b888caf6884-ceilometer-tls-certs\") pod \"5438d2ed-8212-4b05-9074-5b888caf6884\" (UID: \"5438d2ed-8212-4b05-9074-5b888caf6884\") " Feb 02 21:42:06 crc kubenswrapper[4789]: I0202 21:42:06.882367 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5438d2ed-8212-4b05-9074-5b888caf6884-config-data\") pod \"5438d2ed-8212-4b05-9074-5b888caf6884\" (UID: \"5438d2ed-8212-4b05-9074-5b888caf6884\") " Feb 02 21:42:06 crc kubenswrapper[4789]: I0202 21:42:06.882474 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5438d2ed-8212-4b05-9074-5b888caf6884-combined-ca-bundle\") pod \"5438d2ed-8212-4b05-9074-5b888caf6884\" (UID: \"5438d2ed-8212-4b05-9074-5b888caf6884\") " Feb 02 21:42:06 crc kubenswrapper[4789]: I0202 21:42:06.883069 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5438d2ed-8212-4b05-9074-5b888caf6884-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5438d2ed-8212-4b05-9074-5b888caf6884" (UID: "5438d2ed-8212-4b05-9074-5b888caf6884"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:42:06 crc kubenswrapper[4789]: I0202 21:42:06.883168 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5438d2ed-8212-4b05-9074-5b888caf6884-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5438d2ed-8212-4b05-9074-5b888caf6884" (UID: "5438d2ed-8212-4b05-9074-5b888caf6884"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:42:06 crc kubenswrapper[4789]: I0202 21:42:06.887467 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5438d2ed-8212-4b05-9074-5b888caf6884-kube-api-access-xd8ff" (OuterVolumeSpecName: "kube-api-access-xd8ff") pod "5438d2ed-8212-4b05-9074-5b888caf6884" (UID: "5438d2ed-8212-4b05-9074-5b888caf6884"). InnerVolumeSpecName "kube-api-access-xd8ff". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:42:06 crc kubenswrapper[4789]: I0202 21:42:06.888083 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5438d2ed-8212-4b05-9074-5b888caf6884-scripts" (OuterVolumeSpecName: "scripts") pod "5438d2ed-8212-4b05-9074-5b888caf6884" (UID: "5438d2ed-8212-4b05-9074-5b888caf6884"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:42:06 crc kubenswrapper[4789]: I0202 21:42:06.931711 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5438d2ed-8212-4b05-9074-5b888caf6884-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5438d2ed-8212-4b05-9074-5b888caf6884" (UID: "5438d2ed-8212-4b05-9074-5b888caf6884"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:42:06 crc kubenswrapper[4789]: I0202 21:42:06.941814 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5438d2ed-8212-4b05-9074-5b888caf6884-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "5438d2ed-8212-4b05-9074-5b888caf6884" (UID: "5438d2ed-8212-4b05-9074-5b888caf6884"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:42:06 crc kubenswrapper[4789]: I0202 21:42:06.975121 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5438d2ed-8212-4b05-9074-5b888caf6884-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5438d2ed-8212-4b05-9074-5b888caf6884" (UID: "5438d2ed-8212-4b05-9074-5b888caf6884"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:42:06 crc kubenswrapper[4789]: I0202 21:42:06.984545 4789 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5438d2ed-8212-4b05-9074-5b888caf6884-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 21:42:06 crc kubenswrapper[4789]: I0202 21:42:06.984571 4789 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5438d2ed-8212-4b05-9074-5b888caf6884-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 21:42:06 crc kubenswrapper[4789]: I0202 21:42:06.984592 4789 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5438d2ed-8212-4b05-9074-5b888caf6884-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:42:06 crc kubenswrapper[4789]: I0202 21:42:06.984602 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5438d2ed-8212-4b05-9074-5b888caf6884-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:42:06 crc kubenswrapper[4789]: I0202 21:42:06.984613 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xd8ff\" (UniqueName: \"kubernetes.io/projected/5438d2ed-8212-4b05-9074-5b888caf6884-kube-api-access-xd8ff\") on node \"crc\" DevicePath \"\"" Feb 02 21:42:06 crc kubenswrapper[4789]: I0202 21:42:06.984621 4789 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5438d2ed-8212-4b05-9074-5b888caf6884-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 21:42:06 crc kubenswrapper[4789]: I0202 21:42:06.984630 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5438d2ed-8212-4b05-9074-5b888caf6884-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.008657 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5438d2ed-8212-4b05-9074-5b888caf6884-config-data" (OuterVolumeSpecName: "config-data") pod "5438d2ed-8212-4b05-9074-5b888caf6884" (UID: "5438d2ed-8212-4b05-9074-5b888caf6884"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.085940 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5438d2ed-8212-4b05-9074-5b888caf6884-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.474670 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5438d2ed-8212-4b05-9074-5b888caf6884","Type":"ContainerDied","Data":"8b923ecdd3c8ee43f79cfdf54d07db86433100876e1fba369cff3f7fefe5f86e"} Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.474721 4789 scope.go:117] "RemoveContainer" containerID="feadf05deec69fec401ad5dd588e4ad129121bed274e977bc029d1f08c15932d" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.474851 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.557215 4789 scope.go:117] "RemoveContainer" containerID="bd42a71f2bc7bab8bb2ee79375643b36fe789cefa12607470aa8708245e0a03b" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.561112 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.583666 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.586829 4789 scope.go:117] "RemoveContainer" containerID="16bfd6f01f555276b1f6797054bf250f16a6582842aa25a0c73e10c02f3adab4" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.589779 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 21:42:07 crc kubenswrapper[4789]: E0202 21:42:07.590268 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5438d2ed-8212-4b05-9074-5b888caf6884" containerName="proxy-httpd" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.590427 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="5438d2ed-8212-4b05-9074-5b888caf6884" containerName="proxy-httpd" Feb 02 21:42:07 crc kubenswrapper[4789]: E0202 21:42:07.590474 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5438d2ed-8212-4b05-9074-5b888caf6884" containerName="ceilometer-central-agent" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.590484 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="5438d2ed-8212-4b05-9074-5b888caf6884" containerName="ceilometer-central-agent" Feb 02 21:42:07 crc kubenswrapper[4789]: E0202 21:42:07.590504 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5438d2ed-8212-4b05-9074-5b888caf6884" containerName="ceilometer-notification-agent" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.590511 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="5438d2ed-8212-4b05-9074-5b888caf6884" containerName="ceilometer-notification-agent" Feb 02 21:42:07 crc kubenswrapper[4789]: E0202 21:42:07.590545 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5438d2ed-8212-4b05-9074-5b888caf6884" containerName="sg-core" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.590555 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="5438d2ed-8212-4b05-9074-5b888caf6884" containerName="sg-core" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.590782 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="5438d2ed-8212-4b05-9074-5b888caf6884" containerName="proxy-httpd" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.590800 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="5438d2ed-8212-4b05-9074-5b888caf6884" containerName="ceilometer-notification-agent" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.590811 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="5438d2ed-8212-4b05-9074-5b888caf6884" containerName="sg-core" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.590843 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="5438d2ed-8212-4b05-9074-5b888caf6884" containerName="ceilometer-central-agent" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.637127 4789 scope.go:117] "RemoveContainer" containerID="febcc628d8993fb3aa84fc28d8a1fab2eb763c23a62dd03cb1f67f0d559ad3cd" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.641055 4789 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.641095 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.643928 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.643973 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.644212 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.730206 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b579f7f4-db1f-4d76-82fb-ef4cad438842-log-httpd\") pod \"ceilometer-0\" (UID: \"b579f7f4-db1f-4d76-82fb-ef4cad438842\") " pod="openstack/ceilometer-0" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.730269 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b579f7f4-db1f-4d76-82fb-ef4cad438842-scripts\") pod \"ceilometer-0\" (UID: \"b579f7f4-db1f-4d76-82fb-ef4cad438842\") " pod="openstack/ceilometer-0" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.730349 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b579f7f4-db1f-4d76-82fb-ef4cad438842-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b579f7f4-db1f-4d76-82fb-ef4cad438842\") " pod="openstack/ceilometer-0" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.730439 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b579f7f4-db1f-4d76-82fb-ef4cad438842-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b579f7f4-db1f-4d76-82fb-ef4cad438842\") " pod="openstack/ceilometer-0" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.730535 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b579f7f4-db1f-4d76-82fb-ef4cad438842-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b579f7f4-db1f-4d76-82fb-ef4cad438842\") " pod="openstack/ceilometer-0" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.730626 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b579f7f4-db1f-4d76-82fb-ef4cad438842-config-data\") pod \"ceilometer-0\" (UID: \"b579f7f4-db1f-4d76-82fb-ef4cad438842\") " pod="openstack/ceilometer-0" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.730654 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-745wq\" (UniqueName: \"kubernetes.io/projected/b579f7f4-db1f-4d76-82fb-ef4cad438842-kube-api-access-745wq\") pod \"ceilometer-0\" (UID: \"b579f7f4-db1f-4d76-82fb-ef4cad438842\") " pod="openstack/ceilometer-0" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.730710 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/b579f7f4-db1f-4d76-82fb-ef4cad438842-run-httpd\") pod \"ceilometer-0\" (UID: \"b579f7f4-db1f-4d76-82fb-ef4cad438842\") " pod="openstack/ceilometer-0" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.831966 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b579f7f4-db1f-4d76-82fb-ef4cad438842-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b579f7f4-db1f-4d76-82fb-ef4cad438842\") " pod="openstack/ceilometer-0" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.832063 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b579f7f4-db1f-4d76-82fb-ef4cad438842-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b579f7f4-db1f-4d76-82fb-ef4cad438842\") " pod="openstack/ceilometer-0" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.832089 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b579f7f4-db1f-4d76-82fb-ef4cad438842-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b579f7f4-db1f-4d76-82fb-ef4cad438842\") " pod="openstack/ceilometer-0" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.832115 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b579f7f4-db1f-4d76-82fb-ef4cad438842-config-data\") pod \"ceilometer-0\" (UID: \"b579f7f4-db1f-4d76-82fb-ef4cad438842\") " pod="openstack/ceilometer-0" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.832138 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-745wq\" (UniqueName: \"kubernetes.io/projected/b579f7f4-db1f-4d76-82fb-ef4cad438842-kube-api-access-745wq\") pod \"ceilometer-0\" (UID: \"b579f7f4-db1f-4d76-82fb-ef4cad438842\") " pod="openstack/ceilometer-0" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.832166 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b579f7f4-db1f-4d76-82fb-ef4cad438842-run-httpd\") pod \"ceilometer-0\" (UID: \"b579f7f4-db1f-4d76-82fb-ef4cad438842\") " pod="openstack/ceilometer-0" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.832201 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b579f7f4-db1f-4d76-82fb-ef4cad438842-log-httpd\") pod \"ceilometer-0\" (UID: \"b579f7f4-db1f-4d76-82fb-ef4cad438842\") " pod="openstack/ceilometer-0" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.832242 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b579f7f4-db1f-4d76-82fb-ef4cad438842-scripts\") pod \"ceilometer-0\" (UID: \"b579f7f4-db1f-4d76-82fb-ef4cad438842\") " pod="openstack/ceilometer-0" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.833863 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b579f7f4-db1f-4d76-82fb-ef4cad438842-run-httpd\") pod \"ceilometer-0\" (UID: \"b579f7f4-db1f-4d76-82fb-ef4cad438842\") " pod="openstack/ceilometer-0" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.833964 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/b579f7f4-db1f-4d76-82fb-ef4cad438842-log-httpd\") pod \"ceilometer-0\" (UID: \"b579f7f4-db1f-4d76-82fb-ef4cad438842\") " pod="openstack/ceilometer-0" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.838850 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b579f7f4-db1f-4d76-82fb-ef4cad438842-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b579f7f4-db1f-4d76-82fb-ef4cad438842\") " pod="openstack/ceilometer-0" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.838883 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b579f7f4-db1f-4d76-82fb-ef4cad438842-scripts\") pod \"ceilometer-0\" (UID: \"b579f7f4-db1f-4d76-82fb-ef4cad438842\") " pod="openstack/ceilometer-0" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.839736 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b579f7f4-db1f-4d76-82fb-ef4cad438842-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b579f7f4-db1f-4d76-82fb-ef4cad438842\") " pod="openstack/ceilometer-0" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.842260 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b579f7f4-db1f-4d76-82fb-ef4cad438842-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b579f7f4-db1f-4d76-82fb-ef4cad438842\") " pod="openstack/ceilometer-0" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.851843 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b579f7f4-db1f-4d76-82fb-ef4cad438842-config-data\") pod \"ceilometer-0\" (UID: \"b579f7f4-db1f-4d76-82fb-ef4cad438842\") " pod="openstack/ceilometer-0" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.861928 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-745wq\" (UniqueName: \"kubernetes.io/projected/b579f7f4-db1f-4d76-82fb-ef4cad438842-kube-api-access-745wq\") pod \"ceilometer-0\" (UID: \"b579f7f4-db1f-4d76-82fb-ef4cad438842\") " pod="openstack/ceilometer-0" Feb 02 21:42:07 crc kubenswrapper[4789]: I0202 21:42:07.968244 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.363358 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.430687 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5438d2ed-8212-4b05-9074-5b888caf6884" path="/var/lib/kubelet/pods/5438d2ed-8212-4b05-9074-5b888caf6884/volumes" Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.485736 4789 generic.go:334] "Generic (PLEG): container finished" podID="a7098ef7-f9ae-4cd8-8264-9cfec2c20343" containerID="2858d5548504cad9374d6af9e634a391a1a87c5d7ace7a37454a5740d5e0255d" exitCode=0 Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.485782 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a7098ef7-f9ae-4cd8-8264-9cfec2c20343","Type":"ContainerDied","Data":"2858d5548504cad9374d6af9e634a391a1a87c5d7ace7a37454a5740d5e0255d"} Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.485807 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a7098ef7-f9ae-4cd8-8264-9cfec2c20343","Type":"ContainerDied","Data":"2f8df1c5cb268f99c420e7a2d002325138a9cc8310775a053079aafedc2a20f7"} Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.485822 4789 scope.go:117] "RemoveContainer" containerID="2858d5548504cad9374d6af9e634a391a1a87c5d7ace7a37454a5740d5e0255d" Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.485912 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.537120 4789 scope.go:117] "RemoveContainer" containerID="9b3893ed83f44b2989913747db245e393592b03a247ee00dcedef8f1523a6e34" Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.554316 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7098ef7-f9ae-4cd8-8264-9cfec2c20343-combined-ca-bundle\") pod \"a7098ef7-f9ae-4cd8-8264-9cfec2c20343\" (UID: \"a7098ef7-f9ae-4cd8-8264-9cfec2c20343\") " Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.554431 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7098ef7-f9ae-4cd8-8264-9cfec2c20343-config-data\") pod \"a7098ef7-f9ae-4cd8-8264-9cfec2c20343\" (UID: \"a7098ef7-f9ae-4cd8-8264-9cfec2c20343\") " Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.554481 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7098ef7-f9ae-4cd8-8264-9cfec2c20343-logs\") pod \"a7098ef7-f9ae-4cd8-8264-9cfec2c20343\" (UID: \"a7098ef7-f9ae-4cd8-8264-9cfec2c20343\") " Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.554606 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c729j\" (UniqueName: \"kubernetes.io/projected/a7098ef7-f9ae-4cd8-8264-9cfec2c20343-kube-api-access-c729j\") pod \"a7098ef7-f9ae-4cd8-8264-9cfec2c20343\" (UID: \"a7098ef7-f9ae-4cd8-8264-9cfec2c20343\") " Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.556795 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7098ef7-f9ae-4cd8-8264-9cfec2c20343-logs" (OuterVolumeSpecName: "logs") pod "a7098ef7-f9ae-4cd8-8264-9cfec2c20343" (UID: "a7098ef7-f9ae-4cd8-8264-9cfec2c20343"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.562743 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7098ef7-f9ae-4cd8-8264-9cfec2c20343-kube-api-access-c729j" (OuterVolumeSpecName: "kube-api-access-c729j") pod "a7098ef7-f9ae-4cd8-8264-9cfec2c20343" (UID: "a7098ef7-f9ae-4cd8-8264-9cfec2c20343"). InnerVolumeSpecName "kube-api-access-c729j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.570603 4789 scope.go:117] "RemoveContainer" containerID="2858d5548504cad9374d6af9e634a391a1a87c5d7ace7a37454a5740d5e0255d" Feb 02 21:42:08 crc kubenswrapper[4789]: E0202 21:42:08.571239 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2858d5548504cad9374d6af9e634a391a1a87c5d7ace7a37454a5740d5e0255d\": container with ID starting with 2858d5548504cad9374d6af9e634a391a1a87c5d7ace7a37454a5740d5e0255d not found: ID does not exist" containerID="2858d5548504cad9374d6af9e634a391a1a87c5d7ace7a37454a5740d5e0255d" Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.571291 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2858d5548504cad9374d6af9e634a391a1a87c5d7ace7a37454a5740d5e0255d"} err="failed to get container status \"2858d5548504cad9374d6af9e634a391a1a87c5d7ace7a37454a5740d5e0255d\": rpc error: code = NotFound desc = could not find container \"2858d5548504cad9374d6af9e634a391a1a87c5d7ace7a37454a5740d5e0255d\": container with ID starting with 2858d5548504cad9374d6af9e634a391a1a87c5d7ace7a37454a5740d5e0255d not found: ID does not exist" Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.571314 4789 scope.go:117] "RemoveContainer" containerID="9b3893ed83f44b2989913747db245e393592b03a247ee00dcedef8f1523a6e34" Feb 02 21:42:08 crc kubenswrapper[4789]: E0202 21:42:08.571478 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b3893ed83f44b2989913747db245e393592b03a247ee00dcedef8f1523a6e34\": container with ID starting with 9b3893ed83f44b2989913747db245e393592b03a247ee00dcedef8f1523a6e34 not found: ID does not exist" containerID="9b3893ed83f44b2989913747db245e393592b03a247ee00dcedef8f1523a6e34" Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.571501 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b3893ed83f44b2989913747db245e393592b03a247ee00dcedef8f1523a6e34"} err="failed to get container status \"9b3893ed83f44b2989913747db245e393592b03a247ee00dcedef8f1523a6e34\": rpc error: code = NotFound desc = could not find container \"9b3893ed83f44b2989913747db245e393592b03a247ee00dcedef8f1523a6e34\": container with ID starting with 9b3893ed83f44b2989913747db245e393592b03a247ee00dcedef8f1523a6e34 not found: ID does not exist" Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.585317 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.591283 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7098ef7-f9ae-4cd8-8264-9cfec2c20343-config-data" (OuterVolumeSpecName: "config-data") pod "a7098ef7-f9ae-4cd8-8264-9cfec2c20343" (UID: "a7098ef7-f9ae-4cd8-8264-9cfec2c20343"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.595910 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7098ef7-f9ae-4cd8-8264-9cfec2c20343-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7098ef7-f9ae-4cd8-8264-9cfec2c20343" (UID: "a7098ef7-f9ae-4cd8-8264-9cfec2c20343"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.657812 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7098ef7-f9ae-4cd8-8264-9cfec2c20343-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.657843 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7098ef7-f9ae-4cd8-8264-9cfec2c20343-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.657853 4789 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7098ef7-f9ae-4cd8-8264-9cfec2c20343-logs\") on node \"crc\" DevicePath \"\"" Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.657862 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c729j\" (UniqueName: \"kubernetes.io/projected/a7098ef7-f9ae-4cd8-8264-9cfec2c20343-kube-api-access-c729j\") on node \"crc\" DevicePath \"\"" Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.784239 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.801884 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.814727 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.823509 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.846257 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 21:42:08 crc kubenswrapper[4789]: E0202 21:42:08.846848 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7098ef7-f9ae-4cd8-8264-9cfec2c20343" containerName="nova-api-log" Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.846870 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7098ef7-f9ae-4cd8-8264-9cfec2c20343" containerName="nova-api-log" Feb 02 21:42:08 crc kubenswrapper[4789]: E0202 21:42:08.846886 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7098ef7-f9ae-4cd8-8264-9cfec2c20343" containerName="nova-api-api" Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.846896 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7098ef7-f9ae-4cd8-8264-9cfec2c20343" containerName="nova-api-api" Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.847094 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7098ef7-f9ae-4cd8-8264-9cfec2c20343" containerName="nova-api-api" Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.847127 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7098ef7-f9ae-4cd8-8264-9cfec2c20343" containerName="nova-api-log" Feb 02 21:42:08 
Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.848298 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.850777 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.851042 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.854544 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.867099 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.962374 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/50435b5c-9ccc-4dc9-932c-78a7b7427aa1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"50435b5c-9ccc-4dc9-932c-78a7b7427aa1\") " pod="openstack/nova-api-0"
Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.962424 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/50435b5c-9ccc-4dc9-932c-78a7b7427aa1-public-tls-certs\") pod \"nova-api-0\" (UID: \"50435b5c-9ccc-4dc9-932c-78a7b7427aa1\") " pod="openstack/nova-api-0"
Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.962595 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50435b5c-9ccc-4dc9-932c-78a7b7427aa1-logs\") pod \"nova-api-0\" (UID: \"50435b5c-9ccc-4dc9-932c-78a7b7427aa1\") " pod="openstack/nova-api-0"
Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.962678 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snv27\" (UniqueName: \"kubernetes.io/projected/50435b5c-9ccc-4dc9-932c-78a7b7427aa1-kube-api-access-snv27\") pod \"nova-api-0\" (UID: \"50435b5c-9ccc-4dc9-932c-78a7b7427aa1\") " pod="openstack/nova-api-0"
Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.962714 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50435b5c-9ccc-4dc9-932c-78a7b7427aa1-config-data\") pod \"nova-api-0\" (UID: \"50435b5c-9ccc-4dc9-932c-78a7b7427aa1\") " pod="openstack/nova-api-0"
Feb 02 21:42:08 crc kubenswrapper[4789]: I0202 21:42:08.962800 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50435b5c-9ccc-4dc9-932c-78a7b7427aa1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"50435b5c-9ccc-4dc9-932c-78a7b7427aa1\") " pod="openstack/nova-api-0"
Feb 02 21:42:09 crc kubenswrapper[4789]: I0202 21:42:09.066955 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50435b5c-9ccc-4dc9-932c-78a7b7427aa1-logs\") pod \"nova-api-0\" (UID: \"50435b5c-9ccc-4dc9-932c-78a7b7427aa1\") " pod="openstack/nova-api-0"
Feb 02 21:42:09 crc kubenswrapper[4789]: I0202 21:42:09.067024 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snv27\" (UniqueName: \"kubernetes.io/projected/50435b5c-9ccc-4dc9-932c-78a7b7427aa1-kube-api-access-snv27\") pod \"nova-api-0\" (UID: \"50435b5c-9ccc-4dc9-932c-78a7b7427aa1\") " pod="openstack/nova-api-0"
\"kubernetes.io/projected/50435b5c-9ccc-4dc9-932c-78a7b7427aa1-kube-api-access-snv27\") pod \"nova-api-0\" (UID: \"50435b5c-9ccc-4dc9-932c-78a7b7427aa1\") " pod="openstack/nova-api-0" Feb 02 21:42:09 crc kubenswrapper[4789]: I0202 21:42:09.067072 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50435b5c-9ccc-4dc9-932c-78a7b7427aa1-config-data\") pod \"nova-api-0\" (UID: \"50435b5c-9ccc-4dc9-932c-78a7b7427aa1\") " pod="openstack/nova-api-0" Feb 02 21:42:09 crc kubenswrapper[4789]: I0202 21:42:09.067113 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50435b5c-9ccc-4dc9-932c-78a7b7427aa1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"50435b5c-9ccc-4dc9-932c-78a7b7427aa1\") " pod="openstack/nova-api-0" Feb 02 21:42:09 crc kubenswrapper[4789]: I0202 21:42:09.067227 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/50435b5c-9ccc-4dc9-932c-78a7b7427aa1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"50435b5c-9ccc-4dc9-932c-78a7b7427aa1\") " pod="openstack/nova-api-0" Feb 02 21:42:09 crc kubenswrapper[4789]: I0202 21:42:09.067257 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/50435b5c-9ccc-4dc9-932c-78a7b7427aa1-public-tls-certs\") pod \"nova-api-0\" (UID: \"50435b5c-9ccc-4dc9-932c-78a7b7427aa1\") " pod="openstack/nova-api-0" Feb 02 21:42:09 crc kubenswrapper[4789]: I0202 21:42:09.067430 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50435b5c-9ccc-4dc9-932c-78a7b7427aa1-logs\") pod \"nova-api-0\" (UID: \"50435b5c-9ccc-4dc9-932c-78a7b7427aa1\") " pod="openstack/nova-api-0" Feb 02 21:42:09 crc kubenswrapper[4789]: I0202 21:42:09.073201 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/50435b5c-9ccc-4dc9-932c-78a7b7427aa1-public-tls-certs\") pod \"nova-api-0\" (UID: \"50435b5c-9ccc-4dc9-932c-78a7b7427aa1\") " pod="openstack/nova-api-0" Feb 02 21:42:09 crc kubenswrapper[4789]: I0202 21:42:09.073357 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50435b5c-9ccc-4dc9-932c-78a7b7427aa1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"50435b5c-9ccc-4dc9-932c-78a7b7427aa1\") " pod="openstack/nova-api-0" Feb 02 21:42:09 crc kubenswrapper[4789]: I0202 21:42:09.073393 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50435b5c-9ccc-4dc9-932c-78a7b7427aa1-config-data\") pod \"nova-api-0\" (UID: \"50435b5c-9ccc-4dc9-932c-78a7b7427aa1\") " pod="openstack/nova-api-0" Feb 02 21:42:09 crc kubenswrapper[4789]: I0202 21:42:09.073596 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/50435b5c-9ccc-4dc9-932c-78a7b7427aa1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"50435b5c-9ccc-4dc9-932c-78a7b7427aa1\") " pod="openstack/nova-api-0" Feb 02 21:42:09 crc kubenswrapper[4789]: I0202 21:42:09.088725 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snv27\" (UniqueName: 
\"kubernetes.io/projected/50435b5c-9ccc-4dc9-932c-78a7b7427aa1-kube-api-access-snv27\") pod \"nova-api-0\" (UID: \"50435b5c-9ccc-4dc9-932c-78a7b7427aa1\") " pod="openstack/nova-api-0" Feb 02 21:42:09 crc kubenswrapper[4789]: I0202 21:42:09.165789 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 21:42:09 crc kubenswrapper[4789]: I0202 21:42:09.509007 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b579f7f4-db1f-4d76-82fb-ef4cad438842","Type":"ContainerStarted","Data":"28a7ed128e7bef7f569955019dd73ac9d95249468906497c95bad0c6363ebdd8"} Feb 02 21:42:09 crc kubenswrapper[4789]: I0202 21:42:09.509398 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b579f7f4-db1f-4d76-82fb-ef4cad438842","Type":"ContainerStarted","Data":"a859b36ae2f6d42dfb7cd8f54d60e3cb5732145a3e0c8c167eaeb102b2c4871f"} Feb 02 21:42:09 crc kubenswrapper[4789]: I0202 21:42:09.524516 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 02 21:42:09 crc kubenswrapper[4789]: I0202 21:42:09.667069 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 21:42:09 crc kubenswrapper[4789]: I0202 21:42:09.801526 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-hw2cp"] Feb 02 21:42:09 crc kubenswrapper[4789]: I0202 21:42:09.802728 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hw2cp" Feb 02 21:42:09 crc kubenswrapper[4789]: I0202 21:42:09.804059 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 02 21:42:09 crc kubenswrapper[4789]: I0202 21:42:09.806404 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 02 21:42:09 crc kubenswrapper[4789]: I0202 21:42:09.811965 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-hw2cp"] Feb 02 21:42:09 crc kubenswrapper[4789]: I0202 21:42:09.896987 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/314d39fc-1687-4ed2-bac5-8ed19ba3cdab-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-hw2cp\" (UID: \"314d39fc-1687-4ed2-bac5-8ed19ba3cdab\") " pod="openstack/nova-cell1-cell-mapping-hw2cp" Feb 02 21:42:09 crc kubenswrapper[4789]: I0202 21:42:09.897189 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/314d39fc-1687-4ed2-bac5-8ed19ba3cdab-config-data\") pod \"nova-cell1-cell-mapping-hw2cp\" (UID: \"314d39fc-1687-4ed2-bac5-8ed19ba3cdab\") " pod="openstack/nova-cell1-cell-mapping-hw2cp" Feb 02 21:42:09 crc kubenswrapper[4789]: I0202 21:42:09.897295 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/314d39fc-1687-4ed2-bac5-8ed19ba3cdab-scripts\") pod \"nova-cell1-cell-mapping-hw2cp\" (UID: \"314d39fc-1687-4ed2-bac5-8ed19ba3cdab\") " pod="openstack/nova-cell1-cell-mapping-hw2cp" Feb 02 21:42:09 crc kubenswrapper[4789]: I0202 21:42:09.897629 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwkq6\" (UniqueName: 
\"kubernetes.io/projected/314d39fc-1687-4ed2-bac5-8ed19ba3cdab-kube-api-access-bwkq6\") pod \"nova-cell1-cell-mapping-hw2cp\" (UID: \"314d39fc-1687-4ed2-bac5-8ed19ba3cdab\") " pod="openstack/nova-cell1-cell-mapping-hw2cp" Feb 02 21:42:09 crc kubenswrapper[4789]: I0202 21:42:09.999775 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwkq6\" (UniqueName: \"kubernetes.io/projected/314d39fc-1687-4ed2-bac5-8ed19ba3cdab-kube-api-access-bwkq6\") pod \"nova-cell1-cell-mapping-hw2cp\" (UID: \"314d39fc-1687-4ed2-bac5-8ed19ba3cdab\") " pod="openstack/nova-cell1-cell-mapping-hw2cp" Feb 02 21:42:09 crc kubenswrapper[4789]: I0202 21:42:09.999879 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/314d39fc-1687-4ed2-bac5-8ed19ba3cdab-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-hw2cp\" (UID: \"314d39fc-1687-4ed2-bac5-8ed19ba3cdab\") " pod="openstack/nova-cell1-cell-mapping-hw2cp" Feb 02 21:42:09 crc kubenswrapper[4789]: I0202 21:42:09.999923 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/314d39fc-1687-4ed2-bac5-8ed19ba3cdab-config-data\") pod \"nova-cell1-cell-mapping-hw2cp\" (UID: \"314d39fc-1687-4ed2-bac5-8ed19ba3cdab\") " pod="openstack/nova-cell1-cell-mapping-hw2cp" Feb 02 21:42:09 crc kubenswrapper[4789]: I0202 21:42:09.999948 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/314d39fc-1687-4ed2-bac5-8ed19ba3cdab-scripts\") pod \"nova-cell1-cell-mapping-hw2cp\" (UID: \"314d39fc-1687-4ed2-bac5-8ed19ba3cdab\") " pod="openstack/nova-cell1-cell-mapping-hw2cp" Feb 02 21:42:10 crc kubenswrapper[4789]: I0202 21:42:10.004103 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/314d39fc-1687-4ed2-bac5-8ed19ba3cdab-config-data\") pod \"nova-cell1-cell-mapping-hw2cp\" (UID: \"314d39fc-1687-4ed2-bac5-8ed19ba3cdab\") " pod="openstack/nova-cell1-cell-mapping-hw2cp" Feb 02 21:42:10 crc kubenswrapper[4789]: I0202 21:42:10.004537 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/314d39fc-1687-4ed2-bac5-8ed19ba3cdab-scripts\") pod \"nova-cell1-cell-mapping-hw2cp\" (UID: \"314d39fc-1687-4ed2-bac5-8ed19ba3cdab\") " pod="openstack/nova-cell1-cell-mapping-hw2cp" Feb 02 21:42:10 crc kubenswrapper[4789]: I0202 21:42:10.005099 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/314d39fc-1687-4ed2-bac5-8ed19ba3cdab-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-hw2cp\" (UID: \"314d39fc-1687-4ed2-bac5-8ed19ba3cdab\") " pod="openstack/nova-cell1-cell-mapping-hw2cp" Feb 02 21:42:10 crc kubenswrapper[4789]: I0202 21:42:10.021276 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwkq6\" (UniqueName: \"kubernetes.io/projected/314d39fc-1687-4ed2-bac5-8ed19ba3cdab-kube-api-access-bwkq6\") pod \"nova-cell1-cell-mapping-hw2cp\" (UID: \"314d39fc-1687-4ed2-bac5-8ed19ba3cdab\") " pod="openstack/nova-cell1-cell-mapping-hw2cp" Feb 02 21:42:10 crc kubenswrapper[4789]: I0202 21:42:10.249142 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hw2cp" Feb 02 21:42:10 crc kubenswrapper[4789]: I0202 21:42:10.439294 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7098ef7-f9ae-4cd8-8264-9cfec2c20343" path="/var/lib/kubelet/pods/a7098ef7-f9ae-4cd8-8264-9cfec2c20343/volumes" Feb 02 21:42:10 crc kubenswrapper[4789]: I0202 21:42:10.523148 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"50435b5c-9ccc-4dc9-932c-78a7b7427aa1","Type":"ContainerStarted","Data":"9238d33a318781099017d09ea25014a1beb0a95ed007f02cd3aea76b743d4039"} Feb 02 21:42:10 crc kubenswrapper[4789]: I0202 21:42:10.530137 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"50435b5c-9ccc-4dc9-932c-78a7b7427aa1","Type":"ContainerStarted","Data":"f04211c79574acf467ca6a377a4f1a4bfbeef77a81d8acc61a818ea4bebb8b1b"} Feb 02 21:42:10 crc kubenswrapper[4789]: I0202 21:42:10.530174 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"50435b5c-9ccc-4dc9-932c-78a7b7427aa1","Type":"ContainerStarted","Data":"a12c67457ae171b60c5004bb7370a71c36d462f430271d4ca267d8098d45536b"} Feb 02 21:42:10 crc kubenswrapper[4789]: I0202 21:42:10.530183 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b579f7f4-db1f-4d76-82fb-ef4cad438842","Type":"ContainerStarted","Data":"4442ad2bcd72e1f7d739ef50d0304ab053ba1e52fd3d4c19d121698c07aa9558"} Feb 02 21:42:10 crc kubenswrapper[4789]: I0202 21:42:10.801227 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.801209998 podStartE2EDuration="2.801209998s" podCreationTimestamp="2026-02-02 21:42:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:42:10.538868646 +0000 UTC m=+1350.833893665" watchObservedRunningTime="2026-02-02 21:42:10.801209998 +0000 UTC m=+1351.096235017" Feb 02 21:42:10 crc kubenswrapper[4789]: W0202 21:42:10.806319 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod314d39fc_1687_4ed2_bac5_8ed19ba3cdab.slice/crio-4015566f4654ee3eb2e4c9ae4103426c19d69025ccc6a22d5e9e96c64b89cd7e WatchSource:0}: Error finding container 4015566f4654ee3eb2e4c9ae4103426c19d69025ccc6a22d5e9e96c64b89cd7e: Status 404 returned error can't find the container with id 4015566f4654ee3eb2e4c9ae4103426c19d69025ccc6a22d5e9e96c64b89cd7e Feb 02 21:42:10 crc kubenswrapper[4789]: I0202 21:42:10.806808 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-hw2cp"] Feb 02 21:42:11 crc kubenswrapper[4789]: I0202 21:42:11.535757 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hw2cp" event={"ID":"314d39fc-1687-4ed2-bac5-8ed19ba3cdab","Type":"ContainerStarted","Data":"4e428b3e90b59603738751c709dbc7de1ccef26fb2e50aac0b788d4a2d55579c"} Feb 02 21:42:11 crc kubenswrapper[4789]: I0202 21:42:11.535997 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hw2cp" event={"ID":"314d39fc-1687-4ed2-bac5-8ed19ba3cdab","Type":"ContainerStarted","Data":"4015566f4654ee3eb2e4c9ae4103426c19d69025ccc6a22d5e9e96c64b89cd7e"} Feb 02 21:42:11 crc kubenswrapper[4789]: I0202 21:42:11.540398 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b579f7f4-db1f-4d76-82fb-ef4cad438842","Type":"ContainerStarted","Data":"c4d593fd14424a40e7eb4b508c719970461ef690e1eb1894e38dd03571b8b07b"} Feb 02 21:42:11 crc kubenswrapper[4789]: I0202 21:42:11.563429 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-hw2cp" podStartSLOduration=2.5634109819999997 podStartE2EDuration="2.563410982s" podCreationTimestamp="2026-02-02 21:42:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:42:11.554819359 +0000 UTC m=+1351.849844378" watchObservedRunningTime="2026-02-02 21:42:11.563410982 +0000 UTC m=+1351.858436001" Feb 02 21:42:12 crc kubenswrapper[4789]: I0202 21:42:12.042814 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-r4rd5" Feb 02 21:42:12 crc kubenswrapper[4789]: I0202 21:42:12.109476 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-c7r2c"] Feb 02 21:42:12 crc kubenswrapper[4789]: I0202 21:42:12.109761 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-c7r2c" podUID="6b8f57c8-9467-4545-89f7-bbda22025d26" containerName="dnsmasq-dns" containerID="cri-o://f71ea990fd211f90f73858215cc4f96f5720f05aa94fd89824ef0789717946ca" gracePeriod=10 Feb 02 21:42:12 crc kubenswrapper[4789]: E0202 21:42:12.340949 4789 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b8f57c8_9467_4545_89f7_bbda22025d26.slice/crio-conmon-f71ea990fd211f90f73858215cc4f96f5720f05aa94fd89824ef0789717946ca.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b8f57c8_9467_4545_89f7_bbda22025d26.slice/crio-f71ea990fd211f90f73858215cc4f96f5720f05aa94fd89824ef0789717946ca.scope\": RecentStats: unable to find data in memory cache]" Feb 02 21:42:12 crc kubenswrapper[4789]: I0202 21:42:12.548274 4789 generic.go:334] "Generic (PLEG): container finished" podID="6b8f57c8-9467-4545-89f7-bbda22025d26" containerID="f71ea990fd211f90f73858215cc4f96f5720f05aa94fd89824ef0789717946ca" exitCode=0 Feb 02 21:42:12 crc kubenswrapper[4789]: I0202 21:42:12.549413 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-c7r2c" event={"ID":"6b8f57c8-9467-4545-89f7-bbda22025d26","Type":"ContainerDied","Data":"f71ea990fd211f90f73858215cc4f96f5720f05aa94fd89824ef0789717946ca"} Feb 02 21:42:12 crc kubenswrapper[4789]: I0202 21:42:12.549509 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-c7r2c" event={"ID":"6b8f57c8-9467-4545-89f7-bbda22025d26","Type":"ContainerDied","Data":"0c2c7c161d7ae39b46d88b379c4d9802e3bae9f6fc8459b38df4b80491841591"} Feb 02 21:42:12 crc kubenswrapper[4789]: I0202 21:42:12.549598 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c2c7c161d7ae39b46d88b379c4d9802e3bae9f6fc8459b38df4b80491841591" Feb 02 21:42:12 crc kubenswrapper[4789]: I0202 21:42:12.630118 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-c7r2c" Feb 02 21:42:12 crc kubenswrapper[4789]: I0202 21:42:12.759871 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6b8f57c8-9467-4545-89f7-bbda22025d26-dns-swift-storage-0\") pod \"6b8f57c8-9467-4545-89f7-bbda22025d26\" (UID: \"6b8f57c8-9467-4545-89f7-bbda22025d26\") " Feb 02 21:42:12 crc kubenswrapper[4789]: I0202 21:42:12.760231 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b8f57c8-9467-4545-89f7-bbda22025d26-dns-svc\") pod \"6b8f57c8-9467-4545-89f7-bbda22025d26\" (UID: \"6b8f57c8-9467-4545-89f7-bbda22025d26\") " Feb 02 21:42:12 crc kubenswrapper[4789]: I0202 21:42:12.760351 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jf6j2\" (UniqueName: \"kubernetes.io/projected/6b8f57c8-9467-4545-89f7-bbda22025d26-kube-api-access-jf6j2\") pod \"6b8f57c8-9467-4545-89f7-bbda22025d26\" (UID: \"6b8f57c8-9467-4545-89f7-bbda22025d26\") " Feb 02 21:42:12 crc kubenswrapper[4789]: I0202 21:42:12.760374 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b8f57c8-9467-4545-89f7-bbda22025d26-config\") pod \"6b8f57c8-9467-4545-89f7-bbda22025d26\" (UID: \"6b8f57c8-9467-4545-89f7-bbda22025d26\") " Feb 02 21:42:12 crc kubenswrapper[4789]: I0202 21:42:12.760476 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b8f57c8-9467-4545-89f7-bbda22025d26-ovsdbserver-sb\") pod \"6b8f57c8-9467-4545-89f7-bbda22025d26\" (UID: \"6b8f57c8-9467-4545-89f7-bbda22025d26\") " Feb 02 21:42:12 crc kubenswrapper[4789]: I0202 21:42:12.760493 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b8f57c8-9467-4545-89f7-bbda22025d26-ovsdbserver-nb\") pod \"6b8f57c8-9467-4545-89f7-bbda22025d26\" (UID: \"6b8f57c8-9467-4545-89f7-bbda22025d26\") " Feb 02 21:42:12 crc kubenswrapper[4789]: I0202 21:42:12.766566 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b8f57c8-9467-4545-89f7-bbda22025d26-kube-api-access-jf6j2" (OuterVolumeSpecName: "kube-api-access-jf6j2") pod "6b8f57c8-9467-4545-89f7-bbda22025d26" (UID: "6b8f57c8-9467-4545-89f7-bbda22025d26"). InnerVolumeSpecName "kube-api-access-jf6j2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:42:12 crc kubenswrapper[4789]: I0202 21:42:12.816280 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b8f57c8-9467-4545-89f7-bbda22025d26-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6b8f57c8-9467-4545-89f7-bbda22025d26" (UID: "6b8f57c8-9467-4545-89f7-bbda22025d26"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:42:12 crc kubenswrapper[4789]: I0202 21:42:12.817957 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b8f57c8-9467-4545-89f7-bbda22025d26-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6b8f57c8-9467-4545-89f7-bbda22025d26" (UID: "6b8f57c8-9467-4545-89f7-bbda22025d26"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:42:12 crc kubenswrapper[4789]: I0202 21:42:12.830046 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b8f57c8-9467-4545-89f7-bbda22025d26-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6b8f57c8-9467-4545-89f7-bbda22025d26" (UID: "6b8f57c8-9467-4545-89f7-bbda22025d26"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:42:12 crc kubenswrapper[4789]: I0202 21:42:12.840037 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b8f57c8-9467-4545-89f7-bbda22025d26-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6b8f57c8-9467-4545-89f7-bbda22025d26" (UID: "6b8f57c8-9467-4545-89f7-bbda22025d26"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:42:12 crc kubenswrapper[4789]: I0202 21:42:12.840098 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b8f57c8-9467-4545-89f7-bbda22025d26-config" (OuterVolumeSpecName: "config") pod "6b8f57c8-9467-4545-89f7-bbda22025d26" (UID: "6b8f57c8-9467-4545-89f7-bbda22025d26"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:42:12 crc kubenswrapper[4789]: I0202 21:42:12.862647 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jf6j2\" (UniqueName: \"kubernetes.io/projected/6b8f57c8-9467-4545-89f7-bbda22025d26-kube-api-access-jf6j2\") on node \"crc\" DevicePath \"\"" Feb 02 21:42:12 crc kubenswrapper[4789]: I0202 21:42:12.862675 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b8f57c8-9467-4545-89f7-bbda22025d26-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:42:12 crc kubenswrapper[4789]: I0202 21:42:12.862685 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b8f57c8-9467-4545-89f7-bbda22025d26-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 21:42:12 crc kubenswrapper[4789]: I0202 21:42:12.862695 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b8f57c8-9467-4545-89f7-bbda22025d26-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 21:42:12 crc kubenswrapper[4789]: I0202 21:42:12.862703 4789 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6b8f57c8-9467-4545-89f7-bbda22025d26-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 21:42:12 crc kubenswrapper[4789]: I0202 21:42:12.862712 4789 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b8f57c8-9467-4545-89f7-bbda22025d26-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 21:42:13 crc kubenswrapper[4789]: I0202 21:42:13.557645 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-c7r2c" Feb 02 21:42:13 crc kubenswrapper[4789]: I0202 21:42:13.609061 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-c7r2c"] Feb 02 21:42:13 crc kubenswrapper[4789]: I0202 21:42:13.622444 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-c7r2c"] Feb 02 21:42:14 crc kubenswrapper[4789]: I0202 21:42:14.438185 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b8f57c8-9467-4545-89f7-bbda22025d26" path="/var/lib/kubelet/pods/6b8f57c8-9467-4545-89f7-bbda22025d26/volumes" Feb 02 21:42:14 crc kubenswrapper[4789]: I0202 21:42:14.571507 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b579f7f4-db1f-4d76-82fb-ef4cad438842","Type":"ContainerStarted","Data":"0947f8cdd1f5dab6746e2ce88b87d9cc21b32de7ac54eec8ed4b2dc8b2ff1f61"} Feb 02 21:42:14 crc kubenswrapper[4789]: I0202 21:42:14.571989 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 21:42:14 crc kubenswrapper[4789]: I0202 21:42:14.607619 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.485852833 podStartE2EDuration="7.607569549s" podCreationTimestamp="2026-02-02 21:42:07 +0000 UTC" firstStartedPulling="2026-02-02 21:42:08.587779014 +0000 UTC m=+1348.882804033" lastFinishedPulling="2026-02-02 21:42:13.70949574 +0000 UTC m=+1354.004520749" observedRunningTime="2026-02-02 21:42:14.593212463 +0000 UTC m=+1354.888237522" watchObservedRunningTime="2026-02-02 21:42:14.607569549 +0000 UTC m=+1354.902594608" Feb 02 21:42:16 crc kubenswrapper[4789]: I0202 21:42:16.600163 4789 generic.go:334] "Generic (PLEG): container finished" podID="314d39fc-1687-4ed2-bac5-8ed19ba3cdab" containerID="4e428b3e90b59603738751c709dbc7de1ccef26fb2e50aac0b788d4a2d55579c" exitCode=0 Feb 02 21:42:16 crc kubenswrapper[4789]: I0202 21:42:16.600217 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hw2cp" event={"ID":"314d39fc-1687-4ed2-bac5-8ed19ba3cdab","Type":"ContainerDied","Data":"4e428b3e90b59603738751c709dbc7de1ccef26fb2e50aac0b788d4a2d55579c"} Feb 02 21:42:18 crc kubenswrapper[4789]: I0202 21:42:18.089670 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hw2cp" Feb 02 21:42:18 crc kubenswrapper[4789]: I0202 21:42:18.172467 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/314d39fc-1687-4ed2-bac5-8ed19ba3cdab-scripts\") pod \"314d39fc-1687-4ed2-bac5-8ed19ba3cdab\" (UID: \"314d39fc-1687-4ed2-bac5-8ed19ba3cdab\") " Feb 02 21:42:18 crc kubenswrapper[4789]: I0202 21:42:18.172526 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwkq6\" (UniqueName: \"kubernetes.io/projected/314d39fc-1687-4ed2-bac5-8ed19ba3cdab-kube-api-access-bwkq6\") pod \"314d39fc-1687-4ed2-bac5-8ed19ba3cdab\" (UID: \"314d39fc-1687-4ed2-bac5-8ed19ba3cdab\") " Feb 02 21:42:18 crc kubenswrapper[4789]: I0202 21:42:18.172640 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/314d39fc-1687-4ed2-bac5-8ed19ba3cdab-combined-ca-bundle\") pod \"314d39fc-1687-4ed2-bac5-8ed19ba3cdab\" (UID: \"314d39fc-1687-4ed2-bac5-8ed19ba3cdab\") " Feb 02 21:42:18 crc kubenswrapper[4789]: I0202 21:42:18.172766 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/314d39fc-1687-4ed2-bac5-8ed19ba3cdab-config-data\") pod \"314d39fc-1687-4ed2-bac5-8ed19ba3cdab\" (UID: \"314d39fc-1687-4ed2-bac5-8ed19ba3cdab\") " Feb 02 21:42:18 crc kubenswrapper[4789]: I0202 21:42:18.180969 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/314d39fc-1687-4ed2-bac5-8ed19ba3cdab-scripts" (OuterVolumeSpecName: "scripts") pod "314d39fc-1687-4ed2-bac5-8ed19ba3cdab" (UID: "314d39fc-1687-4ed2-bac5-8ed19ba3cdab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:42:18 crc kubenswrapper[4789]: I0202 21:42:18.181018 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/314d39fc-1687-4ed2-bac5-8ed19ba3cdab-kube-api-access-bwkq6" (OuterVolumeSpecName: "kube-api-access-bwkq6") pod "314d39fc-1687-4ed2-bac5-8ed19ba3cdab" (UID: "314d39fc-1687-4ed2-bac5-8ed19ba3cdab"). InnerVolumeSpecName "kube-api-access-bwkq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:42:18 crc kubenswrapper[4789]: I0202 21:42:18.217280 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/314d39fc-1687-4ed2-bac5-8ed19ba3cdab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "314d39fc-1687-4ed2-bac5-8ed19ba3cdab" (UID: "314d39fc-1687-4ed2-bac5-8ed19ba3cdab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:42:18 crc kubenswrapper[4789]: I0202 21:42:18.231291 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/314d39fc-1687-4ed2-bac5-8ed19ba3cdab-config-data" (OuterVolumeSpecName: "config-data") pod "314d39fc-1687-4ed2-bac5-8ed19ba3cdab" (UID: "314d39fc-1687-4ed2-bac5-8ed19ba3cdab"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:42:18 crc kubenswrapper[4789]: I0202 21:42:18.274747 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/314d39fc-1687-4ed2-bac5-8ed19ba3cdab-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:42:18 crc kubenswrapper[4789]: I0202 21:42:18.274781 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/314d39fc-1687-4ed2-bac5-8ed19ba3cdab-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:42:18 crc kubenswrapper[4789]: I0202 21:42:18.274793 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwkq6\" (UniqueName: \"kubernetes.io/projected/314d39fc-1687-4ed2-bac5-8ed19ba3cdab-kube-api-access-bwkq6\") on node \"crc\" DevicePath \"\"" Feb 02 21:42:18 crc kubenswrapper[4789]: I0202 21:42:18.274806 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/314d39fc-1687-4ed2-bac5-8ed19ba3cdab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:42:18 crc kubenswrapper[4789]: I0202 21:42:18.624412 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hw2cp" event={"ID":"314d39fc-1687-4ed2-bac5-8ed19ba3cdab","Type":"ContainerDied","Data":"4015566f4654ee3eb2e4c9ae4103426c19d69025ccc6a22d5e9e96c64b89cd7e"} Feb 02 21:42:18 crc kubenswrapper[4789]: I0202 21:42:18.624496 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hw2cp" Feb 02 21:42:18 crc kubenswrapper[4789]: I0202 21:42:18.624511 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4015566f4654ee3eb2e4c9ae4103426c19d69025ccc6a22d5e9e96c64b89cd7e" Feb 02 21:42:18 crc kubenswrapper[4789]: I0202 21:42:18.825783 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 21:42:18 crc kubenswrapper[4789]: I0202 21:42:18.826064 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="50435b5c-9ccc-4dc9-932c-78a7b7427aa1" containerName="nova-api-log" containerID="cri-o://f04211c79574acf467ca6a377a4f1a4bfbeef77a81d8acc61a818ea4bebb8b1b" gracePeriod=30 Feb 02 21:42:18 crc kubenswrapper[4789]: I0202 21:42:18.826213 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="50435b5c-9ccc-4dc9-932c-78a7b7427aa1" containerName="nova-api-api" containerID="cri-o://9238d33a318781099017d09ea25014a1beb0a95ed007f02cd3aea76b743d4039" gracePeriod=30 Feb 02 21:42:18 crc kubenswrapper[4789]: I0202 21:42:18.867675 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 21:42:18 crc kubenswrapper[4789]: I0202 21:42:18.868226 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6" containerName="nova-scheduler-scheduler" containerID="cri-o://bbf3961f4b9988969194fdfa516605852c0d8137222ae22baf8d415f3a2897d2" gracePeriod=30 Feb 02 21:42:18 crc kubenswrapper[4789]: I0202 21:42:18.907737 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 21:42:18 crc kubenswrapper[4789]: I0202 21:42:18.908016 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2eed9773-f2f1-4d61-8b88-c0eb30620612" 
containerName="nova-metadata-log" containerID="cri-o://01a4dbf7d2f219eb93cb798d6f034d38d23e5a5e159195b783019d1d6b5662fe" gracePeriod=30 Feb 02 21:42:18 crc kubenswrapper[4789]: I0202 21:42:18.908115 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2eed9773-f2f1-4d61-8b88-c0eb30620612" containerName="nova-metadata-metadata" containerID="cri-o://1e4d57a0c906192712dd83cd3490316f6b4df1a328f976508a9336ce6fa60b36" gracePeriod=30 Feb 02 21:42:19 crc kubenswrapper[4789]: E0202 21:42:19.318486 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bbf3961f4b9988969194fdfa516605852c0d8137222ae22baf8d415f3a2897d2" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 21:42:19 crc kubenswrapper[4789]: E0202 21:42:19.320352 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bbf3961f4b9988969194fdfa516605852c0d8137222ae22baf8d415f3a2897d2" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 21:42:19 crc kubenswrapper[4789]: E0202 21:42:19.322155 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bbf3961f4b9988969194fdfa516605852c0d8137222ae22baf8d415f3a2897d2" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 21:42:19 crc kubenswrapper[4789]: E0202 21:42:19.322219 4789 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6" containerName="nova-scheduler-scheduler" Feb 02 21:42:19 crc kubenswrapper[4789]: I0202 21:42:19.480811 4789 util.go:48] "No ready sandbox for pod can be found. 
Feb 02 21:42:19 crc kubenswrapper[4789]: I0202 21:42:19.606803 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50435b5c-9ccc-4dc9-932c-78a7b7427aa1-logs\") pod \"50435b5c-9ccc-4dc9-932c-78a7b7427aa1\" (UID: \"50435b5c-9ccc-4dc9-932c-78a7b7427aa1\") "
Feb 02 21:42:19 crc kubenswrapper[4789]: I0202 21:42:19.606961 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50435b5c-9ccc-4dc9-932c-78a7b7427aa1-combined-ca-bundle\") pod \"50435b5c-9ccc-4dc9-932c-78a7b7427aa1\" (UID: \"50435b5c-9ccc-4dc9-932c-78a7b7427aa1\") "
Feb 02 21:42:19 crc kubenswrapper[4789]: I0202 21:42:19.607011 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/50435b5c-9ccc-4dc9-932c-78a7b7427aa1-internal-tls-certs\") pod \"50435b5c-9ccc-4dc9-932c-78a7b7427aa1\" (UID: \"50435b5c-9ccc-4dc9-932c-78a7b7427aa1\") "
Feb 02 21:42:19 crc kubenswrapper[4789]: I0202 21:42:19.607068 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snv27\" (UniqueName: \"kubernetes.io/projected/50435b5c-9ccc-4dc9-932c-78a7b7427aa1-kube-api-access-snv27\") pod \"50435b5c-9ccc-4dc9-932c-78a7b7427aa1\" (UID: \"50435b5c-9ccc-4dc9-932c-78a7b7427aa1\") "
Feb 02 21:42:19 crc kubenswrapper[4789]: I0202 21:42:19.607077 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50435b5c-9ccc-4dc9-932c-78a7b7427aa1-logs" (OuterVolumeSpecName: "logs") pod "50435b5c-9ccc-4dc9-932c-78a7b7427aa1" (UID: "50435b5c-9ccc-4dc9-932c-78a7b7427aa1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 21:42:19 crc kubenswrapper[4789]: I0202 21:42:19.607097 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50435b5c-9ccc-4dc9-932c-78a7b7427aa1-config-data\") pod \"50435b5c-9ccc-4dc9-932c-78a7b7427aa1\" (UID: \"50435b5c-9ccc-4dc9-932c-78a7b7427aa1\") "
Feb 02 21:42:19 crc kubenswrapper[4789]: I0202 21:42:19.607169 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/50435b5c-9ccc-4dc9-932c-78a7b7427aa1-public-tls-certs\") pod \"50435b5c-9ccc-4dc9-932c-78a7b7427aa1\" (UID: \"50435b5c-9ccc-4dc9-932c-78a7b7427aa1\") "
Feb 02 21:42:19 crc kubenswrapper[4789]: I0202 21:42:19.607849 4789 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50435b5c-9ccc-4dc9-932c-78a7b7427aa1-logs\") on node \"crc\" DevicePath \"\""
Feb 02 21:42:19 crc kubenswrapper[4789]: I0202 21:42:19.615748 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50435b5c-9ccc-4dc9-932c-78a7b7427aa1-kube-api-access-snv27" (OuterVolumeSpecName: "kube-api-access-snv27") pod "50435b5c-9ccc-4dc9-932c-78a7b7427aa1" (UID: "50435b5c-9ccc-4dc9-932c-78a7b7427aa1"). InnerVolumeSpecName "kube-api-access-snv27". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 21:42:19 crc kubenswrapper[4789]: I0202 21:42:19.635144 4789 generic.go:334] "Generic (PLEG): container finished" podID="2eed9773-f2f1-4d61-8b88-c0eb30620612" containerID="01a4dbf7d2f219eb93cb798d6f034d38d23e5a5e159195b783019d1d6b5662fe" exitCode=143
Feb 02 21:42:19 crc kubenswrapper[4789]: I0202 21:42:19.635219 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2eed9773-f2f1-4d61-8b88-c0eb30620612","Type":"ContainerDied","Data":"01a4dbf7d2f219eb93cb798d6f034d38d23e5a5e159195b783019d1d6b5662fe"}
Feb 02 21:42:19 crc kubenswrapper[4789]: I0202 21:42:19.639691 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50435b5c-9ccc-4dc9-932c-78a7b7427aa1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50435b5c-9ccc-4dc9-932c-78a7b7427aa1" (UID: "50435b5c-9ccc-4dc9-932c-78a7b7427aa1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 21:42:19 crc kubenswrapper[4789]: I0202 21:42:19.640964 4789 generic.go:334] "Generic (PLEG): container finished" podID="50435b5c-9ccc-4dc9-932c-78a7b7427aa1" containerID="9238d33a318781099017d09ea25014a1beb0a95ed007f02cd3aea76b743d4039" exitCode=0
Feb 02 21:42:19 crc kubenswrapper[4789]: I0202 21:42:19.641046 4789 generic.go:334] "Generic (PLEG): container finished" podID="50435b5c-9ccc-4dc9-932c-78a7b7427aa1" containerID="f04211c79574acf467ca6a377a4f1a4bfbeef77a81d8acc61a818ea4bebb8b1b" exitCode=143
Feb 02 21:42:19 crc kubenswrapper[4789]: I0202 21:42:19.641074 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"50435b5c-9ccc-4dc9-932c-78a7b7427aa1","Type":"ContainerDied","Data":"9238d33a318781099017d09ea25014a1beb0a95ed007f02cd3aea76b743d4039"}
Feb 02 21:42:19 crc kubenswrapper[4789]: I0202 21:42:19.641104 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"50435b5c-9ccc-4dc9-932c-78a7b7427aa1","Type":"ContainerDied","Data":"f04211c79574acf467ca6a377a4f1a4bfbeef77a81d8acc61a818ea4bebb8b1b"}
Feb 02 21:42:19 crc kubenswrapper[4789]: I0202 21:42:19.641117 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"50435b5c-9ccc-4dc9-932c-78a7b7427aa1","Type":"ContainerDied","Data":"a12c67457ae171b60c5004bb7370a71c36d462f430271d4ca267d8098d45536b"}
Feb 02 21:42:19 crc kubenswrapper[4789]: I0202 21:42:19.641135 4789 scope.go:117] "RemoveContainer" containerID="9238d33a318781099017d09ea25014a1beb0a95ed007f02cd3aea76b743d4039"
Feb 02 21:42:19 crc kubenswrapper[4789]: I0202 21:42:19.641313 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 02 21:42:19 crc kubenswrapper[4789]: I0202 21:42:19.644941 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50435b5c-9ccc-4dc9-932c-78a7b7427aa1-config-data" (OuterVolumeSpecName: "config-data") pod "50435b5c-9ccc-4dc9-932c-78a7b7427aa1" (UID: "50435b5c-9ccc-4dc9-932c-78a7b7427aa1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 21:42:19 crc kubenswrapper[4789]: I0202 21:42:19.663063 4789 scope.go:117] "RemoveContainer" containerID="f04211c79574acf467ca6a377a4f1a4bfbeef77a81d8acc61a818ea4bebb8b1b"
Feb 02 21:42:19 crc kubenswrapper[4789]: I0202 21:42:19.670507 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50435b5c-9ccc-4dc9-932c-78a7b7427aa1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "50435b5c-9ccc-4dc9-932c-78a7b7427aa1" (UID: "50435b5c-9ccc-4dc9-932c-78a7b7427aa1"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 21:42:19 crc kubenswrapper[4789]: I0202 21:42:19.685095 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50435b5c-9ccc-4dc9-932c-78a7b7427aa1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "50435b5c-9ccc-4dc9-932c-78a7b7427aa1" (UID: "50435b5c-9ccc-4dc9-932c-78a7b7427aa1"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 21:42:19 crc kubenswrapper[4789]: I0202 21:42:19.688836 4789 scope.go:117] "RemoveContainer" containerID="9238d33a318781099017d09ea25014a1beb0a95ed007f02cd3aea76b743d4039"
Feb 02 21:42:19 crc kubenswrapper[4789]: E0202 21:42:19.689403 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9238d33a318781099017d09ea25014a1beb0a95ed007f02cd3aea76b743d4039\": container with ID starting with 9238d33a318781099017d09ea25014a1beb0a95ed007f02cd3aea76b743d4039 not found: ID does not exist" containerID="9238d33a318781099017d09ea25014a1beb0a95ed007f02cd3aea76b743d4039"
Feb 02 21:42:19 crc kubenswrapper[4789]: I0202 21:42:19.689460 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9238d33a318781099017d09ea25014a1beb0a95ed007f02cd3aea76b743d4039"} err="failed to get container status \"9238d33a318781099017d09ea25014a1beb0a95ed007f02cd3aea76b743d4039\": rpc error: code = NotFound desc = could not find container \"9238d33a318781099017d09ea25014a1beb0a95ed007f02cd3aea76b743d4039\": container with ID starting with 9238d33a318781099017d09ea25014a1beb0a95ed007f02cd3aea76b743d4039 not found: ID does not exist"
Feb 02 21:42:19 crc kubenswrapper[4789]: I0202 21:42:19.689487 4789 scope.go:117] "RemoveContainer" containerID="f04211c79574acf467ca6a377a4f1a4bfbeef77a81d8acc61a818ea4bebb8b1b"
Feb 02 21:42:19 crc kubenswrapper[4789]: E0202 21:42:19.689968 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f04211c79574acf467ca6a377a4f1a4bfbeef77a81d8acc61a818ea4bebb8b1b\": container with ID starting with f04211c79574acf467ca6a377a4f1a4bfbeef77a81d8acc61a818ea4bebb8b1b not found: ID does not exist" containerID="f04211c79574acf467ca6a377a4f1a4bfbeef77a81d8acc61a818ea4bebb8b1b"
Feb 02 21:42:19 crc kubenswrapper[4789]: I0202 21:42:19.690035 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f04211c79574acf467ca6a377a4f1a4bfbeef77a81d8acc61a818ea4bebb8b1b"} err="failed to get container status \"f04211c79574acf467ca6a377a4f1a4bfbeef77a81d8acc61a818ea4bebb8b1b\": rpc error: code = NotFound desc = could not find container \"f04211c79574acf467ca6a377a4f1a4bfbeef77a81d8acc61a818ea4bebb8b1b\": container with ID starting with f04211c79574acf467ca6a377a4f1a4bfbeef77a81d8acc61a818ea4bebb8b1b not found: ID does not exist"
Feb 02 21:42:19 crc kubenswrapper[4789]: I0202 21:42:19.690070 4789 scope.go:117] "RemoveContainer" containerID="9238d33a318781099017d09ea25014a1beb0a95ed007f02cd3aea76b743d4039"
Feb 02 21:42:19 crc kubenswrapper[4789]: I0202 21:42:19.690385 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9238d33a318781099017d09ea25014a1beb0a95ed007f02cd3aea76b743d4039"} err="failed to get container status \"9238d33a318781099017d09ea25014a1beb0a95ed007f02cd3aea76b743d4039\": rpc error: code = NotFound desc = could not find container \"9238d33a318781099017d09ea25014a1beb0a95ed007f02cd3aea76b743d4039\": container with ID starting with 9238d33a318781099017d09ea25014a1beb0a95ed007f02cd3aea76b743d4039 not found: ID does not exist"
Feb 02 21:42:19 crc kubenswrapper[4789]: I0202 21:42:19.690408 4789 scope.go:117] "RemoveContainer" containerID="f04211c79574acf467ca6a377a4f1a4bfbeef77a81d8acc61a818ea4bebb8b1b"
Feb 02 21:42:19 crc kubenswrapper[4789]: I0202 21:42:19.690644 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f04211c79574acf467ca6a377a4f1a4bfbeef77a81d8acc61a818ea4bebb8b1b"} err="failed to get container status \"f04211c79574acf467ca6a377a4f1a4bfbeef77a81d8acc61a818ea4bebb8b1b\": rpc error: code = NotFound desc = could not find container \"f04211c79574acf467ca6a377a4f1a4bfbeef77a81d8acc61a818ea4bebb8b1b\": container with ID starting with f04211c79574acf467ca6a377a4f1a4bfbeef77a81d8acc61a818ea4bebb8b1b not found: ID does not exist"
Feb 02 21:42:19 crc kubenswrapper[4789]: I0202 21:42:19.709261 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50435b5c-9ccc-4dc9-932c-78a7b7427aa1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 21:42:19 crc kubenswrapper[4789]: I0202 21:42:19.709292 4789 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/50435b5c-9ccc-4dc9-932c-78a7b7427aa1-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 21:42:19 crc kubenswrapper[4789]: I0202 21:42:19.709305 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snv27\" (UniqueName: \"kubernetes.io/projected/50435b5c-9ccc-4dc9-932c-78a7b7427aa1-kube-api-access-snv27\") on node \"crc\" DevicePath \"\""
Feb 02 21:42:19 crc kubenswrapper[4789]: I0202 21:42:19.709318 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50435b5c-9ccc-4dc9-932c-78a7b7427aa1-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 21:42:19 crc kubenswrapper[4789]: I0202 21:42:19.709331 4789 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/50435b5c-9ccc-4dc9-932c-78a7b7427aa1-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 21:42:19 crc kubenswrapper[4789]: I0202 21:42:19.998271 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 02 21:42:20 crc kubenswrapper[4789]: I0202 21:42:20.015016 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 02 21:42:20 crc kubenswrapper[4789]: I0202 21:42:20.024697 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 02 21:42:20 crc kubenswrapper[4789]: E0202 21:42:20.025366 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b8f57c8-9467-4545-89f7-bbda22025d26" containerName="init"
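The SyncLoop DELETE, REMOVE, ADD run that closes this block is the API-driven replacement of nova-api-0: the old pod object (UID 50435b5c-…) is deleted and a new pod with the same name is created. That also explains the repeated "DeleteContainer returned error … NotFound" entries just above — the runtime had already pruned the containers, and retried deletions of an already-gone ID surface as NotFound. A small sketch (assumed path; regex keyed to the exact wording above) that reconstructs this ordering per pod:

import re

# Matches e.g.: "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
SYNC = re.compile(r'"SyncLoop (ADD|UPDATE|DELETE|REMOVE)" source="api" pods=\["([^"]+)"\]')

def api_lifecycle(path="kubelet.log"):  # assumed log location
    events = []
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = SYNC.search(line)
            if m:
                events.append((m.group(2), m.group(1)))
    return events

if __name__ == "__main__":
    for pod, verb in api_lifecycle():
        print(f"{verb:7s} {pod}")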
Feb 02 21:42:20 crc kubenswrapper[4789]: I0202 21:42:20.025386 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b8f57c8-9467-4545-89f7-bbda22025d26" containerName="init"
Feb 02 21:42:20 crc kubenswrapper[4789]: E0202 21:42:20.025404 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50435b5c-9ccc-4dc9-932c-78a7b7427aa1" containerName="nova-api-log"
Feb 02 21:42:20 crc kubenswrapper[4789]: I0202 21:42:20.025413 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="50435b5c-9ccc-4dc9-932c-78a7b7427aa1" containerName="nova-api-log"
Feb 02 21:42:20 crc kubenswrapper[4789]: E0202 21:42:20.025434 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b8f57c8-9467-4545-89f7-bbda22025d26" containerName="dnsmasq-dns"
Feb 02 21:42:20 crc kubenswrapper[4789]: I0202 21:42:20.025444 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b8f57c8-9467-4545-89f7-bbda22025d26" containerName="dnsmasq-dns"
Feb 02 21:42:20 crc kubenswrapper[4789]: E0202 21:42:20.025460 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50435b5c-9ccc-4dc9-932c-78a7b7427aa1" containerName="nova-api-api"
Feb 02 21:42:20 crc kubenswrapper[4789]: I0202 21:42:20.025468 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="50435b5c-9ccc-4dc9-932c-78a7b7427aa1" containerName="nova-api-api"
Feb 02 21:42:20 crc kubenswrapper[4789]: E0202 21:42:20.025484 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="314d39fc-1687-4ed2-bac5-8ed19ba3cdab" containerName="nova-manage"
Feb 02 21:42:20 crc kubenswrapper[4789]: I0202 21:42:20.025492 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="314d39fc-1687-4ed2-bac5-8ed19ba3cdab" containerName="nova-manage"
Feb 02 21:42:20 crc kubenswrapper[4789]: I0202 21:42:20.025763 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b8f57c8-9467-4545-89f7-bbda22025d26" containerName="dnsmasq-dns"
Feb 02 21:42:20 crc kubenswrapper[4789]: I0202 21:42:20.025778 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="50435b5c-9ccc-4dc9-932c-78a7b7427aa1" containerName="nova-api-api"
Feb 02 21:42:20 crc kubenswrapper[4789]: I0202 21:42:20.025805 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="314d39fc-1687-4ed2-bac5-8ed19ba3cdab" containerName="nova-manage"
Feb 02 21:42:20 crc kubenswrapper[4789]: I0202 21:42:20.025815 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="50435b5c-9ccc-4dc9-932c-78a7b7427aa1" containerName="nova-api-log"
Feb 02 21:42:20 crc kubenswrapper[4789]: I0202 21:42:20.027788 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 02 21:42:20 crc kubenswrapper[4789]: I0202 21:42:20.032038 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 02 21:42:20 crc kubenswrapper[4789]: I0202 21:42:20.032278 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 02 21:42:20 crc kubenswrapper[4789]: I0202 21:42:20.032562 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 02 21:42:20 crc kubenswrapper[4789]: I0202 21:42:20.038480 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 02 21:42:20 crc kubenswrapper[4789]: I0202 21:42:20.118352 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ae097e7-380b-4044-8598-abc3e1059356-public-tls-certs\") pod \"nova-api-0\" (UID: \"1ae097e7-380b-4044-8598-abc3e1059356\") " pod="openstack/nova-api-0"
Feb 02 21:42:20 crc kubenswrapper[4789]: I0202 21:42:20.118402 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ae097e7-380b-4044-8598-abc3e1059356-logs\") pod \"nova-api-0\" (UID: \"1ae097e7-380b-4044-8598-abc3e1059356\") " pod="openstack/nova-api-0"
Feb 02 21:42:20 crc kubenswrapper[4789]: I0202 21:42:20.118469 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9lg9\" (UniqueName: \"kubernetes.io/projected/1ae097e7-380b-4044-8598-abc3e1059356-kube-api-access-h9lg9\") pod \"nova-api-0\" (UID: \"1ae097e7-380b-4044-8598-abc3e1059356\") " pod="openstack/nova-api-0"
Feb 02 21:42:20 crc kubenswrapper[4789]: I0202 21:42:20.118506 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ae097e7-380b-4044-8598-abc3e1059356-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1ae097e7-380b-4044-8598-abc3e1059356\") " pod="openstack/nova-api-0"
Feb 02 21:42:20 crc kubenswrapper[4789]: I0202 21:42:20.118524 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ae097e7-380b-4044-8598-abc3e1059356-config-data\") pod \"nova-api-0\" (UID: \"1ae097e7-380b-4044-8598-abc3e1059356\") " pod="openstack/nova-api-0"
Feb 02 21:42:20 crc kubenswrapper[4789]: I0202 21:42:20.118562 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ae097e7-380b-4044-8598-abc3e1059356-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1ae097e7-380b-4044-8598-abc3e1059356\") " pod="openstack/nova-api-0"
Feb 02 21:42:20 crc kubenswrapper[4789]: I0202 21:42:20.220346 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ae097e7-380b-4044-8598-abc3e1059356-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1ae097e7-380b-4044-8598-abc3e1059356\") " pod="openstack/nova-api-0"
Feb 02 21:42:20 crc kubenswrapper[4789]: I0202 21:42:20.220430 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ae097e7-380b-4044-8598-abc3e1059356-public-tls-certs\") pod \"nova-api-0\" (UID: \"1ae097e7-380b-4044-8598-abc3e1059356\") " pod="openstack/nova-api-0"
Feb 02 21:42:20 crc kubenswrapper[4789]: I0202 21:42:20.220466 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ae097e7-380b-4044-8598-abc3e1059356-logs\") pod \"nova-api-0\" (UID: \"1ae097e7-380b-4044-8598-abc3e1059356\") " pod="openstack/nova-api-0"
Feb 02 21:42:20 crc kubenswrapper[4789]: I0202 21:42:20.220535 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9lg9\" (UniqueName: \"kubernetes.io/projected/1ae097e7-380b-4044-8598-abc3e1059356-kube-api-access-h9lg9\") pod \"nova-api-0\" (UID: \"1ae097e7-380b-4044-8598-abc3e1059356\") " pod="openstack/nova-api-0"
Feb 02 21:42:20 crc kubenswrapper[4789]: I0202 21:42:20.220593 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ae097e7-380b-4044-8598-abc3e1059356-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1ae097e7-380b-4044-8598-abc3e1059356\") " pod="openstack/nova-api-0"
Feb 02 21:42:20 crc kubenswrapper[4789]: I0202 21:42:20.220611 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ae097e7-380b-4044-8598-abc3e1059356-config-data\") pod \"nova-api-0\" (UID: \"1ae097e7-380b-4044-8598-abc3e1059356\") " pod="openstack/nova-api-0"
Feb 02 21:42:20 crc kubenswrapper[4789]: I0202 21:42:20.221960 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ae097e7-380b-4044-8598-abc3e1059356-logs\") pod \"nova-api-0\" (UID: \"1ae097e7-380b-4044-8598-abc3e1059356\") " pod="openstack/nova-api-0"
Feb 02 21:42:20 crc kubenswrapper[4789]: I0202 21:42:20.224454 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ae097e7-380b-4044-8598-abc3e1059356-config-data\") pod \"nova-api-0\" (UID: \"1ae097e7-380b-4044-8598-abc3e1059356\") " pod="openstack/nova-api-0"
Feb 02 21:42:20 crc kubenswrapper[4789]: I0202 21:42:20.224930 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ae097e7-380b-4044-8598-abc3e1059356-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1ae097e7-380b-4044-8598-abc3e1059356\") " pod="openstack/nova-api-0"
Feb 02 21:42:20 crc kubenswrapper[4789]: I0202 21:42:20.226096 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ae097e7-380b-4044-8598-abc3e1059356-public-tls-certs\") pod \"nova-api-0\" (UID: \"1ae097e7-380b-4044-8598-abc3e1059356\") " pod="openstack/nova-api-0"
Feb 02 21:42:20 crc kubenswrapper[4789]: I0202 21:42:20.230728 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ae097e7-380b-4044-8598-abc3e1059356-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1ae097e7-380b-4044-8598-abc3e1059356\") " pod="openstack/nova-api-0"
Feb 02 21:42:20 crc kubenswrapper[4789]: I0202 21:42:20.239002 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9lg9\" (UniqueName: \"kubernetes.io/projected/1ae097e7-380b-4044-8598-abc3e1059356-kube-api-access-h9lg9\") pod \"nova-api-0\" (UID: \"1ae097e7-380b-4044-8598-abc3e1059356\") " pod="openstack/nova-api-0"
Feb 02 21:42:20 crc kubenswrapper[4789]: I0202 21:42:20.365715 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 02 21:42:20 crc kubenswrapper[4789]: I0202 21:42:20.436923 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50435b5c-9ccc-4dc9-932c-78a7b7427aa1" path="/var/lib/kubelet/pods/50435b5c-9ccc-4dc9-932c-78a7b7427aa1/volumes"
Feb 02 21:42:20 crc kubenswrapper[4789]: W0202 21:42:20.869102 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ae097e7_380b_4044_8598_abc3e1059356.slice/crio-78724d0ec65400d21856ae24e9f73faa2306cb0d027d1efa4277f832f50d08ac WatchSource:0}: Error finding container 78724d0ec65400d21856ae24e9f73faa2306cb0d027d1efa4277f832f50d08ac: Status 404 returned error can't find the container with id 78724d0ec65400d21856ae24e9f73faa2306cb0d027d1efa4277f832f50d08ac
Feb 02 21:42:20 crc kubenswrapper[4789]: I0202 21:42:20.870747 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 02 21:42:21 crc kubenswrapper[4789]: I0202 21:42:21.672140 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1ae097e7-380b-4044-8598-abc3e1059356","Type":"ContainerStarted","Data":"0a2cae00db145b6560fcc0b648c1c292b3eb7df490809622a8a50541cde04a0c"}
Feb 02 21:42:21 crc kubenswrapper[4789]: I0202 21:42:21.672735 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1ae097e7-380b-4044-8598-abc3e1059356","Type":"ContainerStarted","Data":"2234c362242e0356a4e9c41d9d9c119ece3aa80e6631194820c7f16fcb2df8fa"}
Feb 02 21:42:21 crc kubenswrapper[4789]: I0202 21:42:21.672775 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1ae097e7-380b-4044-8598-abc3e1059356","Type":"ContainerStarted","Data":"78724d0ec65400d21856ae24e9f73faa2306cb0d027d1efa4277f832f50d08ac"}
Feb 02 21:42:21 crc kubenswrapper[4789]: I0202 21:42:21.700514 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.700480524 podStartE2EDuration="2.700480524s" podCreationTimestamp="2026-02-02 21:42:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:42:21.698773766 +0000 UTC m=+1361.993798815" watchObservedRunningTime="2026-02-02 21:42:21.700480524 +0000 UTC m=+1361.995505593"
Feb 02 21:42:22 crc kubenswrapper[4789]: I0202 21:42:22.057178 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="2eed9773-f2f1-4d61-8b88-c0eb30620612" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": read tcp 10.217.0.2:40970->10.217.0.195:8775: read: connection reset by peer"
Feb 02 21:42:22 crc kubenswrapper[4789]: I0202 21:42:22.057229 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="2eed9773-f2f1-4d61-8b88-c0eb30620612" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": read tcp 10.217.0.2:40958->10.217.0.195:8775: read: connection reset by peer"
Feb 02 21:42:22 crc kubenswrapper[4789]: I0202 21:42:22.635778 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
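With the volumes mounted, the replacement nova-api-0 sandbox and containers come up within roughly two seconds, and the pod_startup_latency_tracker records podStartSLOduration=2.70s (the zero-valued pulling timestamps mean the images were already present). A minimal sketch for pulling these SLO figures out of a log like this one (the path is an assumption; the regex matches the wording of the entry above):

import re

SLO = re.compile(r'"Observed pod startup duration" pod="([^"]+)" podStartSLOduration=([0-9.]+)')

def startup_durations(path="kubelet.log"):  # assumed log location
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = SLO.search(line)
            if m:
                yield m.group(1), float(m.group(2))

if __name__ == "__main__":
    for pod, secs in startup_durations():
        print(f"{pod}: started in {secs:.2f}s")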
Feb 02 21:42:22 crc kubenswrapper[4789]: I0202 21:42:22.694939 4789 generic.go:334] "Generic (PLEG): container finished" podID="2eed9773-f2f1-4d61-8b88-c0eb30620612" containerID="1e4d57a0c906192712dd83cd3490316f6b4df1a328f976508a9336ce6fa60b36" exitCode=0
Feb 02 21:42:22 crc kubenswrapper[4789]: I0202 21:42:22.695141 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2eed9773-f2f1-4d61-8b88-c0eb30620612","Type":"ContainerDied","Data":"1e4d57a0c906192712dd83cd3490316f6b4df1a328f976508a9336ce6fa60b36"}
Feb 02 21:42:22 crc kubenswrapper[4789]: I0202 21:42:22.695258 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2eed9773-f2f1-4d61-8b88-c0eb30620612","Type":"ContainerDied","Data":"1db476ce12680e953f5b4ccca8dcd2ebb48ad4f49c924be7a0af04c0463916d2"}
Feb 02 21:42:22 crc kubenswrapper[4789]: I0202 21:42:22.695280 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 02 21:42:22 crc kubenswrapper[4789]: I0202 21:42:22.695303 4789 scope.go:117] "RemoveContainer" containerID="1e4d57a0c906192712dd83cd3490316f6b4df1a328f976508a9336ce6fa60b36"
Feb 02 21:42:22 crc kubenswrapper[4789]: I0202 21:42:22.738884 4789 scope.go:117] "RemoveContainer" containerID="01a4dbf7d2f219eb93cb798d6f034d38d23e5a5e159195b783019d1d6b5662fe"
Feb 02 21:42:22 crc kubenswrapper[4789]: I0202 21:42:22.759292 4789 scope.go:117] "RemoveContainer" containerID="1e4d57a0c906192712dd83cd3490316f6b4df1a328f976508a9336ce6fa60b36"
Feb 02 21:42:22 crc kubenswrapper[4789]: E0202 21:42:22.760158 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e4d57a0c906192712dd83cd3490316f6b4df1a328f976508a9336ce6fa60b36\": container with ID starting with 1e4d57a0c906192712dd83cd3490316f6b4df1a328f976508a9336ce6fa60b36 not found: ID does not exist" containerID="1e4d57a0c906192712dd83cd3490316f6b4df1a328f976508a9336ce6fa60b36"
Feb 02 21:42:22 crc kubenswrapper[4789]: I0202 21:42:22.760229 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e4d57a0c906192712dd83cd3490316f6b4df1a328f976508a9336ce6fa60b36"} err="failed to get container status \"1e4d57a0c906192712dd83cd3490316f6b4df1a328f976508a9336ce6fa60b36\": rpc error: code = NotFound desc = could not find container \"1e4d57a0c906192712dd83cd3490316f6b4df1a328f976508a9336ce6fa60b36\": container with ID starting with 1e4d57a0c906192712dd83cd3490316f6b4df1a328f976508a9336ce6fa60b36 not found: ID does not exist"
Feb 02 21:42:22 crc kubenswrapper[4789]: I0202 21:42:22.760266 4789 scope.go:117] "RemoveContainer" containerID="01a4dbf7d2f219eb93cb798d6f034d38d23e5a5e159195b783019d1d6b5662fe"
Feb 02 21:42:22 crc kubenswrapper[4789]: E0202 21:42:22.763741 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01a4dbf7d2f219eb93cb798d6f034d38d23e5a5e159195b783019d1d6b5662fe\": container with ID starting with 01a4dbf7d2f219eb93cb798d6f034d38d23e5a5e159195b783019d1d6b5662fe not found: ID does not exist" containerID="01a4dbf7d2f219eb93cb798d6f034d38d23e5a5e159195b783019d1d6b5662fe"
Feb 02 21:42:22 crc kubenswrapper[4789]: I0202 21:42:22.763779 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01a4dbf7d2f219eb93cb798d6f034d38d23e5a5e159195b783019d1d6b5662fe"} err="failed to get container status \"01a4dbf7d2f219eb93cb798d6f034d38d23e5a5e159195b783019d1d6b5662fe\": rpc error: code = NotFound desc = could not find container \"01a4dbf7d2f219eb93cb798d6f034d38d23e5a5e159195b783019d1d6b5662fe\": container with ID starting with 01a4dbf7d2f219eb93cb798d6f034d38d23e5a5e159195b783019d1d6b5662fe not found: ID does not exist"
Feb 02 21:42:22 crc kubenswrapper[4789]: I0202 21:42:22.770660 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eed9773-f2f1-4d61-8b88-c0eb30620612-combined-ca-bundle\") pod \"2eed9773-f2f1-4d61-8b88-c0eb30620612\" (UID: \"2eed9773-f2f1-4d61-8b88-c0eb30620612\") "
Feb 02 21:42:22 crc kubenswrapper[4789]: I0202 21:42:22.770694 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4n89\" (UniqueName: \"kubernetes.io/projected/2eed9773-f2f1-4d61-8b88-c0eb30620612-kube-api-access-m4n89\") pod \"2eed9773-f2f1-4d61-8b88-c0eb30620612\" (UID: \"2eed9773-f2f1-4d61-8b88-c0eb30620612\") "
Feb 02 21:42:22 crc kubenswrapper[4789]: I0202 21:42:22.770727 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eed9773-f2f1-4d61-8b88-c0eb30620612-nova-metadata-tls-certs\") pod \"2eed9773-f2f1-4d61-8b88-c0eb30620612\" (UID: \"2eed9773-f2f1-4d61-8b88-c0eb30620612\") "
Feb 02 21:42:22 crc kubenswrapper[4789]: I0202 21:42:22.770780 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2eed9773-f2f1-4d61-8b88-c0eb30620612-config-data\") pod \"2eed9773-f2f1-4d61-8b88-c0eb30620612\" (UID: \"2eed9773-f2f1-4d61-8b88-c0eb30620612\") "
Feb 02 21:42:22 crc kubenswrapper[4789]: I0202 21:42:22.770976 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2eed9773-f2f1-4d61-8b88-c0eb30620612-logs\") pod \"2eed9773-f2f1-4d61-8b88-c0eb30620612\" (UID: \"2eed9773-f2f1-4d61-8b88-c0eb30620612\") "
Feb 02 21:42:22 crc kubenswrapper[4789]: I0202 21:42:22.773142 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2eed9773-f2f1-4d61-8b88-c0eb30620612-logs" (OuterVolumeSpecName: "logs") pod "2eed9773-f2f1-4d61-8b88-c0eb30620612" (UID: "2eed9773-f2f1-4d61-8b88-c0eb30620612"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 21:42:22 crc kubenswrapper[4789]: I0202 21:42:22.785832 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2eed9773-f2f1-4d61-8b88-c0eb30620612-kube-api-access-m4n89" (OuterVolumeSpecName: "kube-api-access-m4n89") pod "2eed9773-f2f1-4d61-8b88-c0eb30620612" (UID: "2eed9773-f2f1-4d61-8b88-c0eb30620612"). InnerVolumeSpecName "kube-api-access-m4n89". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 21:42:22 crc kubenswrapper[4789]: I0202 21:42:22.801504 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eed9773-f2f1-4d61-8b88-c0eb30620612-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2eed9773-f2f1-4d61-8b88-c0eb30620612" (UID: "2eed9773-f2f1-4d61-8b88-c0eb30620612"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 21:42:22 crc kubenswrapper[4789]: I0202 21:42:22.808252 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eed9773-f2f1-4d61-8b88-c0eb30620612-config-data" (OuterVolumeSpecName: "config-data") pod "2eed9773-f2f1-4d61-8b88-c0eb30620612" (UID: "2eed9773-f2f1-4d61-8b88-c0eb30620612"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 21:42:22 crc kubenswrapper[4789]: I0202 21:42:22.835057 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eed9773-f2f1-4d61-8b88-c0eb30620612-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "2eed9773-f2f1-4d61-8b88-c0eb30620612" (UID: "2eed9773-f2f1-4d61-8b88-c0eb30620612"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 21:42:22 crc kubenswrapper[4789]: I0202 21:42:22.873218 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eed9773-f2f1-4d61-8b88-c0eb30620612-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 21:42:22 crc kubenswrapper[4789]: I0202 21:42:22.873268 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4n89\" (UniqueName: \"kubernetes.io/projected/2eed9773-f2f1-4d61-8b88-c0eb30620612-kube-api-access-m4n89\") on node \"crc\" DevicePath \"\""
Feb 02 21:42:22 crc kubenswrapper[4789]: I0202 21:42:22.873284 4789 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eed9773-f2f1-4d61-8b88-c0eb30620612-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 21:42:22 crc kubenswrapper[4789]: I0202 21:42:22.873297 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2eed9773-f2f1-4d61-8b88-c0eb30620612-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 21:42:22 crc kubenswrapper[4789]: I0202 21:42:22.873308 4789 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2eed9773-f2f1-4d61-8b88-c0eb30620612-logs\") on node \"crc\" DevicePath \"\""
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.026455 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.035525 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.053076 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 02 21:42:23 crc kubenswrapper[4789]: E0202 21:42:23.053734 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eed9773-f2f1-4d61-8b88-c0eb30620612" containerName="nova-metadata-metadata"
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.053765 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eed9773-f2f1-4d61-8b88-c0eb30620612" containerName="nova-metadata-metadata"
Feb 02 21:42:23 crc kubenswrapper[4789]: E0202 21:42:23.053806 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eed9773-f2f1-4d61-8b88-c0eb30620612" containerName="nova-metadata-log"
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.053817 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eed9773-f2f1-4d61-8b88-c0eb30620612" containerName="nova-metadata-log"
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.054117 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eed9773-f2f1-4d61-8b88-c0eb30620612" containerName="nova-metadata-metadata"
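Worth noting in the teardown above: nova-metadata-metadata was sent SIGTERM at 21:42:18.908 with gracePeriod=30 and reported ContainerDied with exitCode=0 at 21:42:22.69, so it shut down cleanly in under four seconds of its 30 s budget; the earlier exitCode=143 entries (128 + 15) mark containers that died on the SIGTERM itself. A quick check of that arithmetic, with the timestamps copied from the entries above:

from datetime import datetime

def log_ts(hms: str) -> datetime:
    # Journal timestamps in this log carry no date; same-day values are fine here.
    return datetime.strptime(hms, "%H:%M:%S.%f")

sigterm_sent = log_ts("21:42:18.908115")  # "Killing container with a grace period"
died         = log_ts("21:42:22.694939")  # "Generic (PLEG): container finished" exitCode=0
print(f"stopped after {(died - sigterm_sent).total_seconds():.2f}s of a 30s grace period")

def describe_exit(code: int) -> str:
    # By POSIX convention, 128+N means "killed by signal N"; 143 is SIGTERM.
    return f"killed by signal {code - 128}" if code > 128 else f"clean exit ({code})"

print(describe_exit(143), "/", describe_exit(0))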
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.054158 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eed9773-f2f1-4d61-8b88-c0eb30620612" containerName="nova-metadata-log"
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.055748 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.058220 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.058513 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.065423 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.178685 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ba13473-b423-43a0-ab15-9d6be616cc7b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0ba13473-b423-43a0-ab15-9d6be616cc7b\") " pod="openstack/nova-metadata-0"
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.179061 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ba13473-b423-43a0-ab15-9d6be616cc7b-logs\") pod \"nova-metadata-0\" (UID: \"0ba13473-b423-43a0-ab15-9d6be616cc7b\") " pod="openstack/nova-metadata-0"
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.179099 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfzd2\" (UniqueName: \"kubernetes.io/projected/0ba13473-b423-43a0-ab15-9d6be616cc7b-kube-api-access-dfzd2\") pod \"nova-metadata-0\" (UID: \"0ba13473-b423-43a0-ab15-9d6be616cc7b\") " pod="openstack/nova-metadata-0"
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.179154 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ba13473-b423-43a0-ab15-9d6be616cc7b-config-data\") pod \"nova-metadata-0\" (UID: \"0ba13473-b423-43a0-ab15-9d6be616cc7b\") " pod="openstack/nova-metadata-0"
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.179276 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ba13473-b423-43a0-ab15-9d6be616cc7b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0ba13473-b423-43a0-ab15-9d6be616cc7b\") " pod="openstack/nova-metadata-0"
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.281013 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ba13473-b423-43a0-ab15-9d6be616cc7b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0ba13473-b423-43a0-ab15-9d6be616cc7b\") " pod="openstack/nova-metadata-0"
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.281057 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ba13473-b423-43a0-ab15-9d6be616cc7b-logs\") pod \"nova-metadata-0\" (UID: \"0ba13473-b423-43a0-ab15-9d6be616cc7b\") " pod="openstack/nova-metadata-0"
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.281076 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfzd2\" (UniqueName: \"kubernetes.io/projected/0ba13473-b423-43a0-ab15-9d6be616cc7b-kube-api-access-dfzd2\") pod \"nova-metadata-0\" (UID: \"0ba13473-b423-43a0-ab15-9d6be616cc7b\") " pod="openstack/nova-metadata-0"
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.281104 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ba13473-b423-43a0-ab15-9d6be616cc7b-config-data\") pod \"nova-metadata-0\" (UID: \"0ba13473-b423-43a0-ab15-9d6be616cc7b\") " pod="openstack/nova-metadata-0"
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.281154 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ba13473-b423-43a0-ab15-9d6be616cc7b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0ba13473-b423-43a0-ab15-9d6be616cc7b\") " pod="openstack/nova-metadata-0"
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.281545 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ba13473-b423-43a0-ab15-9d6be616cc7b-logs\") pod \"nova-metadata-0\" (UID: \"0ba13473-b423-43a0-ab15-9d6be616cc7b\") " pod="openstack/nova-metadata-0"
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.289237 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ba13473-b423-43a0-ab15-9d6be616cc7b-config-data\") pod \"nova-metadata-0\" (UID: \"0ba13473-b423-43a0-ab15-9d6be616cc7b\") " pod="openstack/nova-metadata-0"
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.294147 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ba13473-b423-43a0-ab15-9d6be616cc7b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0ba13473-b423-43a0-ab15-9d6be616cc7b\") " pod="openstack/nova-metadata-0"
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.295106 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ba13473-b423-43a0-ab15-9d6be616cc7b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0ba13473-b423-43a0-ab15-9d6be616cc7b\") " pod="openstack/nova-metadata-0"
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.300420 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfzd2\" (UniqueName: \"kubernetes.io/projected/0ba13473-b423-43a0-ab15-9d6be616cc7b-kube-api-access-dfzd2\") pod \"nova-metadata-0\" (UID: \"0ba13473-b423-43a0-ab15-9d6be616cc7b\") " pod="openstack/nova-metadata-0"
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.402753 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.413267 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.483891 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6-combined-ca-bundle\") pod \"731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6\" (UID: \"731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6\") "
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.484212 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6-config-data\") pod \"731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6\" (UID: \"731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6\") "
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.484249 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-545j4\" (UniqueName: \"kubernetes.io/projected/731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6-kube-api-access-545j4\") pod \"731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6\" (UID: \"731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6\") "
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.489515 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6-kube-api-access-545j4" (OuterVolumeSpecName: "kube-api-access-545j4") pod "731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6" (UID: "731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6"). InnerVolumeSpecName "kube-api-access-545j4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.558210 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6-config-data" (OuterVolumeSpecName: "config-data") pod "731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6" (UID: "731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.568534 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6" (UID: "731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
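Each volume in these teardowns walks the same three steps: "UnmountVolume started" (reconciler), "UnmountVolume.TearDown succeeded" (operation generator), and finally "Volume detached" (reconciler's terminal state). A sketch that flags volumes stuck mid-way through that sequence — the regexes match the literal backslash-escaped quotes the kubelet emits, and the path is an assumption:

import re

STARTED  = re.compile(r'UnmountVolume started for volume \\"([^\\]+)\\"')
DETACHED = re.compile(r'Volume detached for volume \\"([^\\]+)\\"')

def stuck_unmounts(path="kubelet.log"):  # assumed log location
    started, detached = set(), set()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            if (m := STARTED.search(line)):
                started.add(m.group(1))
            if (m := DETACHED.search(line)):
                detached.add(m.group(1))
    return started - detached  # began unmounting, never reported detached

if __name__ == "__main__":
    print(sorted(stuck_unmounts()) or "all unmounts completed")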
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.588748 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.588779 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.588789 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-545j4\" (UniqueName: \"kubernetes.io/projected/731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6-kube-api-access-545j4\") on node \"crc\" DevicePath \"\""
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.716315 4789 generic.go:334] "Generic (PLEG): container finished" podID="731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6" containerID="bbf3961f4b9988969194fdfa516605852c0d8137222ae22baf8d415f3a2897d2" exitCode=0
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.716357 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6","Type":"ContainerDied","Data":"bbf3961f4b9988969194fdfa516605852c0d8137222ae22baf8d415f3a2897d2"}
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.716424 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6","Type":"ContainerDied","Data":"61da846cae4f3215cc186b7033b344b77c8cbacb74b49107957b6a90d7a82cd1"}
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.716428 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.716489 4789 scope.go:117] "RemoveContainer" containerID="bbf3961f4b9988969194fdfa516605852c0d8137222ae22baf8d415f3a2897d2"
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.738741 4789 scope.go:117] "RemoveContainer" containerID="bbf3961f4b9988969194fdfa516605852c0d8137222ae22baf8d415f3a2897d2"
Feb 02 21:42:23 crc kubenswrapper[4789]: E0202 21:42:23.739169 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbf3961f4b9988969194fdfa516605852c0d8137222ae22baf8d415f3a2897d2\": container with ID starting with bbf3961f4b9988969194fdfa516605852c0d8137222ae22baf8d415f3a2897d2 not found: ID does not exist" containerID="bbf3961f4b9988969194fdfa516605852c0d8137222ae22baf8d415f3a2897d2"
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.739202 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbf3961f4b9988969194fdfa516605852c0d8137222ae22baf8d415f3a2897d2"} err="failed to get container status \"bbf3961f4b9988969194fdfa516605852c0d8137222ae22baf8d415f3a2897d2\": rpc error: code = NotFound desc = could not find container \"bbf3961f4b9988969194fdfa516605852c0d8137222ae22baf8d415f3a2897d2\": container with ID starting with bbf3961f4b9988969194fdfa516605852c0d8137222ae22baf8d415f3a2897d2 not found: ID does not exist"
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.758281 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.768706 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.778044 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 02 21:42:23 crc kubenswrapper[4789]: E0202 21:42:23.778517 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6" containerName="nova-scheduler-scheduler"
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.778539 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6" containerName="nova-scheduler-scheduler"
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.778762 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6" containerName="nova-scheduler-scheduler"
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.779378 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.786446 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.789195 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.893566 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg4h4\" (UniqueName: \"kubernetes.io/projected/a0ceeffe-1326-4d2d-ab85-dbc02869bee1-kube-api-access-xg4h4\") pod \"nova-scheduler-0\" (UID: \"a0ceeffe-1326-4d2d-ab85-dbc02869bee1\") " pod="openstack/nova-scheduler-0"
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.893981 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0ceeffe-1326-4d2d-ab85-dbc02869bee1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a0ceeffe-1326-4d2d-ab85-dbc02869bee1\") " pod="openstack/nova-scheduler-0"
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.894324 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0ceeffe-1326-4d2d-ab85-dbc02869bee1-config-data\") pod \"nova-scheduler-0\" (UID: \"a0ceeffe-1326-4d2d-ab85-dbc02869bee1\") " pod="openstack/nova-scheduler-0"
Feb 02 21:42:23 crc kubenswrapper[4789]: W0202 21:42:23.902102 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ba13473_b423_43a0_ab15_9d6be616cc7b.slice/crio-0c3c5af28fbda31f21533271ca105636c61287445c3003313b9bbf88a863fc21 WatchSource:0}: Error finding container 0c3c5af28fbda31f21533271ca105636c61287445c3003313b9bbf88a863fc21: Status 404 returned error can't find the container with id 0c3c5af28fbda31f21533271ca105636c61287445c3003313b9bbf88a863fc21
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.912201 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.996046 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg4h4\" (UniqueName: \"kubernetes.io/projected/a0ceeffe-1326-4d2d-ab85-dbc02869bee1-kube-api-access-xg4h4\") pod \"nova-scheduler-0\" (UID: \"a0ceeffe-1326-4d2d-ab85-dbc02869bee1\") " pod="openstack/nova-scheduler-0"
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.996728 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0ceeffe-1326-4d2d-ab85-dbc02869bee1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a0ceeffe-1326-4d2d-ab85-dbc02869bee1\") " pod="openstack/nova-scheduler-0"
Feb 02 21:42:23 crc kubenswrapper[4789]: I0202 21:42:23.996995 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0ceeffe-1326-4d2d-ab85-dbc02869bee1-config-data\") pod \"nova-scheduler-0\" (UID: \"a0ceeffe-1326-4d2d-ab85-dbc02869bee1\") " pod="openstack/nova-scheduler-0"
Feb 02 21:42:24 crc kubenswrapper[4789]: I0202 21:42:24.002979 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0ceeffe-1326-4d2d-ab85-dbc02869bee1-config-data\") pod \"nova-scheduler-0\" (UID: \"a0ceeffe-1326-4d2d-ab85-dbc02869bee1\") " pod="openstack/nova-scheduler-0"
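The mirror image of the unmount flow now runs for the replacement pods: VerifyControllerAttachedVolume, then "MountVolume started", then "MountVolume.SetUp succeeded" per volume. A sketch that inventories which volumes each pod actually mounted (same caveats as before: assumed path, regex keyed to the escaped quoting in these entries):

import re
from collections import defaultdict

SETUP = re.compile(r'MountVolume\.SetUp succeeded for volume \\"([^\\]+)\\".*pod="([^"]+)"')

def mounted_volumes(path="kubelet.log"):  # assumed log location
    vols = defaultdict(set)
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            if (m := SETUP.search(line)):
                vols[m.group(2)].add(m.group(1))
    return vols

if __name__ == "__main__":
    for pod, names in sorted(mounted_volumes().items()):
        print(pod, "->", ", ".join(sorted(names)))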
Feb 02 21:42:24 crc kubenswrapper[4789]: I0202 21:42:24.007827 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0ceeffe-1326-4d2d-ab85-dbc02869bee1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a0ceeffe-1326-4d2d-ab85-dbc02869bee1\") " pod="openstack/nova-scheduler-0"
Feb 02 21:42:24 crc kubenswrapper[4789]: I0202 21:42:24.023459 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg4h4\" (UniqueName: \"kubernetes.io/projected/a0ceeffe-1326-4d2d-ab85-dbc02869bee1-kube-api-access-xg4h4\") pod \"nova-scheduler-0\" (UID: \"a0ceeffe-1326-4d2d-ab85-dbc02869bee1\") " pod="openstack/nova-scheduler-0"
Feb 02 21:42:24 crc kubenswrapper[4789]: I0202 21:42:24.104139 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 02 21:42:24 crc kubenswrapper[4789]: I0202 21:42:24.441396 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2eed9773-f2f1-4d61-8b88-c0eb30620612" path="/var/lib/kubelet/pods/2eed9773-f2f1-4d61-8b88-c0eb30620612/volumes"
Feb 02 21:42:24 crc kubenswrapper[4789]: I0202 21:42:24.444780 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6" path="/var/lib/kubelet/pods/731fdb9f-c2ed-41a9-a94e-9a2e1e9bd1a6/volumes"
Feb 02 21:42:24 crc kubenswrapper[4789]: I0202 21:42:24.445603 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 02 21:42:24 crc kubenswrapper[4789]: I0202 21:42:24.729341 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0ba13473-b423-43a0-ab15-9d6be616cc7b","Type":"ContainerStarted","Data":"e343b555d9621789a633967b6cd533bf45c88272e650aba944e657e5737ee258"}
Feb 02 21:42:24 crc kubenswrapper[4789]: I0202 21:42:24.731538 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0ba13473-b423-43a0-ab15-9d6be616cc7b","Type":"ContainerStarted","Data":"175ef66ad8a23cf5090dc4289e18344c14f9e8edd0b77edfc81aaff9cd62283b"}
Feb 02 21:42:24 crc kubenswrapper[4789]: I0202 21:42:24.731568 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0ba13473-b423-43a0-ab15-9d6be616cc7b","Type":"ContainerStarted","Data":"0c3c5af28fbda31f21533271ca105636c61287445c3003313b9bbf88a863fc21"}
Feb 02 21:42:24 crc kubenswrapper[4789]: I0202 21:42:24.732459 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a0ceeffe-1326-4d2d-ab85-dbc02869bee1","Type":"ContainerStarted","Data":"b302cbd832ca9db939c2b0bf4835c6ec6fb237f5c200d53981557f9c42498b12"}
Feb 02 21:42:24 crc kubenswrapper[4789]: I0202 21:42:24.732499 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a0ceeffe-1326-4d2d-ab85-dbc02869bee1","Type":"ContainerStarted","Data":"af26a42fef9ae7a6ff5db400dfba0b2c06b5ebead6cd8fdc420ec28df0245212"}
Feb 02 21:42:24 crc kubenswrapper[4789]: I0202 21:42:24.752285 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.752266729 podStartE2EDuration="1.752266729s" podCreationTimestamp="2026-02-02 21:42:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:42:24.745612081 +0000 UTC m=+1365.040637100" watchObservedRunningTime="2026-02-02 21:42:24.752266729 +0000 UTC m=+1365.047291748"
Feb 02 21:42:24 crc kubenswrapper[4789]: I0202 21:42:24.776175 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.7761569160000001 podStartE2EDuration="1.776156916s" podCreationTimestamp="2026-02-02 21:42:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:42:24.762662454 +0000 UTC m=+1365.057687473" watchObservedRunningTime="2026-02-02 21:42:24.776156916 +0000 UTC m=+1365.071181935"
Feb 02 21:42:28 crc kubenswrapper[4789]: I0202 21:42:28.414878 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 02 21:42:28 crc kubenswrapper[4789]: I0202 21:42:28.415462 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 02 21:42:29 crc kubenswrapper[4789]: I0202 21:42:29.104932 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 02 21:42:30 crc kubenswrapper[4789]: I0202 21:42:30.366843 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 02 21:42:30 crc kubenswrapper[4789]: I0202 21:42:30.367029 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 02 21:42:31 crc kubenswrapper[4789]: I0202 21:42:31.382862 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1ae097e7-380b-4044-8598-abc3e1059356" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 02 21:42:31 crc kubenswrapper[4789]: I0202 21:42:31.382874 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1ae097e7-380b-4044-8598-abc3e1059356" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 02 21:42:33 crc kubenswrapper[4789]: I0202 21:42:33.414348 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 02 21:42:33 crc kubenswrapper[4789]: I0202 21:42:33.414457 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 02 21:42:34 crc kubenswrapper[4789]: I0202 21:42:34.104669 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 02 21:42:34 crc kubenswrapper[4789]: I0202 21:42:34.151683 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 02 21:42:34 crc kubenswrapper[4789]: I0202 21:42:34.427874 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0ba13473-b423-43a0-ab15-9d6be616cc7b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 02 21:42:34 crc kubenswrapper[4789]: I0202 21:42:34.427851 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0ba13473-b423-43a0-ab15-9d6be616cc7b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
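The probe traffic from 21:42:28 onward shows the normal cold-start pattern for the recreated pods: startup probes report unhealthy while the services warm up (the HTTPS GETs on :8774 and :8775 time out), flip to started within about ten seconds, and readiness then settles to ready. A last sketch that turns these SyncLoop (probe) entries into a timeline (assumed path; the journal timestamp carries no year, as in the entries above):

import re

PROBE = re.compile(
    r'^(\w{3} \d{2} \d{2}:\d{2}:\d{2}).*"SyncLoop \(probe\)" '
    r'probe="(\w+)" status="(\w*)" pod="([^"]+)"')

def probe_timeline(path="kubelet.log"):  # assumed log location
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            if (m := PROBE.search(line)):
                ts, probe, status, pod = m.groups()
                yield ts, pod, probe, status or "(empty)"

if __name__ == "__main__":
    for ts, pod, probe, status in probe_timeline():
        print(f"{ts}  {pod:38s} {probe:9s} -> {status}")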
probeType="Startup" pod="openstack/nova-metadata-0" podUID="0ba13473-b423-43a0-ab15-9d6be616cc7b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 21:42:34 crc kubenswrapper[4789]: I0202 21:42:34.908387 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 02 21:42:37 crc kubenswrapper[4789]: I0202 21:42:37.981728 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 02 21:42:40 crc kubenswrapper[4789]: I0202 21:42:40.377777 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 02 21:42:40 crc kubenswrapper[4789]: I0202 21:42:40.378396 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 02 21:42:40 crc kubenswrapper[4789]: I0202 21:42:40.378606 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 02 21:42:40 crc kubenswrapper[4789]: I0202 21:42:40.378622 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 02 21:42:40 crc kubenswrapper[4789]: I0202 21:42:40.388970 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 02 21:42:40 crc kubenswrapper[4789]: I0202 21:42:40.390273 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 02 21:42:43 crc kubenswrapper[4789]: I0202 21:42:43.422818 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 02 21:42:43 crc kubenswrapper[4789]: I0202 21:42:43.423610 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 02 21:42:43 crc kubenswrapper[4789]: I0202 21:42:43.433177 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 02 21:42:43 crc kubenswrapper[4789]: I0202 21:42:43.433745 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 02 21:43:00 crc kubenswrapper[4789]: I0202 21:43:00.771807 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nn5kg"] Feb 02 21:43:00 crc kubenswrapper[4789]: I0202 21:43:00.775298 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nn5kg" Feb 02 21:43:00 crc kubenswrapper[4789]: I0202 21:43:00.786226 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nn5kg"] Feb 02 21:43:00 crc kubenswrapper[4789]: I0202 21:43:00.910712 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bf87933-483d-4608-9fab-9f0cfa9fb326-catalog-content\") pod \"redhat-operators-nn5kg\" (UID: \"0bf87933-483d-4608-9fab-9f0cfa9fb326\") " pod="openshift-marketplace/redhat-operators-nn5kg" Feb 02 21:43:00 crc kubenswrapper[4789]: I0202 21:43:00.910798 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p9xb\" (UniqueName: \"kubernetes.io/projected/0bf87933-483d-4608-9fab-9f0cfa9fb326-kube-api-access-9p9xb\") pod \"redhat-operators-nn5kg\" (UID: \"0bf87933-483d-4608-9fab-9f0cfa9fb326\") " pod="openshift-marketplace/redhat-operators-nn5kg" Feb 02 21:43:00 crc kubenswrapper[4789]: I0202 21:43:00.911038 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bf87933-483d-4608-9fab-9f0cfa9fb326-utilities\") pod \"redhat-operators-nn5kg\" (UID: \"0bf87933-483d-4608-9fab-9f0cfa9fb326\") " pod="openshift-marketplace/redhat-operators-nn5kg" Feb 02 21:43:01 crc kubenswrapper[4789]: I0202 21:43:01.013306 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bf87933-483d-4608-9fab-9f0cfa9fb326-utilities\") pod \"redhat-operators-nn5kg\" (UID: \"0bf87933-483d-4608-9fab-9f0cfa9fb326\") " pod="openshift-marketplace/redhat-operators-nn5kg" Feb 02 21:43:01 crc kubenswrapper[4789]: I0202 21:43:01.013455 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bf87933-483d-4608-9fab-9f0cfa9fb326-catalog-content\") pod \"redhat-operators-nn5kg\" (UID: \"0bf87933-483d-4608-9fab-9f0cfa9fb326\") " pod="openshift-marketplace/redhat-operators-nn5kg" Feb 02 21:43:01 crc kubenswrapper[4789]: I0202 21:43:01.013480 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p9xb\" (UniqueName: \"kubernetes.io/projected/0bf87933-483d-4608-9fab-9f0cfa9fb326-kube-api-access-9p9xb\") pod \"redhat-operators-nn5kg\" (UID: \"0bf87933-483d-4608-9fab-9f0cfa9fb326\") " pod="openshift-marketplace/redhat-operators-nn5kg" Feb 02 21:43:01 crc kubenswrapper[4789]: I0202 21:43:01.013877 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bf87933-483d-4608-9fab-9f0cfa9fb326-utilities\") pod \"redhat-operators-nn5kg\" (UID: \"0bf87933-483d-4608-9fab-9f0cfa9fb326\") " pod="openshift-marketplace/redhat-operators-nn5kg" Feb 02 21:43:01 crc kubenswrapper[4789]: I0202 21:43:01.014098 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bf87933-483d-4608-9fab-9f0cfa9fb326-catalog-content\") pod \"redhat-operators-nn5kg\" (UID: \"0bf87933-483d-4608-9fab-9f0cfa9fb326\") " pod="openshift-marketplace/redhat-operators-nn5kg" Feb 02 21:43:01 crc kubenswrapper[4789]: I0202 21:43:01.034511 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-9p9xb\" (UniqueName: \"kubernetes.io/projected/0bf87933-483d-4608-9fab-9f0cfa9fb326-kube-api-access-9p9xb\") pod \"redhat-operators-nn5kg\" (UID: \"0bf87933-483d-4608-9fab-9f0cfa9fb326\") " pod="openshift-marketplace/redhat-operators-nn5kg" Feb 02 21:43:01 crc kubenswrapper[4789]: I0202 21:43:01.097882 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nn5kg" Feb 02 21:43:01 crc kubenswrapper[4789]: I0202 21:43:01.584649 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nn5kg"] Feb 02 21:43:02 crc kubenswrapper[4789]: I0202 21:43:02.194393 4789 generic.go:334] "Generic (PLEG): container finished" podID="0bf87933-483d-4608-9fab-9f0cfa9fb326" containerID="0ecc6d0a811029b0d3a588b5bb03e52c54a14bb62d8653b1cae84cfe94675a80" exitCode=0 Feb 02 21:43:02 crc kubenswrapper[4789]: I0202 21:43:02.194451 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nn5kg" event={"ID":"0bf87933-483d-4608-9fab-9f0cfa9fb326","Type":"ContainerDied","Data":"0ecc6d0a811029b0d3a588b5bb03e52c54a14bb62d8653b1cae84cfe94675a80"} Feb 02 21:43:02 crc kubenswrapper[4789]: I0202 21:43:02.194651 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nn5kg" event={"ID":"0bf87933-483d-4608-9fab-9f0cfa9fb326","Type":"ContainerStarted","Data":"e7c3a1595ce13bbd3297b04c5762f5b858b010a5368c28ff7e0b48b29e55dbbb"} Feb 02 21:43:02 crc kubenswrapper[4789]: I0202 21:43:02.196323 4789 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 21:43:03 crc kubenswrapper[4789]: I0202 21:43:03.207778 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nn5kg" event={"ID":"0bf87933-483d-4608-9fab-9f0cfa9fb326","Type":"ContainerStarted","Data":"2be82df2b0e9a95d9535c329c4aac2b5684a8a513a868a9aef3785a4d0ed3c98"} Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.150886 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-xzh8f"] Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.160471 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-xzh8f"] Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.213761 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-c430-account-create-update-gbnml"] Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.228351 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-c430-account-create-update-gbnml"] Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.263874 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-h7zb6"] Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.265529 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-h7zb6" Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.279520 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.283770 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.283978 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="2e82084e-a68b-4e41-9d23-8888ab97e53e" containerName="openstackclient" containerID="cri-o://80ee62a2d791f82f667128eb01b609adcf2ee71d4a2647cc5abe16482c589540" gracePeriod=2 Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.323888 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.351906 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-h7zb6"] Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.375910 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.405643 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/306f2aaf-92ed-4c14-92f4-a970a8240771-operator-scripts\") pod \"root-account-create-update-h7zb6\" (UID: \"306f2aaf-92ed-4c14-92f4-a970a8240771\") " pod="openstack/root-account-create-update-h7zb6" Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.405731 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x2b7\" (UniqueName: \"kubernetes.io/projected/306f2aaf-92ed-4c14-92f4-a970a8240771-kube-api-access-8x2b7\") pod \"root-account-create-update-h7zb6\" (UID: \"306f2aaf-92ed-4c14-92f4-a970a8240771\") " pod="openstack/root-account-create-update-h7zb6" Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.408218 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-bf56-account-create-update-dp5x9"] Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.417279 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-bf56-account-create-update-dp5x9"] Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.467425 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8592ef51-732b-4428-adcb-1da5d2c7b2e8" path="/var/lib/kubelet/pods/8592ef51-732b-4428-adcb-1da5d2c7b2e8/volumes" Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.468050 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a32183f3-d42d-459f-8fd6-268d398cbb82" path="/var/lib/kubelet/pods/a32183f3-d42d-459f-8fd6-268d398cbb82/volumes" Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.468568 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e31a5d32-604d-4e80-a9f2-0f7f8f3bd48a" path="/var/lib/kubelet/pods/e31a5d32-604d-4e80-a9f2-0f7f8f3bd48a/volumes" Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.469125 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.469150 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-c430-account-create-update-mtkw4"] Feb 02 21:43:04 crc kubenswrapper[4789]: E0202 
21:43:04.469449 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e82084e-a68b-4e41-9d23-8888ab97e53e" containerName="openstackclient" Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.469468 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e82084e-a68b-4e41-9d23-8888ab97e53e" containerName="openstackclient" Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.469657 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e82084e-a68b-4e41-9d23-8888ab97e53e" containerName="openstackclient" Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.470246 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-c430-account-create-update-mtkw4"] Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.470263 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-32d6-account-create-update-5tgfk"] Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.470382 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-c430-account-create-update-mtkw4" Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.471351 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="c55d3f19-edf8-4cff-ab70-495607e77798" containerName="openstack-network-exporter" containerID="cri-o://e01772d808decb3380bc4d332c0752aeaf67cb8f5d0c5c9b2c8ae0ab15d89550" gracePeriod=300 Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.480312 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-32d6-account-create-update-5tgfk"] Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.481851 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.512408 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x2b7\" (UniqueName: \"kubernetes.io/projected/306f2aaf-92ed-4c14-92f4-a970a8240771-kube-api-access-8x2b7\") pod \"root-account-create-update-h7zb6\" (UID: \"306f2aaf-92ed-4c14-92f4-a970a8240771\") " pod="openstack/root-account-create-update-h7zb6" Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.512848 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/306f2aaf-92ed-4c14-92f4-a970a8240771-operator-scripts\") pod \"root-account-create-update-h7zb6\" (UID: \"306f2aaf-92ed-4c14-92f4-a970a8240771\") " pod="openstack/root-account-create-update-h7zb6" Feb 02 21:43:04 crc kubenswrapper[4789]: E0202 21:43:04.513010 4789 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Feb 02 21:43:04 crc kubenswrapper[4789]: E0202 21:43:04.513071 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b4db4b23-dae0-42a5-ad47-3336073d0b6a-config-data podName:b4db4b23-dae0-42a5-ad47-3336073d0b6a nodeName:}" failed. No retries permitted until 2026-02-02 21:43:05.013050593 +0000 UTC m=+1405.308075612 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b4db4b23-dae0-42a5-ad47-3336073d0b6a-config-data") pod "rabbitmq-server-0" (UID: "b4db4b23-dae0-42a5-ad47-3336073d0b6a") : configmap "rabbitmq-config-data" not found Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.517678 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/306f2aaf-92ed-4c14-92f4-a970a8240771-operator-scripts\") pod \"root-account-create-update-h7zb6\" (UID: \"306f2aaf-92ed-4c14-92f4-a970a8240771\") " pod="openstack/root-account-create-update-h7zb6" Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.553311 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x2b7\" (UniqueName: \"kubernetes.io/projected/306f2aaf-92ed-4c14-92f4-a970a8240771-kube-api-access-8x2b7\") pod \"root-account-create-update-h7zb6\" (UID: \"306f2aaf-92ed-4c14-92f4-a970a8240771\") " pod="openstack/root-account-create-update-h7zb6" Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.629647 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/151bf5b0-b174-42a9-8a0a-f650d74ec2a3-operator-scripts\") pod \"barbican-c430-account-create-update-mtkw4\" (UID: \"151bf5b0-b174-42a9-8a0a-f650d74ec2a3\") " pod="openstack/barbican-c430-account-create-update-mtkw4" Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.629763 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkqgw\" (UniqueName: \"kubernetes.io/projected/151bf5b0-b174-42a9-8a0a-f650d74ec2a3-kube-api-access-nkqgw\") pod \"barbican-c430-account-create-update-mtkw4\" (UID: \"151bf5b0-b174-42a9-8a0a-f650d74ec2a3\") " pod="openstack/barbican-c430-account-create-update-mtkw4" Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.632706 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-h7zb6" Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.651407 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="c55d3f19-edf8-4cff-ab70-495607e77798" containerName="ovsdbserver-sb" containerID="cri-o://7cf11c42fa6eee3581592e7cf6d8ad9c5bdb09ef4d82cebd87fe73a6989bc478" gracePeriod=300 Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.711652 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-sxqwc"] Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.733691 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/151bf5b0-b174-42a9-8a0a-f650d74ec2a3-operator-scripts\") pod \"barbican-c430-account-create-update-mtkw4\" (UID: \"151bf5b0-b174-42a9-8a0a-f650d74ec2a3\") " pod="openstack/barbican-c430-account-create-update-mtkw4" Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.733796 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkqgw\" (UniqueName: \"kubernetes.io/projected/151bf5b0-b174-42a9-8a0a-f650d74ec2a3-kube-api-access-nkqgw\") pod \"barbican-c430-account-create-update-mtkw4\" (UID: \"151bf5b0-b174-42a9-8a0a-f650d74ec2a3\") " pod="openstack/barbican-c430-account-create-update-mtkw4" Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.734798 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/151bf5b0-b174-42a9-8a0a-f650d74ec2a3-operator-scripts\") pod \"barbican-c430-account-create-update-mtkw4\" (UID: \"151bf5b0-b174-42a9-8a0a-f650d74ec2a3\") " pod="openstack/barbican-c430-account-create-update-mtkw4" Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.751790 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-sxqwc"] Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.770444 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkqgw\" (UniqueName: \"kubernetes.io/projected/151bf5b0-b174-42a9-8a0a-f650d74ec2a3-kube-api-access-nkqgw\") pod \"barbican-c430-account-create-update-mtkw4\" (UID: \"151bf5b0-b174-42a9-8a0a-f650d74ec2a3\") " pod="openstack/barbican-c430-account-create-update-mtkw4" Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.771736 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.771966 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="ab56a6da-6187-4fa6-bd4e-93046de2d432" containerName="ovn-northd" containerID="cri-o://9404edbdc9c7a81d7c48cab8b8c60b1fc5de57f009d5e80c304dd34c2eae41c2" gracePeriod=30 Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.772338 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="ab56a6da-6187-4fa6-bd4e-93046de2d432" containerName="openstack-network-exporter" containerID="cri-o://f37965943ec7625f3192bcaac3c01b17a18ceddae04469351da0a2114b7fe47f" gracePeriod=30 Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.796549 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-mc8z9"] Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.820963 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-c430-account-create-update-mtkw4" Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.824085 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-mc8z9"] Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.872709 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-9127-account-create-update-jb8z8"] Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.896046 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-9127-account-create-update-jb8z8"] Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.922476 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-ccdfw"] Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.931646 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-ccdfw"] Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.945205 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-ac1d-account-create-update-rvstr"] Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.947184 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ac1d-account-create-update-rvstr" Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.955554 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 02 21:43:04 crc kubenswrapper[4789]: I0202 21:43:04.988522 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ac1d-account-create-update-rvstr"] Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.039052 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1821366b-85cb-419f-9c57-9014300724be-operator-scripts\") pod \"nova-api-ac1d-account-create-update-rvstr\" (UID: \"1821366b-85cb-419f-9c57-9014300724be\") " pod="openstack/nova-api-ac1d-account-create-update-rvstr" Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.039259 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x454g\" (UniqueName: \"kubernetes.io/projected/1821366b-85cb-419f-9c57-9014300724be-kube-api-access-x454g\") pod \"nova-api-ac1d-account-create-update-rvstr\" (UID: \"1821366b-85cb-419f-9c57-9014300724be\") " pod="openstack/nova-api-ac1d-account-create-update-rvstr" Feb 02 21:43:05 crc kubenswrapper[4789]: E0202 21:43:05.039422 4789 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Feb 02 21:43:05 crc kubenswrapper[4789]: E0202 21:43:05.039469 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b4db4b23-dae0-42a5-ad47-3336073d0b6a-config-data podName:b4db4b23-dae0-42a5-ad47-3336073d0b6a nodeName:}" failed. No retries permitted until 2026-02-02 21:43:06.039455399 +0000 UTC m=+1406.334480418 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b4db4b23-dae0-42a5-ad47-3336073d0b6a-config-data") pod "rabbitmq-server-0" (UID: "b4db4b23-dae0-42a5-ad47-3336073d0b6a") : configmap "rabbitmq-config-data" not found Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.136655 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-725d-account-create-update-6nrvh"] Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.139272 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-725d-account-create-update-6nrvh" Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.144922 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.175241 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x454g\" (UniqueName: \"kubernetes.io/projected/1821366b-85cb-419f-9c57-9014300724be-kube-api-access-x454g\") pod \"nova-api-ac1d-account-create-update-rvstr\" (UID: \"1821366b-85cb-419f-9c57-9014300724be\") " pod="openstack/nova-api-ac1d-account-create-update-rvstr" Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.175344 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1821366b-85cb-419f-9c57-9014300724be-operator-scripts\") pod \"nova-api-ac1d-account-create-update-rvstr\" (UID: \"1821366b-85cb-419f-9c57-9014300724be\") " pod="openstack/nova-api-ac1d-account-create-update-rvstr" Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.176195 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1821366b-85cb-419f-9c57-9014300724be-operator-scripts\") pod \"nova-api-ac1d-account-create-update-rvstr\" (UID: \"1821366b-85cb-419f-9c57-9014300724be\") " pod="openstack/nova-api-ac1d-account-create-update-rvstr" Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.184398 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-3f0a-account-create-update-qc5vb"] Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.237299 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x454g\" (UniqueName: \"kubernetes.io/projected/1821366b-85cb-419f-9c57-9014300724be-kube-api-access-x454g\") pod \"nova-api-ac1d-account-create-update-rvstr\" (UID: \"1821366b-85cb-419f-9c57-9014300724be\") " pod="openstack/nova-api-ac1d-account-create-update-rvstr" Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.247808 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-725d-account-create-update-6nrvh"] Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.255884 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3f0a-account-create-update-qc5vb" Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.274300 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.276350 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef1040d3-c638-4098-a2ef-ce507371853e-operator-scripts\") pod \"nova-cell1-3f0a-account-create-update-qc5vb\" (UID: \"ef1040d3-c638-4098-a2ef-ce507371853e\") " pod="openstack/nova-cell1-3f0a-account-create-update-qc5vb" Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.276432 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hw6d\" (UniqueName: \"kubernetes.io/projected/c7b70ce2-ef37-4547-9c3c-7be3ec4b02c8-kube-api-access-2hw6d\") pod \"nova-cell0-725d-account-create-update-6nrvh\" (UID: \"c7b70ce2-ef37-4547-9c3c-7be3ec4b02c8\") " pod="openstack/nova-cell0-725d-account-create-update-6nrvh" Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.276540 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7b70ce2-ef37-4547-9c3c-7be3ec4b02c8-operator-scripts\") pod \"nova-cell0-725d-account-create-update-6nrvh\" (UID: \"c7b70ce2-ef37-4547-9c3c-7be3ec4b02c8\") " pod="openstack/nova-cell0-725d-account-create-update-6nrvh" Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.276563 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd4gr\" (UniqueName: \"kubernetes.io/projected/ef1040d3-c638-4098-a2ef-ce507371853e-kube-api-access-xd4gr\") pod \"nova-cell1-3f0a-account-create-update-qc5vb\" (UID: \"ef1040d3-c638-4098-a2ef-ce507371853e\") " pod="openstack/nova-cell1-3f0a-account-create-update-qc5vb" Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.322094 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-ac1d-account-create-update-rvstr" Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.349267 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3f0a-account-create-update-qc5vb"] Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.355937 4789 generic.go:334] "Generic (PLEG): container finished" podID="c55d3f19-edf8-4cff-ab70-495607e77798" containerID="e01772d808decb3380bc4d332c0752aeaf67cb8f5d0c5c9b2c8ae0ab15d89550" exitCode=2 Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.356031 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c55d3f19-edf8-4cff-ab70-495607e77798","Type":"ContainerDied","Data":"e01772d808decb3380bc4d332c0752aeaf67cb8f5d0c5c9b2c8ae0ab15d89550"} Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.382060 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hw6d\" (UniqueName: \"kubernetes.io/projected/c7b70ce2-ef37-4547-9c3c-7be3ec4b02c8-kube-api-access-2hw6d\") pod \"nova-cell0-725d-account-create-update-6nrvh\" (UID: \"c7b70ce2-ef37-4547-9c3c-7be3ec4b02c8\") " pod="openstack/nova-cell0-725d-account-create-update-6nrvh" Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.382191 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7b70ce2-ef37-4547-9c3c-7be3ec4b02c8-operator-scripts\") pod \"nova-cell0-725d-account-create-update-6nrvh\" (UID: \"c7b70ce2-ef37-4547-9c3c-7be3ec4b02c8\") " pod="openstack/nova-cell0-725d-account-create-update-6nrvh" Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.382217 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd4gr\" (UniqueName: \"kubernetes.io/projected/ef1040d3-c638-4098-a2ef-ce507371853e-kube-api-access-xd4gr\") pod \"nova-cell1-3f0a-account-create-update-qc5vb\" (UID: \"ef1040d3-c638-4098-a2ef-ce507371853e\") " pod="openstack/nova-cell1-3f0a-account-create-update-qc5vb" Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.382250 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef1040d3-c638-4098-a2ef-ce507371853e-operator-scripts\") pod \"nova-cell1-3f0a-account-create-update-qc5vb\" (UID: \"ef1040d3-c638-4098-a2ef-ce507371853e\") " pod="openstack/nova-cell1-3f0a-account-create-update-qc5vb" Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.382916 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef1040d3-c638-4098-a2ef-ce507371853e-operator-scripts\") pod \"nova-cell1-3f0a-account-create-update-qc5vb\" (UID: \"ef1040d3-c638-4098-a2ef-ce507371853e\") " pod="openstack/nova-cell1-3f0a-account-create-update-qc5vb" Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.383616 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7b70ce2-ef37-4547-9c3c-7be3ec4b02c8-operator-scripts\") pod \"nova-cell0-725d-account-create-update-6nrvh\" (UID: \"c7b70ce2-ef37-4547-9c3c-7be3ec4b02c8\") " pod="openstack/nova-cell0-725d-account-create-update-6nrvh" Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.387639 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-d5hwz"] Feb 02 21:43:05 crc 
kubenswrapper[4789]: I0202 21:43:05.387869 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-d5hwz" podUID="d4127fa0-de5d-43ce-b257-46b80eecd670" containerName="openstack-network-exporter" containerID="cri-o://d4af60a82d31c25419cd380401fc674cf2e82d663ec23e3513afa5060752b0ed" gracePeriod=30 Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.411704 4789 generic.go:334] "Generic (PLEG): container finished" podID="ab56a6da-6187-4fa6-bd4e-93046de2d432" containerID="f37965943ec7625f3192bcaac3c01b17a18ceddae04469351da0a2114b7fe47f" exitCode=2 Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.411815 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ab56a6da-6187-4fa6-bd4e-93046de2d432","Type":"ContainerDied","Data":"f37965943ec7625f3192bcaac3c01b17a18ceddae04469351da0a2114b7fe47f"} Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.415950 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-gjls4"] Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.447316 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.452956 4789 generic.go:334] "Generic (PLEG): container finished" podID="0bf87933-483d-4608-9fab-9f0cfa9fb326" containerID="2be82df2b0e9a95d9535c329c4aac2b5684a8a513a868a9aef3785a4d0ed3c98" exitCode=0 Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.452994 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nn5kg" event={"ID":"0bf87933-483d-4608-9fab-9f0cfa9fb326","Type":"ContainerDied","Data":"2be82df2b0e9a95d9535c329c4aac2b5684a8a513a868a9aef3785a4d0ed3c98"} Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.471755 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hw6d\" (UniqueName: \"kubernetes.io/projected/c7b70ce2-ef37-4547-9c3c-7be3ec4b02c8-kube-api-access-2hw6d\") pod \"nova-cell0-725d-account-create-update-6nrvh\" (UID: \"c7b70ce2-ef37-4547-9c3c-7be3ec4b02c8\") " pod="openstack/nova-cell0-725d-account-create-update-6nrvh" Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.480085 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd4gr\" (UniqueName: \"kubernetes.io/projected/ef1040d3-c638-4098-a2ef-ce507371853e-kube-api-access-xd4gr\") pod \"nova-cell1-3f0a-account-create-update-qc5vb\" (UID: \"ef1040d3-c638-4098-a2ef-ce507371853e\") " pod="openstack/nova-cell1-3f0a-account-create-update-qc5vb" Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.498852 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-tjn59"] Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.537886 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-rvwgc"] Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.541064 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-725d-account-create-update-6nrvh" Feb 02 21:43:05 crc kubenswrapper[4789]: E0202 21:43:05.589957 4789 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Feb 02 21:43:05 crc kubenswrapper[4789]: E0202 21:43:05.590013 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b8917d54-451e-4a56-9e8a-142bb5db17e1-config-data podName:b8917d54-451e-4a56-9e8a-142bb5db17e1 nodeName:}" failed. No retries permitted until 2026-02-02 21:43:06.089996149 +0000 UTC m=+1406.385021158 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b8917d54-451e-4a56-9e8a-142bb5db17e1-config-data") pod "rabbitmq-cell1-server-0" (UID: "b8917d54-451e-4a56-9e8a-142bb5db17e1") : configmap "rabbitmq-cell1-config-data" not found Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.613898 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-rvwgc"] Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.648797 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-e597-account-create-update-rz7c6"] Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.671297 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3f0a-account-create-update-qc5vb" Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.677250 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-e597-account-create-update-rz7c6"] Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.745925 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-5kfhw"] Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.759721 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-5kfhw"] Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.789247 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-ac1d-account-create-update-7qrx7"] Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.815543 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-ac1d-account-create-update-7qrx7"] Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.828293 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-725d-account-create-update-wv4nx"] Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.837727 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.838038 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="24fb18f4-7a0f-4ae5-9104-e7dc45a479ff" containerName="glance-log" containerID="cri-o://8d41fcf5f05241ca690bf9be181cdbc0af2afc9c357aaaa7b133a7a3685d2601" gracePeriod=30 Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.838509 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="24fb18f4-7a0f-4ae5-9104-e7dc45a479ff" containerName="glance-httpd" containerID="cri-o://a473f8d31f1d21a7c2b382a1e23b8b88890e3aa22648e9737f24020491949fe0" gracePeriod=30 Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.865251 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-725d-account-create-update-wv4nx"] Feb 02 
21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.898140 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.899128 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="01c5293c-f7b0-4141-99a7-e423de507b87" containerName="openstack-network-exporter" containerID="cri-o://c302d40717f0c425b6e65f87b401026a5061ab6e38b1f75577a83208d8771c00" gracePeriod=300 Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.915183 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-3f0a-account-create-update-4kw96"] Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.932875 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-3f0a-account-create-update-4kw96"] Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.962862 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-68d9498c68-84jcz"] Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.964604 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-68d9498c68-84jcz" podUID="349cede5-331c-4454-8c9c-fda8fe886f07" containerName="placement-log" containerID="cri-o://7594027e1aa66be1d86466bb05745dd33d3b9a0771c64f3b195b5d3c4ef5fbca" gracePeriod=30 Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.965046 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-68d9498c68-84jcz" podUID="349cede5-331c-4454-8c9c-fda8fe886f07" containerName="placement-api" containerID="cri-o://40a59db16d790bc9ade9d424000123015ece03fbc62bfe3a010f70a44b900736" gracePeriod=30 Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.992920 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.993211 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="3bb81567-8536-4275-ab0e-a003ef904230" containerName="glance-log" containerID="cri-o://ce9ef55c9302edded2a55530533656268a3c7b21b0ae936aae0892ef6e043554" gracePeriod=30 Feb 02 21:43:05 crc kubenswrapper[4789]: I0202 21:43:05.993332 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="3bb81567-8536-4275-ab0e-a003ef904230" containerName="glance-httpd" containerID="cri-o://d9549a00930229585c1a660c46c1ee179871330062dec64c5947fd34ad7860f5" gracePeriod=30 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.013487 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-r4rd5"] Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.014049 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-r4rd5" podUID="7cd6e7ff-266d-4288-9df4-dc22dbe5f19d" containerName="dnsmasq-dns" containerID="cri-o://b576e41ea89fdc8a7019e121b2fb6790b3127f8d4fea54dd5986dfa02c0ad849" gracePeriod=10 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.056188 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="01c5293c-f7b0-4141-99a7-e423de507b87" containerName="ovsdbserver-nb" containerID="cri-o://ecfa06e359801169bdd06bd88548fc6c7999a73aea8eb2d73c459b8201ac6223" gracePeriod=300 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 
21:43:06.087169 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-25sxm"] Feb 02 21:43:06 crc kubenswrapper[4789]: E0202 21:43:06.106744 4789 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Feb 02 21:43:06 crc kubenswrapper[4789]: E0202 21:43:06.106802 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b4db4b23-dae0-42a5-ad47-3336073d0b6a-config-data podName:b4db4b23-dae0-42a5-ad47-3336073d0b6a nodeName:}" failed. No retries permitted until 2026-02-02 21:43:08.106788994 +0000 UTC m=+1408.401814013 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b4db4b23-dae0-42a5-ad47-3336073d0b6a-config-data") pod "rabbitmq-server-0" (UID: "b4db4b23-dae0-42a5-ad47-3336073d0b6a") : configmap "rabbitmq-config-data" not found Feb 02 21:43:06 crc kubenswrapper[4789]: E0202 21:43:06.107098 4789 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Feb 02 21:43:06 crc kubenswrapper[4789]: E0202 21:43:06.107121 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b8917d54-451e-4a56-9e8a-142bb5db17e1-config-data podName:b8917d54-451e-4a56-9e8a-142bb5db17e1 nodeName:}" failed. No retries permitted until 2026-02-02 21:43:07.107114343 +0000 UTC m=+1407.402139362 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b8917d54-451e-4a56-9e8a-142bb5db17e1-config-data") pod "rabbitmq-cell1-server-0" (UID: "b8917d54-451e-4a56-9e8a-142bb5db17e1") : configmap "rabbitmq-cell1-config-data" not found Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.132628 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.133114 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="account-server" containerID="cri-o://db66ce76b54133027343e52fa4a37bee9603c2a78eccea429cb9107f7f66533b" gracePeriod=30 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.133485 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="swift-recon-cron" containerID="cri-o://19152882f397a8eaf801b2e8d8fd5858677ede37b6cfd35d02fe8847efc8de27" gracePeriod=30 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.133549 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="rsync" containerID="cri-o://758668fe2c5ee9470a7c3aa0b9a80c8ff6b3ee015da4b7aab90845bdc8131fbe" gracePeriod=30 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.133599 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="object-expirer" containerID="cri-o://772b32b4a568764e9d52dc458b0ac79908b73b42aa7c0ab429a6e69ef36ff4ee" gracePeriod=30 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.133631 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" 
containerName="object-updater" containerID="cri-o://81a1db9e6f95967f7398c2d9e33aef20a4ebd27dac4bde8ca54c1d2cb9e32588" gracePeriod=30 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.133659 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="object-auditor" containerID="cri-o://292bcc186a04274a666bd4bca60221734a4bf42019919ba532cfde2503636ddb" gracePeriod=30 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.133685 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="object-replicator" containerID="cri-o://1e6fc4897376cc9d269976f61acf3f0cc76fb66b261f7e18fb05f5f9f439d27d" gracePeriod=30 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.133712 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="object-server" containerID="cri-o://41f66ea30afde5a33d387e2cc7b5c5ed11aef0e66a8afd458c8af299945c2460" gracePeriod=30 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.133738 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="container-updater" containerID="cri-o://7a20dacf9652208f7b99bf2a1079fa1a4eb150591b3740a517f85585c21a53d1" gracePeriod=30 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.133765 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="container-auditor" containerID="cri-o://aab045fa01e8633951d3b23cb6099a13479fc7bde9e851b10aeb53ad724f1a5a" gracePeriod=30 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.133795 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="container-replicator" containerID="cri-o://d8b8973838965c20503722920a92fa3f55adad61b2b29d0ad5b46e04847ba642" gracePeriod=30 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.133821 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="container-server" containerID="cri-o://f8710e800cb558add663bfff070701d51801997c411687aea039144baf3f407d" gracePeriod=30 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.133849 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="account-reaper" containerID="cri-o://b07c3c791de729e8c85f1895c49db2a43d74603b713f577900b8371d9d871050" gracePeriod=30 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.133877 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="account-auditor" containerID="cri-o://b2a613095dfded30ccf9e469a7904687f82e0e1076df8bb3c12d61ae91f09cbb" gracePeriod=30 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.133916 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="account-replicator" 
containerID="cri-o://dc1d8d39fd0b72fbfd8a3196945369271e6997b06ed178e120be5a8c661363c0" gracePeriod=30 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.166316 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-25sxm"] Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.189828 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-q8pr6"] Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.232784 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-q8pr6"] Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.355845 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-tjn59" podUID="0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1" containerName="ovs-vswitchd" containerID="cri-o://6c5300b0e812632a91b74ba2ab909199288368ea0a90d2cfd2fd7fbf3551f6b6" gracePeriod=30 Feb 02 21:43:06 crc kubenswrapper[4789]: E0202 21:43:06.361269 4789 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Feb 02 21:43:06 crc kubenswrapper[4789]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Feb 02 21:43:06 crc kubenswrapper[4789]: + source /usr/local/bin/container-scripts/functions Feb 02 21:43:06 crc kubenswrapper[4789]: ++ OVNBridge=br-int Feb 02 21:43:06 crc kubenswrapper[4789]: ++ OVNRemote=tcp:localhost:6642 Feb 02 21:43:06 crc kubenswrapper[4789]: ++ OVNEncapType=geneve Feb 02 21:43:06 crc kubenswrapper[4789]: ++ OVNAvailabilityZones= Feb 02 21:43:06 crc kubenswrapper[4789]: ++ EnableChassisAsGateway=true Feb 02 21:43:06 crc kubenswrapper[4789]: ++ PhysicalNetworks= Feb 02 21:43:06 crc kubenswrapper[4789]: ++ OVNHostName= Feb 02 21:43:06 crc kubenswrapper[4789]: ++ DB_FILE=/etc/openvswitch/conf.db Feb 02 21:43:06 crc kubenswrapper[4789]: ++ ovs_dir=/var/lib/openvswitch Feb 02 21:43:06 crc kubenswrapper[4789]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Feb 02 21:43:06 crc kubenswrapper[4789]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Feb 02 21:43:06 crc kubenswrapper[4789]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 02 21:43:06 crc kubenswrapper[4789]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 02 21:43:06 crc kubenswrapper[4789]: + sleep 0.5 Feb 02 21:43:06 crc kubenswrapper[4789]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 02 21:43:06 crc kubenswrapper[4789]: + cleanup_ovsdb_server_semaphore Feb 02 21:43:06 crc kubenswrapper[4789]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 02 21:43:06 crc kubenswrapper[4789]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Feb 02 21:43:06 crc kubenswrapper[4789]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-tjn59" message=< Feb 02 21:43:06 crc kubenswrapper[4789]: Exiting ovsdb-server (5) ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Feb 02 21:43:06 crc kubenswrapper[4789]: + source /usr/local/bin/container-scripts/functions Feb 02 21:43:06 crc kubenswrapper[4789]: ++ OVNBridge=br-int Feb 02 21:43:06 crc kubenswrapper[4789]: ++ OVNRemote=tcp:localhost:6642 Feb 02 21:43:06 crc kubenswrapper[4789]: ++ OVNEncapType=geneve Feb 02 21:43:06 crc kubenswrapper[4789]: ++ OVNAvailabilityZones= Feb 02 21:43:06 crc kubenswrapper[4789]: ++ EnableChassisAsGateway=true Feb 02 21:43:06 crc kubenswrapper[4789]: ++ PhysicalNetworks= Feb 02 21:43:06 crc kubenswrapper[4789]: ++ OVNHostName= Feb 02 21:43:06 crc kubenswrapper[4789]: ++ DB_FILE=/etc/openvswitch/conf.db Feb 02 21:43:06 crc kubenswrapper[4789]: ++ ovs_dir=/var/lib/openvswitch Feb 02 21:43:06 crc kubenswrapper[4789]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Feb 02 21:43:06 crc kubenswrapper[4789]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Feb 02 21:43:06 crc kubenswrapper[4789]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 02 21:43:06 crc kubenswrapper[4789]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 02 21:43:06 crc kubenswrapper[4789]: + sleep 0.5 Feb 02 21:43:06 crc kubenswrapper[4789]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 02 21:43:06 crc kubenswrapper[4789]: + cleanup_ovsdb_server_semaphore Feb 02 21:43:06 crc kubenswrapper[4789]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 02 21:43:06 crc kubenswrapper[4789]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Feb 02 21:43:06 crc kubenswrapper[4789]: > Feb 02 21:43:06 crc kubenswrapper[4789]: E0202 21:43:06.361359 4789 kuberuntime_container.go:691] "PreStop hook failed" err=< Feb 02 21:43:06 crc kubenswrapper[4789]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Feb 02 21:43:06 crc kubenswrapper[4789]: + source /usr/local/bin/container-scripts/functions Feb 02 21:43:06 crc kubenswrapper[4789]: ++ OVNBridge=br-int Feb 02 21:43:06 crc kubenswrapper[4789]: ++ OVNRemote=tcp:localhost:6642 Feb 02 21:43:06 crc kubenswrapper[4789]: ++ OVNEncapType=geneve Feb 02 21:43:06 crc kubenswrapper[4789]: ++ OVNAvailabilityZones= Feb 02 21:43:06 crc kubenswrapper[4789]: ++ EnableChassisAsGateway=true Feb 02 21:43:06 crc kubenswrapper[4789]: ++ PhysicalNetworks= Feb 02 21:43:06 crc kubenswrapper[4789]: ++ OVNHostName= Feb 02 21:43:06 crc kubenswrapper[4789]: ++ DB_FILE=/etc/openvswitch/conf.db Feb 02 21:43:06 crc kubenswrapper[4789]: ++ ovs_dir=/var/lib/openvswitch Feb 02 21:43:06 crc kubenswrapper[4789]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Feb 02 21:43:06 crc kubenswrapper[4789]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Feb 02 21:43:06 crc kubenswrapper[4789]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 02 21:43:06 crc kubenswrapper[4789]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 02 21:43:06 crc kubenswrapper[4789]: + sleep 0.5 Feb 02 21:43:06 crc kubenswrapper[4789]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 02 21:43:06 crc kubenswrapper[4789]: + cleanup_ovsdb_server_semaphore Feb 02 21:43:06 crc kubenswrapper[4789]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 02 21:43:06 crc kubenswrapper[4789]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Feb 02 21:43:06 crc kubenswrapper[4789]: > pod="openstack/ovn-controller-ovs-tjn59" podUID="0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1" containerName="ovsdb-server" containerID="cri-o://17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10" Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.361568 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-tjn59" podUID="0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1" containerName="ovsdb-server" containerID="cri-o://17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10" gracePeriod=30 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.374654 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7c4994f5f-462kb"] Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.374923 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7c4994f5f-462kb" podUID="78b23a1f-cc85-4767-b19c-6069adfc735a" containerName="neutron-api" containerID="cri-o://553d373b31d254acbe2370697ade07f36e41177b6244fed11902fec65d96f129" gracePeriod=30 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.375307 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7c4994f5f-462kb" podUID="78b23a1f-cc85-4767-b19c-6069adfc735a" containerName="neutron-httpd" containerID="cri-o://299b4734565096b1be6400a79e47dcc680e20c6351889626bc796a381f662a16" gracePeriod=30 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.407111 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-ghqst"] Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.427801 4789 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/nova-cell1-conductor-0" secret="" err="secret \"nova-nova-dockercfg-lfwsj\" not found" Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.469523 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="160db825-98d0-4663-80b5-1a50e382cfa5" path="/var/lib/kubelet/pods/160db825-98d0-4663-80b5-1a50e382cfa5/volumes" Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.470295 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3" path="/var/lib/kubelet/pods/22dd9cf7-e9fb-443a-a7d2-46f72c6ee5e3/volumes" Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.471474 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c632452-0823-4b9b-9eaf-b8e9da3084c9" path="/var/lib/kubelet/pods/2c632452-0823-4b9b-9eaf-b8e9da3084c9/volumes" Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.477523 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42e496d3-8d68-48a0-a0ca-058126b200a1" path="/var/lib/kubelet/pods/42e496d3-8d68-48a0-a0ca-058126b200a1/volumes" Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.485124 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4aaa8d11-6409-415e-836b-b7941b66f6e4" path="/var/lib/kubelet/pods/4aaa8d11-6409-415e-836b-b7941b66f6e4/volumes" Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.486403 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4be719bd-b5d3-4499-9e80-9d8055c1a8df" path="/var/lib/kubelet/pods/4be719bd-b5d3-4499-9e80-9d8055c1a8df/volumes" Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.486964 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a21aa3a8-7aa8-4eda-bc74-1809a4cc774b" path="/var/lib/kubelet/pods/a21aa3a8-7aa8-4eda-bc74-1809a4cc774b/volumes" Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.488433 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a55a234d-1af7-4e73-8f93-b614162be0c3" path="/var/lib/kubelet/pods/a55a234d-1af7-4e73-8f93-b614162be0c3/volumes" Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.489762 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b833200c-e96b-4baa-9654-e7a3c07369e5" path="/var/lib/kubelet/pods/b833200c-e96b-4baa-9654-e7a3c07369e5/volumes" Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.490643 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5c4da6b-2b71-4018-90ce-d569b9f03cfd" path="/var/lib/kubelet/pods/c5c4da6b-2b71-4018-90ce-d569b9f03cfd/volumes" Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.491217 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc5c3e3f-2b45-4376-9b5b-cc92c2d4837b" path="/var/lib/kubelet/pods/cc5c3e3f-2b45-4376-9b5b-cc92c2d4837b/volumes" Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.491732 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce78c9ad-cbd4-4761-8485-af675e18d85a" path="/var/lib/kubelet/pods/ce78c9ad-cbd4-4761-8485-af675e18d85a/volumes" Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.492555 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ead77939-6823-47d8-83e8-7dc74b841d49" path="/var/lib/kubelet/pods/ead77939-6823-47d8-83e8-7dc74b841d49/volumes" Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.493750 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-ghqst"] Feb 02 21:43:06 crc kubenswrapper[4789]: 
I0202 21:43:06.493972 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-tw98w"] Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.503312 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-c430-account-create-update-mtkw4"] Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.515017 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-d5hwz_d4127fa0-de5d-43ce-b257-46b80eecd670/openstack-network-exporter/0.log" Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.515143 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-d5hwz" Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.519678 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-tw98w"] Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.527994 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-g4xxk"] Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.529180 4789 generic.go:334] "Generic (PLEG): container finished" podID="87f6bccb-d5fc-4868-aca2-734d16898805" containerID="772b32b4a568764e9d52dc458b0ac79908b73b42aa7c0ab429a6e69ef36ff4ee" exitCode=0 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.529213 4789 generic.go:334] "Generic (PLEG): container finished" podID="87f6bccb-d5fc-4868-aca2-734d16898805" containerID="81a1db9e6f95967f7398c2d9e33aef20a4ebd27dac4bde8ca54c1d2cb9e32588" exitCode=0 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.529222 4789 generic.go:334] "Generic (PLEG): container finished" podID="87f6bccb-d5fc-4868-aca2-734d16898805" containerID="292bcc186a04274a666bd4bca60221734a4bf42019919ba532cfde2503636ddb" exitCode=0 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.529231 4789 generic.go:334] "Generic (PLEG): container finished" podID="87f6bccb-d5fc-4868-aca2-734d16898805" containerID="1e6fc4897376cc9d269976f61acf3f0cc76fb66b261f7e18fb05f5f9f439d27d" exitCode=0 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.529241 4789 generic.go:334] "Generic (PLEG): container finished" podID="87f6bccb-d5fc-4868-aca2-734d16898805" containerID="7a20dacf9652208f7b99bf2a1079fa1a4eb150591b3740a517f85585c21a53d1" exitCode=0 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.529248 4789 generic.go:334] "Generic (PLEG): container finished" podID="87f6bccb-d5fc-4868-aca2-734d16898805" containerID="d8b8973838965c20503722920a92fa3f55adad61b2b29d0ad5b46e04847ba642" exitCode=0 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.529256 4789 generic.go:334] "Generic (PLEG): container finished" podID="87f6bccb-d5fc-4868-aca2-734d16898805" containerID="b2a613095dfded30ccf9e469a7904687f82e0e1076df8bb3c12d61ae91f09cbb" exitCode=0 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.529264 4789 generic.go:334] "Generic (PLEG): container finished" podID="87f6bccb-d5fc-4868-aca2-734d16898805" containerID="dc1d8d39fd0b72fbfd8a3196945369271e6997b06ed178e120be5a8c661363c0" exitCode=0 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.529331 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"87f6bccb-d5fc-4868-aca2-734d16898805","Type":"ContainerDied","Data":"772b32b4a568764e9d52dc458b0ac79908b73b42aa7c0ab429a6e69ef36ff4ee"} Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.529358 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"87f6bccb-d5fc-4868-aca2-734d16898805","Type":"ContainerDied","Data":"81a1db9e6f95967f7398c2d9e33aef20a4ebd27dac4bde8ca54c1d2cb9e32588"} Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.529398 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"87f6bccb-d5fc-4868-aca2-734d16898805","Type":"ContainerDied","Data":"292bcc186a04274a666bd4bca60221734a4bf42019919ba532cfde2503636ddb"} Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.529415 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"87f6bccb-d5fc-4868-aca2-734d16898805","Type":"ContainerDied","Data":"1e6fc4897376cc9d269976f61acf3f0cc76fb66b261f7e18fb05f5f9f439d27d"} Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.529427 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"87f6bccb-d5fc-4868-aca2-734d16898805","Type":"ContainerDied","Data":"7a20dacf9652208f7b99bf2a1079fa1a4eb150591b3740a517f85585c21a53d1"} Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.529439 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"87f6bccb-d5fc-4868-aca2-734d16898805","Type":"ContainerDied","Data":"d8b8973838965c20503722920a92fa3f55adad61b2b29d0ad5b46e04847ba642"} Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.529451 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"87f6bccb-d5fc-4868-aca2-734d16898805","Type":"ContainerDied","Data":"b2a613095dfded30ccf9e469a7904687f82e0e1076df8bb3c12d61ae91f09cbb"} Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.529462 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"87f6bccb-d5fc-4868-aca2-734d16898805","Type":"ContainerDied","Data":"dc1d8d39fd0b72fbfd8a3196945369271e6997b06ed178e120be5a8c661363c0"} Feb 02 21:43:06 crc kubenswrapper[4789]: E0202 21:43:06.538877 4789 secret.go:188] Couldn't get secret openstack/nova-cell1-conductor-config-data: secret "nova-cell1-conductor-config-data" not found Feb 02 21:43:06 crc kubenswrapper[4789]: E0202 21:43:06.539233 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/399d9417-2065-4e92-89c5-a04dbeaf2cca-config-data podName:399d9417-2065-4e92-89c5-a04dbeaf2cca nodeName:}" failed. No retries permitted until 2026-02-02 21:43:07.039213679 +0000 UTC m=+1407.334238698 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/399d9417-2065-4e92-89c5-a04dbeaf2cca-config-data") pod "nova-cell1-conductor-0" (UID: "399d9417-2065-4e92-89c5-a04dbeaf2cca") : secret "nova-cell1-conductor-config-data" not found Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.545730 4789 generic.go:334] "Generic (PLEG): container finished" podID="0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1" containerID="17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10" exitCode=0 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.545808 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tjn59" event={"ID":"0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1","Type":"ContainerDied","Data":"17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10"} Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.558824 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-g4xxk"] Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.604660 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-hw2cp"] Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.617088 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-hw2cp"] Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.631627 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-d5hwz_d4127fa0-de5d-43ce-b257-46b80eecd670/openstack-network-exporter/0.log" Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.631666 4789 generic.go:334] "Generic (PLEG): container finished" podID="d4127fa0-de5d-43ce-b257-46b80eecd670" containerID="d4af60a82d31c25419cd380401fc674cf2e82d663ec23e3513afa5060752b0ed" exitCode=2 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.631750 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-d5hwz" Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.632348 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-d5hwz" event={"ID":"d4127fa0-de5d-43ce-b257-46b80eecd670","Type":"ContainerDied","Data":"d4af60a82d31c25419cd380401fc674cf2e82d663ec23e3513afa5060752b0ed"} Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.632399 4789 scope.go:117] "RemoveContainer" containerID="d4af60a82d31c25419cd380401fc674cf2e82d663ec23e3513afa5060752b0ed" Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.640348 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d4127fa0-de5d-43ce-b257-46b80eecd670-ovn-rundir\") pod \"d4127fa0-de5d-43ce-b257-46b80eecd670\" (UID: \"d4127fa0-de5d-43ce-b257-46b80eecd670\") " Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.640551 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4127fa0-de5d-43ce-b257-46b80eecd670-config\") pod \"d4127fa0-de5d-43ce-b257-46b80eecd670\" (UID: \"d4127fa0-de5d-43ce-b257-46b80eecd670\") " Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.640574 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txqhh\" (UniqueName: \"kubernetes.io/projected/d4127fa0-de5d-43ce-b257-46b80eecd670-kube-api-access-txqhh\") pod \"d4127fa0-de5d-43ce-b257-46b80eecd670\" (UID: \"d4127fa0-de5d-43ce-b257-46b80eecd670\") " Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.640606 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4127fa0-de5d-43ce-b257-46b80eecd670-combined-ca-bundle\") pod \"d4127fa0-de5d-43ce-b257-46b80eecd670\" (UID: \"d4127fa0-de5d-43ce-b257-46b80eecd670\") " Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.640645 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4127fa0-de5d-43ce-b257-46b80eecd670-metrics-certs-tls-certs\") pod \"d4127fa0-de5d-43ce-b257-46b80eecd670\" (UID: \"d4127fa0-de5d-43ce-b257-46b80eecd670\") " Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.640729 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d4127fa0-de5d-43ce-b257-46b80eecd670-ovs-rundir\") pod \"d4127fa0-de5d-43ce-b257-46b80eecd670\" (UID: \"d4127fa0-de5d-43ce-b257-46b80eecd670\") " Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.641249 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4127fa0-de5d-43ce-b257-46b80eecd670-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "d4127fa0-de5d-43ce-b257-46b80eecd670" (UID: "d4127fa0-de5d-43ce-b257-46b80eecd670"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.641301 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4127fa0-de5d-43ce-b257-46b80eecd670-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "d4127fa0-de5d-43ce-b257-46b80eecd670" (UID: "d4127fa0-de5d-43ce-b257-46b80eecd670"). InnerVolumeSpecName "ovn-rundir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.646848 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4127fa0-de5d-43ce-b257-46b80eecd670-config" (OuterVolumeSpecName: "config") pod "d4127fa0-de5d-43ce-b257-46b80eecd670" (UID: "d4127fa0-de5d-43ce-b257-46b80eecd670"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.656133 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4127fa0-de5d-43ce-b257-46b80eecd670-kube-api-access-txqhh" (OuterVolumeSpecName: "kube-api-access-txqhh") pod "d4127fa0-de5d-43ce-b257-46b80eecd670" (UID: "d4127fa0-de5d-43ce-b257-46b80eecd670"). InnerVolumeSpecName "kube-api-access-txqhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.682830 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4127fa0-de5d-43ce-b257-46b80eecd670-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4127fa0-de5d-43ce-b257-46b80eecd670" (UID: "d4127fa0-de5d-43ce-b257-46b80eecd670"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.707268 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.707473 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="7acbb536-0a08-4132-a84a-848735b0e7f4" containerName="cinder-api-log" containerID="cri-o://c6597dc6aaeaebf47e22acb882e2ae643e5ed20e86abaacc9a1e3bf64ebb15a3" gracePeriod=30 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.707816 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="7acbb536-0a08-4132-a84a-848735b0e7f4" containerName="cinder-api" containerID="cri-o://25ed4343b75caa0616ab66903bb372442dbf22b4a29f2c30b9fcf20df97021f7" gracePeriod=30 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.713785 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.714026 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="57c9f301-615a-4182-b17e-3ae250e8335c" containerName="cinder-scheduler" containerID="cri-o://fbe1157b2a6d65b0c7188f948585dfc0be3a3d76f5c3b57620ea3d6091a4927c" gracePeriod=30 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.714368 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="57c9f301-615a-4182-b17e-3ae250e8335c" containerName="probe" containerID="cri-o://ecc06c5902aa50d55c9a1d5a9d91397ab8aa463f6ac87ac09a03b387026f2890" gracePeriod=30 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.718286 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c55d3f19-edf8-4cff-ab70-495607e77798/ovsdbserver-sb/0.log" Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.718328 4789 generic.go:334] "Generic (PLEG): container finished" podID="c55d3f19-edf8-4cff-ab70-495607e77798" containerID="7cf11c42fa6eee3581592e7cf6d8ad9c5bdb09ef4d82cebd87fe73a6989bc478" exitCode=143 Feb 02 
21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.719152 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c55d3f19-edf8-4cff-ab70-495607e77798","Type":"ContainerDied","Data":"7cf11c42fa6eee3581592e7cf6d8ad9c5bdb09ef4d82cebd87fe73a6989bc478"} Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.742930 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-959f7f8c5-pmqjf"] Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.743537 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4127fa0-de5d-43ce-b257-46b80eecd670-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.743554 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-959f7f8c5-pmqjf" podUID="5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1" containerName="barbican-worker-log" containerID="cri-o://515297fe8dbc3fc649d583e30d1f7a1830bea72e21b40dc9d104ef3455ab5cb1" gracePeriod=30 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.743605 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txqhh\" (UniqueName: \"kubernetes.io/projected/d4127fa0-de5d-43ce-b257-46b80eecd670-kube-api-access-txqhh\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.743622 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4127fa0-de5d-43ce-b257-46b80eecd670-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.743633 4789 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d4127fa0-de5d-43ce-b257-46b80eecd670-ovs-rundir\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.743644 4789 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d4127fa0-de5d-43ce-b257-46b80eecd670-ovn-rundir\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.743725 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-959f7f8c5-pmqjf" podUID="5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1" containerName="barbican-worker" containerID="cri-o://7529e703a7ba79a3c7d9ce9adbb48f6652641d0b42790d00cab813d47b85c9b6" gracePeriod=30 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.743876 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-828hm"] Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.745323 4789 generic.go:334] "Generic (PLEG): container finished" podID="349cede5-331c-4454-8c9c-fda8fe886f07" containerID="7594027e1aa66be1d86466bb05745dd33d3b9a0771c64f3b195b5d3c4ef5fbca" exitCode=143 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.745404 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68d9498c68-84jcz" event={"ID":"349cede5-331c-4454-8c9c-fda8fe886f07","Type":"ContainerDied","Data":"7594027e1aa66be1d86466bb05745dd33d3b9a0771c64f3b195b5d3c4ef5fbca"} Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.751902 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-828hm"] Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.760680 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/placement-db-create-qvp7v"] Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.769337 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-6d964c7466-fpqld"] Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.770637 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-6d964c7466-fpqld" podUID="802bda4f-2363-4ca6-a126-2ccf1448ed71" containerName="barbican-keystone-listener-log" containerID="cri-o://0fe697a1f2000589c5ab93c3c47f9c76ebfb685c854fd86b08766edfb2d1a375" gracePeriod=30 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.770817 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-6d964c7466-fpqld" podUID="802bda4f-2363-4ca6-a126-2ccf1448ed71" containerName="barbican-keystone-listener" containerID="cri-o://0a62728aedd4480cfd181d88be8ac00afa4f69cd9f3b44bd97a2e8305a5f31af" gracePeriod=30 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.775592 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-qvp7v"] Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.776173 4789 generic.go:334] "Generic (PLEG): container finished" podID="7cd6e7ff-266d-4288-9df4-dc22dbe5f19d" containerID="b576e41ea89fdc8a7019e121b2fb6790b3127f8d4fea54dd5986dfa02c0ad849" exitCode=0 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.776263 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-r4rd5" event={"ID":"7cd6e7ff-266d-4288-9df4-dc22dbe5f19d","Type":"ContainerDied","Data":"b576e41ea89fdc8a7019e121b2fb6790b3127f8d4fea54dd5986dfa02c0ad849"} Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.782344 4789 generic.go:334] "Generic (PLEG): container finished" podID="2e82084e-a68b-4e41-9d23-8888ab97e53e" containerID="80ee62a2d791f82f667128eb01b609adcf2ee71d4a2647cc5abe16482c589540" exitCode=137 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.785628 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6b8b9b54f6-jfnqs"] Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.785891 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6b8b9b54f6-jfnqs" podUID="7d53e4c0-add2-4cfd-bbea-e0a1d3196091" containerName="barbican-api-log" containerID="cri-o://4d137886e123097c6077816303161de8f1beb2278c8b0ec65bb058b0d9f03c90" gracePeriod=30 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.786043 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6b8b9b54f6-jfnqs" podUID="7d53e4c0-add2-4cfd-bbea-e0a1d3196091" containerName="barbican-api" containerID="cri-o://25969b57d6ee15da22b2fd6fac46c116130225ea93ef2013003c96e7fe1d6cca" gracePeriod=30 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.796175 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.796903 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_01c5293c-f7b0-4141-99a7-e423de507b87/ovsdbserver-nb/0.log" Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.796944 4789 generic.go:334] "Generic (PLEG): container finished" podID="01c5293c-f7b0-4141-99a7-e423de507b87" containerID="c302d40717f0c425b6e65f87b401026a5061ab6e38b1f75577a83208d8771c00" exitCode=2 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 
21:43:06.796960 4789 generic.go:334] "Generic (PLEG): container finished" podID="01c5293c-f7b0-4141-99a7-e423de507b87" containerID="ecfa06e359801169bdd06bd88548fc6c7999a73aea8eb2d73c459b8201ac6223" exitCode=143 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.797010 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"01c5293c-f7b0-4141-99a7-e423de507b87","Type":"ContainerDied","Data":"c302d40717f0c425b6e65f87b401026a5061ab6e38b1f75577a83208d8771c00"} Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.797041 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"01c5293c-f7b0-4141-99a7-e423de507b87","Type":"ContainerDied","Data":"ecfa06e359801169bdd06bd88548fc6c7999a73aea8eb2d73c459b8201ac6223"} Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.813900 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-64bb487f87-44hz8"] Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.814142 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-64bb487f87-44hz8" podUID="c08255d0-1dd6-4556-8f30-65367b7739f7" containerName="proxy-httpd" containerID="cri-o://711efcab439843aaeb94f91469a73186433bd21cfd9a9a56c0f9006d0ae1c9be" gracePeriod=30 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.814235 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-64bb487f87-44hz8" podUID="c08255d0-1dd6-4556-8f30-65367b7739f7" containerName="proxy-server" containerID="cri-o://7d439ddc975b276958da145eaf095401f24feaac00038ca172395cfdde929e83" gracePeriod=30 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.825405 4789 generic.go:334] "Generic (PLEG): container finished" podID="24fb18f4-7a0f-4ae5-9104-e7dc45a479ff" containerID="8d41fcf5f05241ca690bf9be181cdbc0af2afc9c357aaaa7b133a7a3685d2601" exitCode=143 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.826250 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff","Type":"ContainerDied","Data":"8d41fcf5f05241ca690bf9be181cdbc0af2afc9c357aaaa7b133a7a3685d2601"} Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.830820 4789 generic.go:334] "Generic (PLEG): container finished" podID="3bb81567-8536-4275-ab0e-a003ef904230" containerID="ce9ef55c9302edded2a55530533656268a3c7b21b0ae936aae0892ef6e043554" exitCode=143 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.830843 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3bb81567-8536-4275-ab0e-a003ef904230","Type":"ContainerDied","Data":"ce9ef55c9302edded2a55530533656268a3c7b21b0ae936aae0892ef6e043554"} Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.838311 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 21:43:06 crc kubenswrapper[4789]: E0202 21:43:06.848630 4789 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 02 21:43:06 crc kubenswrapper[4789]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Feb 02 21:43:06 crc kubenswrapper[4789]: Feb 02 21:43:06 crc kubenswrapper[4789]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 02 21:43:06 crc kubenswrapper[4789]: Feb 02 21:43:06 crc 
kubenswrapper[4789]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 02 21:43:06 crc kubenswrapper[4789]: Feb 02 21:43:06 crc kubenswrapper[4789]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 02 21:43:06 crc kubenswrapper[4789]: Feb 02 21:43:06 crc kubenswrapper[4789]: if [ -n "barbican" ]; then Feb 02 21:43:06 crc kubenswrapper[4789]: GRANT_DATABASE="barbican" Feb 02 21:43:06 crc kubenswrapper[4789]: else Feb 02 21:43:06 crc kubenswrapper[4789]: GRANT_DATABASE="*" Feb 02 21:43:06 crc kubenswrapper[4789]: fi Feb 02 21:43:06 crc kubenswrapper[4789]: Feb 02 21:43:06 crc kubenswrapper[4789]: # going for maximum compatibility here: Feb 02 21:43:06 crc kubenswrapper[4789]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 02 21:43:06 crc kubenswrapper[4789]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 02 21:43:06 crc kubenswrapper[4789]: # 3. create user with CREATE but then do all password and TLS with ALTER to Feb 02 21:43:06 crc kubenswrapper[4789]: # support updates Feb 02 21:43:06 crc kubenswrapper[4789]: Feb 02 21:43:06 crc kubenswrapper[4789]: $MYSQL_CMD < logger="UnhandledError" Feb 02 21:43:06 crc kubenswrapper[4789]: E0202 21:43:06.850728 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"barbican-db-secret\\\" not found\"" pod="openstack/barbican-c430-account-create-update-mtkw4" podUID="151bf5b0-b174-42a9-8a0a-f650d74ec2a3" Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.855150 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-725d-account-create-update-6nrvh"] Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.891696 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.891916 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0ba13473-b423-43a0-ab15-9d6be616cc7b" containerName="nova-metadata-log" containerID="cri-o://175ef66ad8a23cf5090dc4289e18344c14f9e8edd0b77edfc81aaff9cd62283b" gracePeriod=30 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.892263 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0ba13473-b423-43a0-ab15-9d6be616cc7b" containerName="nova-metadata-metadata" containerID="cri-o://e343b555d9621789a633967b6cd533bf45c88272e650aba944e657e5737ee258" gracePeriod=30 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.912385 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-9zj6q"] Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.913560 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="b4db4b23-dae0-42a5-ad47-3336073d0b6a" containerName="rabbitmq" containerID="cri-o://669108a572e6de86b6fe38547a253f5eabaaaa84647d8dcb02f45a63322c1bd9" gracePeriod=604800 Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.924531 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-c430-account-create-update-mtkw4"] Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.937466 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-3f0a-account-create-update-qc5vb"] Feb 02 21:43:06 crc kubenswrapper[4789]: E0202 21:43:06.944450 4789 
kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 02 21:43:06 crc kubenswrapper[4789]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Feb 02 21:43:06 crc kubenswrapper[4789]: Feb 02 21:43:06 crc kubenswrapper[4789]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 02 21:43:06 crc kubenswrapper[4789]: Feb 02 21:43:06 crc kubenswrapper[4789]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 02 21:43:06 crc kubenswrapper[4789]: Feb 02 21:43:06 crc kubenswrapper[4789]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 02 21:43:06 crc kubenswrapper[4789]: Feb 02 21:43:06 crc kubenswrapper[4789]: if [ -n "nova_api" ]; then Feb 02 21:43:06 crc kubenswrapper[4789]: GRANT_DATABASE="nova_api" Feb 02 21:43:06 crc kubenswrapper[4789]: else Feb 02 21:43:06 crc kubenswrapper[4789]: GRANT_DATABASE="*" Feb 02 21:43:06 crc kubenswrapper[4789]: fi Feb 02 21:43:06 crc kubenswrapper[4789]: Feb 02 21:43:06 crc kubenswrapper[4789]: # going for maximum compatibility here: Feb 02 21:43:06 crc kubenswrapper[4789]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 02 21:43:06 crc kubenswrapper[4789]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 02 21:43:06 crc kubenswrapper[4789]: # 3. create user with CREATE but then do all password and TLS with ALTER to Feb 02 21:43:06 crc kubenswrapper[4789]: # support updates Feb 02 21:43:06 crc kubenswrapper[4789]: Feb 02 21:43:06 crc kubenswrapper[4789]: $MYSQL_CMD < logger="UnhandledError" Feb 02 21:43:06 crc kubenswrapper[4789]: E0202 21:43:06.944482 4789 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 02 21:43:06 crc kubenswrapper[4789]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Feb 02 21:43:06 crc kubenswrapper[4789]: Feb 02 21:43:06 crc kubenswrapper[4789]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 02 21:43:06 crc kubenswrapper[4789]: Feb 02 21:43:06 crc kubenswrapper[4789]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 02 21:43:06 crc kubenswrapper[4789]: Feb 02 21:43:06 crc kubenswrapper[4789]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 02 21:43:06 crc kubenswrapper[4789]: Feb 02 21:43:06 crc kubenswrapper[4789]: if [ -n "" ]; then Feb 02 21:43:06 crc kubenswrapper[4789]: GRANT_DATABASE="" Feb 02 21:43:06 crc kubenswrapper[4789]: else Feb 02 21:43:06 crc kubenswrapper[4789]: GRANT_DATABASE="*" Feb 02 21:43:06 crc kubenswrapper[4789]: fi Feb 02 21:43:06 crc kubenswrapper[4789]: Feb 02 21:43:06 crc kubenswrapper[4789]: # going for maximum compatibility here: Feb 02 21:43:06 crc kubenswrapper[4789]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 02 21:43:06 crc kubenswrapper[4789]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 02 21:43:06 crc kubenswrapper[4789]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Feb 02 21:43:06 crc kubenswrapper[4789]: # support updates Feb 02 21:43:06 crc kubenswrapper[4789]: Feb 02 21:43:06 crc kubenswrapper[4789]: $MYSQL_CMD < logger="UnhandledError" Feb 02 21:43:06 crc kubenswrapper[4789]: E0202 21:43:06.946509 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-ac1d-account-create-update-rvstr" podUID="1821366b-85cb-419f-9c57-9014300724be" Feb 02 21:43:06 crc kubenswrapper[4789]: E0202 21:43:06.946538 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-h7zb6" podUID="306f2aaf-92ed-4c14-92f4-a970a8240771" Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.951914 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-9zj6q"] Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.960154 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-xdt8b"] Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.975152 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-xdt8b"] Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.984963 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-ac1d-account-create-update-rvstr"] Feb 02 21:43:06 crc kubenswrapper[4789]: I0202 21:43:06.992485 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-xg9jg"] Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.016802 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.017293 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="a0ceeffe-1326-4d2d-ab85-dbc02869bee1" containerName="nova-scheduler-scheduler" containerID="cri-o://b302cbd832ca9db939c2b0bf4835c6ec6fb237f5c200d53981557f9c42498b12" gracePeriod=30 Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.030321 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-xg9jg"] Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.035212 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-h7zb6"] Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.041951 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-89c5cd4d5-r4rd5" podUID="7cd6e7ff-266d-4288-9df4-dc22dbe5f19d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.200:5353: connect: connection refused" Feb 02 21:43:07 crc kubenswrapper[4789]: E0202 21:43:07.054443 4789 secret.go:188] Couldn't get secret openstack/nova-cell1-conductor-config-data: secret "nova-cell1-conductor-config-data" not found Feb 02 21:43:07 crc kubenswrapper[4789]: E0202 21:43:07.054522 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/399d9417-2065-4e92-89c5-a04dbeaf2cca-config-data podName:399d9417-2065-4e92-89c5-a04dbeaf2cca nodeName:}" failed. 
No retries permitted until 2026-02-02 21:43:08.054497591 +0000 UTC m=+1408.349522610 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/399d9417-2065-4e92-89c5-a04dbeaf2cca-config-data") pod "nova-cell1-conductor-0" (UID: "399d9417-2065-4e92-89c5-a04dbeaf2cca") : secret "nova-cell1-conductor-config-data" not found Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.059935 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.060258 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1ae097e7-380b-4044-8598-abc3e1059356" containerName="nova-api-log" containerID="cri-o://2234c362242e0356a4e9c41d9d9c119ece3aa80e6631194820c7f16fcb2df8fa" gracePeriod=30 Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.060600 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1ae097e7-380b-4044-8598-abc3e1059356" containerName="nova-api-api" containerID="cri-o://0a2cae00db145b6560fcc0b648c1c292b3eb7df490809622a8a50541cde04a0c" gracePeriod=30 Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.090217 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.097894 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-h7zb6"] Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.105139 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-2xz72"] Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.110528 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4127fa0-de5d-43ce-b257-46b80eecd670-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "d4127fa0-de5d-43ce-b257-46b80eecd670" (UID: "d4127fa0-de5d-43ce-b257-46b80eecd670"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.117829 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-2xz72"] Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.127420 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.127662 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="399d9417-2065-4e92-89c5-a04dbeaf2cca" containerName="nova-cell1-conductor-conductor" containerID="cri-o://e98adb233ea0f16d2b2f46eddae689b1bb397a9ba532ccb91e5ece02b9397f95" gracePeriod=30 Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.136447 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c55d3f19-edf8-4cff-ab70-495607e77798/ovsdbserver-sb/0.log" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.136510 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.137414 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.137604 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b" containerName="nova-cell0-conductor-conductor" containerID="cri-o://dfd72fb016b8250b7d52cd2384cba2cc136043ef8ea07229e3afb0b578d3fbf4" gracePeriod=30 Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.157442 4789 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4127fa0-de5d-43ce-b257-46b80eecd670-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:07 crc kubenswrapper[4789]: E0202 21:43:07.157533 4789 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Feb 02 21:43:07 crc kubenswrapper[4789]: E0202 21:43:07.157572 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b8917d54-451e-4a56-9e8a-142bb5db17e1-config-data podName:b8917d54-451e-4a56-9e8a-142bb5db17e1 nodeName:}" failed. No retries permitted until 2026-02-02 21:43:09.15755772 +0000 UTC m=+1409.452582739 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b8917d54-451e-4a56-9e8a-142bb5db17e1-config-data") pod "rabbitmq-cell1-server-0" (UID: "b8917d54-451e-4a56-9e8a-142bb5db17e1") : configmap "rabbitmq-cell1-config-data" not found Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.169679 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-78r7j"] Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.177100 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.177329 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="f9d0bd72-572d-4b90-b747-f37b490b3e4a" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://3526b18b1cdfe6a5a1f276e6a5cb128388504d6bdbc6ed184e29a01812ab1266" gracePeriod=30 Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.186083 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-78r7j"] Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.189617 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-ac1d-account-create-update-rvstr"] Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.196476 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-725d-account-create-update-6nrvh"] Feb 02 21:43:07 crc kubenswrapper[4789]: W0202 21:43:07.263908 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7b70ce2_ef37_4547_9c3c_7be3ec4b02c8.slice/crio-c2983a3db760b9aa4b6d60ff43228b72d9c6eca9176950d85a8954d033e6f370 WatchSource:0}: Error finding container c2983a3db760b9aa4b6d60ff43228b72d9c6eca9176950d85a8954d033e6f370: Status 404 returned error can't find the container with id c2983a3db760b9aa4b6d60ff43228b72d9c6eca9176950d85a8954d033e6f370 Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.268533 4789 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="96f4773a-9fa9-41c6-ab4b-54107e66a498" containerName="galera" containerID="cri-o://82ad90219c4a326128d9fb471414b4e0b74ee26ef1c8f08b2e8b89d720565f03" gracePeriod=30 Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.282676 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c55d3f19-edf8-4cff-ab70-495607e77798-scripts\") pod \"c55d3f19-edf8-4cff-ab70-495607e77798\" (UID: \"c55d3f19-edf8-4cff-ab70-495607e77798\") " Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.282818 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c55d3f19-edf8-4cff-ab70-495607e77798-ovsdb-rundir\") pod \"c55d3f19-edf8-4cff-ab70-495607e77798\" (UID: \"c55d3f19-edf8-4cff-ab70-495607e77798\") " Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.282889 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c55d3f19-edf8-4cff-ab70-495607e77798-metrics-certs-tls-certs\") pod \"c55d3f19-edf8-4cff-ab70-495607e77798\" (UID: \"c55d3f19-edf8-4cff-ab70-495607e77798\") " Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.282908 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"c55d3f19-edf8-4cff-ab70-495607e77798\" (UID: \"c55d3f19-edf8-4cff-ab70-495607e77798\") " Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.282933 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c55d3f19-edf8-4cff-ab70-495607e77798-combined-ca-bundle\") pod \"c55d3f19-edf8-4cff-ab70-495607e77798\" (UID: \"c55d3f19-edf8-4cff-ab70-495607e77798\") " Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.282990 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c55d3f19-edf8-4cff-ab70-495607e77798-ovsdbserver-sb-tls-certs\") pod \"c55d3f19-edf8-4cff-ab70-495607e77798\" (UID: \"c55d3f19-edf8-4cff-ab70-495607e77798\") " Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.283009 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p69km\" (UniqueName: \"kubernetes.io/projected/c55d3f19-edf8-4cff-ab70-495607e77798-kube-api-access-p69km\") pod \"c55d3f19-edf8-4cff-ab70-495607e77798\" (UID: \"c55d3f19-edf8-4cff-ab70-495607e77798\") " Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.283061 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c55d3f19-edf8-4cff-ab70-495607e77798-config\") pod \"c55d3f19-edf8-4cff-ab70-495607e77798\" (UID: \"c55d3f19-edf8-4cff-ab70-495607e77798\") " Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.292421 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="b8917d54-451e-4a56-9e8a-142bb5db17e1" containerName="rabbitmq" containerID="cri-o://c1c71c5e760475551c02af4a87ac69c6090f0b50c9bec80607975d728d2b02e2" gracePeriod=604800 Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.292943 4789 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "c55d3f19-edf8-4cff-ab70-495607e77798" (UID: "c55d3f19-edf8-4cff-ab70-495607e77798"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.293112 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c55d3f19-edf8-4cff-ab70-495607e77798-scripts" (OuterVolumeSpecName: "scripts") pod "c55d3f19-edf8-4cff-ab70-495607e77798" (UID: "c55d3f19-edf8-4cff-ab70-495607e77798"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:43:07 crc kubenswrapper[4789]: E0202 21:43:07.294555 4789 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 02 21:43:07 crc kubenswrapper[4789]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Feb 02 21:43:07 crc kubenswrapper[4789]: Feb 02 21:43:07 crc kubenswrapper[4789]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 02 21:43:07 crc kubenswrapper[4789]: Feb 02 21:43:07 crc kubenswrapper[4789]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 02 21:43:07 crc kubenswrapper[4789]: Feb 02 21:43:07 crc kubenswrapper[4789]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 02 21:43:07 crc kubenswrapper[4789]: Feb 02 21:43:07 crc kubenswrapper[4789]: if [ -n "nova_cell0" ]; then Feb 02 21:43:07 crc kubenswrapper[4789]: GRANT_DATABASE="nova_cell0" Feb 02 21:43:07 crc kubenswrapper[4789]: else Feb 02 21:43:07 crc kubenswrapper[4789]: GRANT_DATABASE="*" Feb 02 21:43:07 crc kubenswrapper[4789]: fi Feb 02 21:43:07 crc kubenswrapper[4789]: Feb 02 21:43:07 crc kubenswrapper[4789]: # going for maximum compatibility here: Feb 02 21:43:07 crc kubenswrapper[4789]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 02 21:43:07 crc kubenswrapper[4789]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 02 21:43:07 crc kubenswrapper[4789]: # 3. create user with CREATE but then do all password and TLS with ALTER to Feb 02 21:43:07 crc kubenswrapper[4789]: # support updates Feb 02 21:43:07 crc kubenswrapper[4789]: Feb 02 21:43:07 crc kubenswrapper[4789]: $MYSQL_CMD < logger="UnhandledError" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.295661 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c55d3f19-edf8-4cff-ab70-495607e77798-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "c55d3f19-edf8-4cff-ab70-495607e77798" (UID: "c55d3f19-edf8-4cff-ab70-495607e77798"). InnerVolumeSpecName "ovsdb-rundir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:43:07 crc kubenswrapper[4789]: E0202 21:43:07.296282 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell0-db-secret\\\" not found\"" pod="openstack/nova-cell0-725d-account-create-update-6nrvh" podUID="c7b70ce2-ef37-4547-9c3c-7be3ec4b02c8" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.300611 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c55d3f19-edf8-4cff-ab70-495607e77798-config" (OuterVolumeSpecName: "config") pod "c55d3f19-edf8-4cff-ab70-495607e77798" (UID: "c55d3f19-edf8-4cff-ab70-495607e77798"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.316061 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c55d3f19-edf8-4cff-ab70-495607e77798-kube-api-access-p69km" (OuterVolumeSpecName: "kube-api-access-p69km") pod "c55d3f19-edf8-4cff-ab70-495607e77798" (UID: "c55d3f19-edf8-4cff-ab70-495607e77798"). InnerVolumeSpecName "kube-api-access-p69km". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.376047 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c55d3f19-edf8-4cff-ab70-495607e77798-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c55d3f19-edf8-4cff-ab70-495607e77798" (UID: "c55d3f19-edf8-4cff-ab70-495607e77798"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.379644 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-d5hwz"] Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.388441 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-d5hwz"] Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.398488 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c55d3f19-edf8-4cff-ab70-495607e77798-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.398531 4789 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.398540 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c55d3f19-edf8-4cff-ab70-495607e77798-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.398550 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p69km\" (UniqueName: \"kubernetes.io/projected/c55d3f19-edf8-4cff-ab70-495607e77798-kube-api-access-p69km\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.398559 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c55d3f19-edf8-4cff-ab70-495607e77798-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.398569 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/c55d3f19-edf8-4cff-ab70-495607e77798-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.435177 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-r4rd5" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.447425 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.456920 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c55d3f19-edf8-4cff-ab70-495607e77798-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "c55d3f19-edf8-4cff-ab70-495607e77798" (UID: "c55d3f19-edf8-4cff-ab70-495607e77798"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.474294 4789 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.501428 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2e82084e-a68b-4e41-9d23-8888ab97e53e-openstack-config\") pod \"2e82084e-a68b-4e41-9d23-8888ab97e53e\" (UID: \"2e82084e-a68b-4e41-9d23-8888ab97e53e\") " Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.501506 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cd6e7ff-266d-4288-9df4-dc22dbe5f19d-config\") pod \"7cd6e7ff-266d-4288-9df4-dc22dbe5f19d\" (UID: \"7cd6e7ff-266d-4288-9df4-dc22dbe5f19d\") " Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.502259 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whdmt\" (UniqueName: \"kubernetes.io/projected/2e82084e-a68b-4e41-9d23-8888ab97e53e-kube-api-access-whdmt\") pod \"2e82084e-a68b-4e41-9d23-8888ab97e53e\" (UID: \"2e82084e-a68b-4e41-9d23-8888ab97e53e\") " Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.502716 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2e82084e-a68b-4e41-9d23-8888ab97e53e-openstack-config-secret\") pod \"2e82084e-a68b-4e41-9d23-8888ab97e53e\" (UID: \"2e82084e-a68b-4e41-9d23-8888ab97e53e\") " Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.502835 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7cd6e7ff-266d-4288-9df4-dc22dbe5f19d-ovsdbserver-sb\") pod \"7cd6e7ff-266d-4288-9df4-dc22dbe5f19d\" (UID: \"7cd6e7ff-266d-4288-9df4-dc22dbe5f19d\") " Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.502880 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5v59\" (UniqueName: \"kubernetes.io/projected/7cd6e7ff-266d-4288-9df4-dc22dbe5f19d-kube-api-access-t5v59\") pod \"7cd6e7ff-266d-4288-9df4-dc22dbe5f19d\" (UID: \"7cd6e7ff-266d-4288-9df4-dc22dbe5f19d\") " Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.502990 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2e82084e-a68b-4e41-9d23-8888ab97e53e-combined-ca-bundle\") pod \"2e82084e-a68b-4e41-9d23-8888ab97e53e\" (UID: \"2e82084e-a68b-4e41-9d23-8888ab97e53e\") " Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.503026 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7cd6e7ff-266d-4288-9df4-dc22dbe5f19d-ovsdbserver-nb\") pod \"7cd6e7ff-266d-4288-9df4-dc22dbe5f19d\" (UID: \"7cd6e7ff-266d-4288-9df4-dc22dbe5f19d\") " Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.503052 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7cd6e7ff-266d-4288-9df4-dc22dbe5f19d-dns-swift-storage-0\") pod \"7cd6e7ff-266d-4288-9df4-dc22dbe5f19d\" (UID: \"7cd6e7ff-266d-4288-9df4-dc22dbe5f19d\") " Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.503092 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7cd6e7ff-266d-4288-9df4-dc22dbe5f19d-dns-svc\") pod \"7cd6e7ff-266d-4288-9df4-dc22dbe5f19d\" (UID: \"7cd6e7ff-266d-4288-9df4-dc22dbe5f19d\") " Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.503542 4789 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c55d3f19-edf8-4cff-ab70-495607e77798-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.503559 4789 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.530562 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cd6e7ff-266d-4288-9df4-dc22dbe5f19d-kube-api-access-t5v59" (OuterVolumeSpecName: "kube-api-access-t5v59") pod "7cd6e7ff-266d-4288-9df4-dc22dbe5f19d" (UID: "7cd6e7ff-266d-4288-9df4-dc22dbe5f19d"). InnerVolumeSpecName "kube-api-access-t5v59". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.535126 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e82084e-a68b-4e41-9d23-8888ab97e53e-kube-api-access-whdmt" (OuterVolumeSpecName: "kube-api-access-whdmt") pod "2e82084e-a68b-4e41-9d23-8888ab97e53e" (UID: "2e82084e-a68b-4e41-9d23-8888ab97e53e"). InnerVolumeSpecName "kube-api-access-whdmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.605065 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c55d3f19-edf8-4cff-ab70-495607e77798-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "c55d3f19-edf8-4cff-ab70-495607e77798" (UID: "c55d3f19-edf8-4cff-ab70-495607e77798"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.605261 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c55d3f19-edf8-4cff-ab70-495607e77798-ovsdbserver-sb-tls-certs\") pod \"c55d3f19-edf8-4cff-ab70-495607e77798\" (UID: \"c55d3f19-edf8-4cff-ab70-495607e77798\") " Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.605683 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e82084e-a68b-4e41-9d23-8888ab97e53e-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "2e82084e-a68b-4e41-9d23-8888ab97e53e" (UID: "2e82084e-a68b-4e41-9d23-8888ab97e53e"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:43:07 crc kubenswrapper[4789]: W0202 21:43:07.605829 4789 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/c55d3f19-edf8-4cff-ab70-495607e77798/volumes/kubernetes.io~secret/ovsdbserver-sb-tls-certs Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.605843 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c55d3f19-edf8-4cff-ab70-495607e77798-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "c55d3f19-edf8-4cff-ab70-495607e77798" (UID: "c55d3f19-edf8-4cff-ab70-495607e77798"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.606002 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whdmt\" (UniqueName: \"kubernetes.io/projected/2e82084e-a68b-4e41-9d23-8888ab97e53e-kube-api-access-whdmt\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.606024 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5v59\" (UniqueName: \"kubernetes.io/projected/7cd6e7ff-266d-4288-9df4-dc22dbe5f19d-kube-api-access-t5v59\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.606038 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c55d3f19-edf8-4cff-ab70-495607e77798-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.606049 4789 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2e82084e-a68b-4e41-9d23-8888ab97e53e-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.625398 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e82084e-a68b-4e41-9d23-8888ab97e53e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e82084e-a68b-4e41-9d23-8888ab97e53e" (UID: "2e82084e-a68b-4e41-9d23-8888ab97e53e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.631854 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cd6e7ff-266d-4288-9df4-dc22dbe5f19d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7cd6e7ff-266d-4288-9df4-dc22dbe5f19d" (UID: "7cd6e7ff-266d-4288-9df4-dc22dbe5f19d"). 
InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.651313 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e82084e-a68b-4e41-9d23-8888ab97e53e-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "2e82084e-a68b-4e41-9d23-8888ab97e53e" (UID: "2e82084e-a68b-4e41-9d23-8888ab97e53e"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.653692 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cd6e7ff-266d-4288-9df4-dc22dbe5f19d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7cd6e7ff-266d-4288-9df4-dc22dbe5f19d" (UID: "7cd6e7ff-266d-4288-9df4-dc22dbe5f19d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.661342 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cd6e7ff-266d-4288-9df4-dc22dbe5f19d-config" (OuterVolumeSpecName: "config") pod "7cd6e7ff-266d-4288-9df4-dc22dbe5f19d" (UID: "7cd6e7ff-266d-4288-9df4-dc22dbe5f19d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.709623 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cd6e7ff-266d-4288-9df4-dc22dbe5f19d-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.709665 4789 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2e82084e-a68b-4e41-9d23-8888ab97e53e-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.709678 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7cd6e7ff-266d-4288-9df4-dc22dbe5f19d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.709689 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e82084e-a68b-4e41-9d23-8888ab97e53e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.709700 4789 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7cd6e7ff-266d-4288-9df4-dc22dbe5f19d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.737396 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cd6e7ff-266d-4288-9df4-dc22dbe5f19d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7cd6e7ff-266d-4288-9df4-dc22dbe5f19d" (UID: "7cd6e7ff-266d-4288-9df4-dc22dbe5f19d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.737859 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cd6e7ff-266d-4288-9df4-dc22dbe5f19d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7cd6e7ff-266d-4288-9df4-dc22dbe5f19d" (UID: "7cd6e7ff-266d-4288-9df4-dc22dbe5f19d"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.812784 4789 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7cd6e7ff-266d-4288-9df4-dc22dbe5f19d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.812811 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7cd6e7ff-266d-4288-9df4-dc22dbe5f19d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.874852 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-r4rd5" event={"ID":"7cd6e7ff-266d-4288-9df4-dc22dbe5f19d","Type":"ContainerDied","Data":"1d12f57b2b89440a76b1e76b886dbb4a7b35501e6e9a79dd7c13a09511cf73f4"} Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.874946 4789 scope.go:117] "RemoveContainer" containerID="b576e41ea89fdc8a7019e121b2fb6790b3127f8d4fea54dd5986dfa02c0ad849" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.875135 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-r4rd5" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.886370 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_01c5293c-f7b0-4141-99a7-e423de507b87/ovsdbserver-nb/0.log" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.886457 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"01c5293c-f7b0-4141-99a7-e423de507b87","Type":"ContainerDied","Data":"4c418005c61aedc2db0e22250a2bd998aa9fbf796a307443b5065e80ae029167"} Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.886478 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c418005c61aedc2db0e22250a2bd998aa9fbf796a307443b5065e80ae029167" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.890654 4789 generic.go:334] "Generic (PLEG): container finished" podID="c08255d0-1dd6-4556-8f30-65367b7739f7" containerID="7d439ddc975b276958da145eaf095401f24feaac00038ca172395cfdde929e83" exitCode=0 Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.890683 4789 generic.go:334] "Generic (PLEG): container finished" podID="c08255d0-1dd6-4556-8f30-65367b7739f7" containerID="711efcab439843aaeb94f91469a73186433bd21cfd9a9a56c0f9006d0ae1c9be" exitCode=0 Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.890723 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-64bb487f87-44hz8" event={"ID":"c08255d0-1dd6-4556-8f30-65367b7739f7","Type":"ContainerDied","Data":"7d439ddc975b276958da145eaf095401f24feaac00038ca172395cfdde929e83"} Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.890749 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-64bb487f87-44hz8" event={"ID":"c08255d0-1dd6-4556-8f30-65367b7739f7","Type":"ContainerDied","Data":"711efcab439843aaeb94f91469a73186433bd21cfd9a9a56c0f9006d0ae1c9be"} Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.891778 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-725d-account-create-update-6nrvh" event={"ID":"c7b70ce2-ef37-4547-9c3c-7be3ec4b02c8","Type":"ContainerStarted","Data":"c2983a3db760b9aa4b6d60ff43228b72d9c6eca9176950d85a8954d033e6f370"} Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.901927 
4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c55d3f19-edf8-4cff-ab70-495607e77798/ovsdbserver-sb/0.log" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.902085 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.906016 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c55d3f19-edf8-4cff-ab70-495607e77798","Type":"ContainerDied","Data":"7ec1a310034257a2df09caaa21c724be8dc3217fad77778a34bb55891dbc4ebc"} Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.912748 4789 scope.go:117] "RemoveContainer" containerID="44f9eeaf0ad5b799a8a00f9424dda2c0646881c42b4eba10f3cf88542d1f3cbe" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.915743 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ac1d-account-create-update-rvstr" event={"ID":"1821366b-85cb-419f-9c57-9014300724be","Type":"ContainerStarted","Data":"5cc091e69650107d52b98902ffc02640ad214738be467300f40730cf122182b5"} Feb 02 21:43:07 crc kubenswrapper[4789]: E0202 21:43:07.919082 4789 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 02 21:43:07 crc kubenswrapper[4789]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Feb 02 21:43:07 crc kubenswrapper[4789]: Feb 02 21:43:07 crc kubenswrapper[4789]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 02 21:43:07 crc kubenswrapper[4789]: Feb 02 21:43:07 crc kubenswrapper[4789]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 02 21:43:07 crc kubenswrapper[4789]: Feb 02 21:43:07 crc kubenswrapper[4789]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 02 21:43:07 crc kubenswrapper[4789]: Feb 02 21:43:07 crc kubenswrapper[4789]: if [ -n "nova_api" ]; then Feb 02 21:43:07 crc kubenswrapper[4789]: GRANT_DATABASE="nova_api" Feb 02 21:43:07 crc kubenswrapper[4789]: else Feb 02 21:43:07 crc kubenswrapper[4789]: GRANT_DATABASE="*" Feb 02 21:43:07 crc kubenswrapper[4789]: fi Feb 02 21:43:07 crc kubenswrapper[4789]: Feb 02 21:43:07 crc kubenswrapper[4789]: # going for maximum compatibility here: Feb 02 21:43:07 crc kubenswrapper[4789]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 02 21:43:07 crc kubenswrapper[4789]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 02 21:43:07 crc kubenswrapper[4789]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Feb 02 21:43:07 crc kubenswrapper[4789]: # support updates Feb 02 21:43:07 crc kubenswrapper[4789]: Feb 02 21:43:07 crc kubenswrapper[4789]: $MYSQL_CMD < logger="UnhandledError" Feb 02 21:43:07 crc kubenswrapper[4789]: E0202 21:43:07.920629 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-ac1d-account-create-update-rvstr" podUID="1821366b-85cb-419f-9c57-9014300724be" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.943998 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_01c5293c-f7b0-4141-99a7-e423de507b87/ovsdbserver-nb/0.log" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.944090 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.952294 4789 scope.go:117] "RemoveContainer" containerID="e01772d808decb3380bc4d332c0752aeaf67cb8f5d0c5c9b2c8ae0ab15d89550" Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.961115 4789 generic.go:334] "Generic (PLEG): container finished" podID="87f6bccb-d5fc-4868-aca2-734d16898805" containerID="758668fe2c5ee9470a7c3aa0b9a80c8ff6b3ee015da4b7aab90845bdc8131fbe" exitCode=0 Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.961141 4789 generic.go:334] "Generic (PLEG): container finished" podID="87f6bccb-d5fc-4868-aca2-734d16898805" containerID="41f66ea30afde5a33d387e2cc7b5c5ed11aef0e66a8afd458c8af299945c2460" exitCode=0 Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.961150 4789 generic.go:334] "Generic (PLEG): container finished" podID="87f6bccb-d5fc-4868-aca2-734d16898805" containerID="aab045fa01e8633951d3b23cb6099a13479fc7bde9e851b10aeb53ad724f1a5a" exitCode=0 Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.961158 4789 generic.go:334] "Generic (PLEG): container finished" podID="87f6bccb-d5fc-4868-aca2-734d16898805" containerID="f8710e800cb558add663bfff070701d51801997c411687aea039144baf3f407d" exitCode=0 Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.961166 4789 generic.go:334] "Generic (PLEG): container finished" podID="87f6bccb-d5fc-4868-aca2-734d16898805" containerID="b07c3c791de729e8c85f1895c49db2a43d74603b713f577900b8371d9d871050" exitCode=0 Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.961173 4789 generic.go:334] "Generic (PLEG): container finished" podID="87f6bccb-d5fc-4868-aca2-734d16898805" containerID="db66ce76b54133027343e52fa4a37bee9603c2a78eccea429cb9107f7f66533b" exitCode=0 Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.961212 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"87f6bccb-d5fc-4868-aca2-734d16898805","Type":"ContainerDied","Data":"758668fe2c5ee9470a7c3aa0b9a80c8ff6b3ee015da4b7aab90845bdc8131fbe"} Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.961238 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"87f6bccb-d5fc-4868-aca2-734d16898805","Type":"ContainerDied","Data":"41f66ea30afde5a33d387e2cc7b5c5ed11aef0e66a8afd458c8af299945c2460"} Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.961259 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"87f6bccb-d5fc-4868-aca2-734d16898805","Type":"ContainerDied","Data":"aab045fa01e8633951d3b23cb6099a13479fc7bde9e851b10aeb53ad724f1a5a"} Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.961269 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"87f6bccb-d5fc-4868-aca2-734d16898805","Type":"ContainerDied","Data":"f8710e800cb558add663bfff070701d51801997c411687aea039144baf3f407d"} Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.961278 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"87f6bccb-d5fc-4868-aca2-734d16898805","Type":"ContainerDied","Data":"b07c3c791de729e8c85f1895c49db2a43d74603b713f577900b8371d9d871050"} Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.961286 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"87f6bccb-d5fc-4868-aca2-734d16898805","Type":"ContainerDied","Data":"db66ce76b54133027343e52fa4a37bee9603c2a78eccea429cb9107f7f66533b"} Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.964289 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-r4rd5"] Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.965456 4789 generic.go:334] "Generic (PLEG): container finished" podID="5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1" containerID="515297fe8dbc3fc649d583e30d1f7a1830bea72e21b40dc9d104ef3455ab5cb1" exitCode=143 Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.965495 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-959f7f8c5-pmqjf" event={"ID":"5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1","Type":"ContainerDied","Data":"515297fe8dbc3fc649d583e30d1f7a1830bea72e21b40dc9d104ef3455ab5cb1"} Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.974353 4789 generic.go:334] "Generic (PLEG): container finished" podID="78b23a1f-cc85-4767-b19c-6069adfc735a" containerID="299b4734565096b1be6400a79e47dcc680e20c6351889626bc796a381f662a16" exitCode=0 Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.974404 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c4994f5f-462kb" event={"ID":"78b23a1f-cc85-4767-b19c-6069adfc735a","Type":"ContainerDied","Data":"299b4734565096b1be6400a79e47dcc680e20c6351889626bc796a381f662a16"} Feb 02 21:43:07 crc kubenswrapper[4789]: I0202 21:43:07.989480 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-r4rd5"] Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.011828 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6d964c7466-fpqld" event={"ID":"802bda4f-2363-4ca6-a126-2ccf1448ed71","Type":"ContainerDied","Data":"0fe697a1f2000589c5ab93c3c47f9c76ebfb685c854fd86b08766edfb2d1a375"} Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.011831 4789 generic.go:334] "Generic (PLEG): container finished" podID="802bda4f-2363-4ca6-a126-2ccf1448ed71" containerID="0fe697a1f2000589c5ab93c3c47f9c76ebfb685c854fd86b08766edfb2d1a375" exitCode=143 Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.012063 4789 scope.go:117] "RemoveContainer" containerID="7cf11c42fa6eee3581592e7cf6d8ad9c5bdb09ef4d82cebd87fe73a6989bc478" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.015929 4789 generic.go:334] "Generic (PLEG): container finished" podID="7acbb536-0a08-4132-a84a-848735b0e7f4" containerID="c6597dc6aaeaebf47e22acb882e2ae643e5ed20e86abaacc9a1e3bf64ebb15a3" exitCode=143 Feb 02 21:43:08 crc 
kubenswrapper[4789]: I0202 21:43:08.016462 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/01c5293c-f7b0-4141-99a7-e423de507b87-ovsdbserver-nb-tls-certs\") pod \"01c5293c-f7b0-4141-99a7-e423de507b87\" (UID: \"01c5293c-f7b0-4141-99a7-e423de507b87\") " Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.016544 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/01c5293c-f7b0-4141-99a7-e423de507b87-ovsdb-rundir\") pod \"01c5293c-f7b0-4141-99a7-e423de507b87\" (UID: \"01c5293c-f7b0-4141-99a7-e423de507b87\") " Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.016591 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01c5293c-f7b0-4141-99a7-e423de507b87-combined-ca-bundle\") pod \"01c5293c-f7b0-4141-99a7-e423de507b87\" (UID: \"01c5293c-f7b0-4141-99a7-e423de507b87\") " Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.016683 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"01c5293c-f7b0-4141-99a7-e423de507b87\" (UID: \"01c5293c-f7b0-4141-99a7-e423de507b87\") " Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.016768 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01c5293c-f7b0-4141-99a7-e423de507b87-config\") pod \"01c5293c-f7b0-4141-99a7-e423de507b87\" (UID: \"01c5293c-f7b0-4141-99a7-e423de507b87\") " Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.016841 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/01c5293c-f7b0-4141-99a7-e423de507b87-metrics-certs-tls-certs\") pod \"01c5293c-f7b0-4141-99a7-e423de507b87\" (UID: \"01c5293c-f7b0-4141-99a7-e423de507b87\") " Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.016990 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgkdz\" (UniqueName: \"kubernetes.io/projected/01c5293c-f7b0-4141-99a7-e423de507b87-kube-api-access-vgkdz\") pod \"01c5293c-f7b0-4141-99a7-e423de507b87\" (UID: \"01c5293c-f7b0-4141-99a7-e423de507b87\") " Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.017035 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/01c5293c-f7b0-4141-99a7-e423de507b87-scripts\") pod \"01c5293c-f7b0-4141-99a7-e423de507b87\" (UID: \"01c5293c-f7b0-4141-99a7-e423de507b87\") " Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.018146 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7acbb536-0a08-4132-a84a-848735b0e7f4","Type":"ContainerDied","Data":"c6597dc6aaeaebf47e22acb882e2ae643e5ed20e86abaacc9a1e3bf64ebb15a3"} Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.029022 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01c5293c-f7b0-4141-99a7-e423de507b87-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "01c5293c-f7b0-4141-99a7-e423de507b87" (UID: "01c5293c-f7b0-4141-99a7-e423de507b87"). InnerVolumeSpecName "ovsdb-rundir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.030319 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.030554 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01c5293c-f7b0-4141-99a7-e423de507b87-scripts" (OuterVolumeSpecName: "scripts") pod "01c5293c-f7b0-4141-99a7-e423de507b87" (UID: "01c5293c-f7b0-4141-99a7-e423de507b87"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.030842 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "01c5293c-f7b0-4141-99a7-e423de507b87" (UID: "01c5293c-f7b0-4141-99a7-e423de507b87"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.032573 4789 generic.go:334] "Generic (PLEG): container finished" podID="7d53e4c0-add2-4cfd-bbea-e0a1d3196091" containerID="4d137886e123097c6077816303161de8f1beb2278c8b0ec65bb058b0d9f03c90" exitCode=143 Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.032716 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b8b9b54f6-jfnqs" event={"ID":"7d53e4c0-add2-4cfd-bbea-e0a1d3196091","Type":"ContainerDied","Data":"4d137886e123097c6077816303161de8f1beb2278c8b0ec65bb058b0d9f03c90"} Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.035417 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c430-account-create-update-mtkw4" event={"ID":"151bf5b0-b174-42a9-8a0a-f650d74ec2a3","Type":"ContainerStarted","Data":"46d738ccb8938e81ad7d864bee88b5b0a7da2d123ab63fb5d6eecef4a915a72b"} Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.038093 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01c5293c-f7b0-4141-99a7-e423de507b87-kube-api-access-vgkdz" (OuterVolumeSpecName: "kube-api-access-vgkdz") pod "01c5293c-f7b0-4141-99a7-e423de507b87" (UID: "01c5293c-f7b0-4141-99a7-e423de507b87"). InnerVolumeSpecName "kube-api-access-vgkdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.042780 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01c5293c-f7b0-4141-99a7-e423de507b87-config" (OuterVolumeSpecName: "config") pod "01c5293c-f7b0-4141-99a7-e423de507b87" (UID: "01c5293c-f7b0-4141-99a7-e423de507b87"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:43:08 crc kubenswrapper[4789]: E0202 21:43:08.052418 4789 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 02 21:43:08 crc kubenswrapper[4789]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Feb 02 21:43:08 crc kubenswrapper[4789]: Feb 02 21:43:08 crc kubenswrapper[4789]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 02 21:43:08 crc kubenswrapper[4789]: Feb 02 21:43:08 crc kubenswrapper[4789]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 02 21:43:08 crc kubenswrapper[4789]: Feb 02 21:43:08 crc kubenswrapper[4789]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 02 21:43:08 crc kubenswrapper[4789]: Feb 02 21:43:08 crc kubenswrapper[4789]: if [ -n "nova_cell1" ]; then Feb 02 21:43:08 crc kubenswrapper[4789]: GRANT_DATABASE="nova_cell1" Feb 02 21:43:08 crc kubenswrapper[4789]: else Feb 02 21:43:08 crc kubenswrapper[4789]: GRANT_DATABASE="*" Feb 02 21:43:08 crc kubenswrapper[4789]: fi Feb 02 21:43:08 crc kubenswrapper[4789]: Feb 02 21:43:08 crc kubenswrapper[4789]: # going for maximum compatibility here: Feb 02 21:43:08 crc kubenswrapper[4789]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 02 21:43:08 crc kubenswrapper[4789]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 02 21:43:08 crc kubenswrapper[4789]: # 3. create user with CREATE but then do all password and TLS with ALTER to Feb 02 21:43:08 crc kubenswrapper[4789]: # support updates Feb 02 21:43:08 crc kubenswrapper[4789]: Feb 02 21:43:08 crc kubenswrapper[4789]: $MYSQL_CMD < logger="UnhandledError" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.053043 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-h7zb6" event={"ID":"306f2aaf-92ed-4c14-92f4-a970a8240771","Type":"ContainerStarted","Data":"a0fe3dde3cc43bc0d1319bffd7a71533f90a6dc9188661e056ed045674ac6da4"} Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.053115 4789 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/root-account-create-update-h7zb6" secret="" err="secret \"galera-openstack-cell1-dockercfg-96qfl\" not found" Feb 02 21:43:08 crc kubenswrapper[4789]: E0202 21:43:08.053483 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell1-db-secret\\\" not found\"" pod="openstack/nova-cell1-3f0a-account-create-update-qc5vb" podUID="ef1040d3-c638-4098-a2ef-ce507371853e" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.058304 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 02 21:43:08 crc kubenswrapper[4789]: E0202 21:43:08.076451 4789 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 02 21:43:08 crc kubenswrapper[4789]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Feb 02 21:43:08 crc kubenswrapper[4789]: Feb 02 21:43:08 crc kubenswrapper[4789]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 02 21:43:08 crc kubenswrapper[4789]: Feb 02 21:43:08 crc kubenswrapper[4789]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 02 21:43:08 crc kubenswrapper[4789]: Feb 02 21:43:08 crc kubenswrapper[4789]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 02 21:43:08 crc kubenswrapper[4789]: Feb 02 21:43:08 crc kubenswrapper[4789]: if [ -n "" ]; then Feb 02 21:43:08 crc kubenswrapper[4789]: GRANT_DATABASE="" Feb 02 21:43:08 crc kubenswrapper[4789]: else Feb 02 21:43:08 crc kubenswrapper[4789]: GRANT_DATABASE="*" Feb 02 21:43:08 crc kubenswrapper[4789]: fi Feb 02 21:43:08 crc kubenswrapper[4789]: Feb 02 21:43:08 crc kubenswrapper[4789]: # going for maximum compatibility here: Feb 02 21:43:08 crc kubenswrapper[4789]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 02 21:43:08 crc kubenswrapper[4789]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 02 21:43:08 crc kubenswrapper[4789]: # 3. create user with CREATE but then do all password and TLS with ALTER to Feb 02 21:43:08 crc kubenswrapper[4789]: # support updates Feb 02 21:43:08 crc kubenswrapper[4789]: Feb 02 21:43:08 crc kubenswrapper[4789]: $MYSQL_CMD < logger="UnhandledError" Feb 02 21:43:08 crc kubenswrapper[4789]: E0202 21:43:08.077674 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-h7zb6" podUID="306f2aaf-92ed-4c14-92f4-a970a8240771" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.080941 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01c5293c-f7b0-4141-99a7-e423de507b87-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01c5293c-f7b0-4141-99a7-e423de507b87" (UID: "01c5293c-f7b0-4141-99a7-e423de507b87"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.082896 4789 generic.go:334] "Generic (PLEG): container finished" podID="57c9f301-615a-4182-b17e-3ae250e8335c" containerID="ecc06c5902aa50d55c9a1d5a9d91397ab8aa463f6ac87ac09a03b387026f2890" exitCode=0 Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.083004 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"57c9f301-615a-4182-b17e-3ae250e8335c","Type":"ContainerDied","Data":"ecc06c5902aa50d55c9a1d5a9d91397ab8aa463f6ac87ac09a03b387026f2890"} Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.116742 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-3f0a-account-create-update-qc5vb"] Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.117372 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.118825 4789 scope.go:117] "RemoveContainer" containerID="80ee62a2d791f82f667128eb01b609adcf2ee71d4a2647cc5abe16482c589540" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.125038 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgkdz\" (UniqueName: \"kubernetes.io/projected/01c5293c-f7b0-4141-99a7-e423de507b87-kube-api-access-vgkdz\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.125079 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/01c5293c-f7b0-4141-99a7-e423de507b87-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.125089 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/01c5293c-f7b0-4141-99a7-e423de507b87-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.125100 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01c5293c-f7b0-4141-99a7-e423de507b87-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.125124 4789 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.125151 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01c5293c-f7b0-4141-99a7-e423de507b87-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:08 crc kubenswrapper[4789]: E0202 21:43:08.125819 4789 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Feb 02 21:43:08 crc kubenswrapper[4789]: E0202 21:43:08.125876 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/306f2aaf-92ed-4c14-92f4-a970a8240771-operator-scripts podName:306f2aaf-92ed-4c14-92f4-a970a8240771 nodeName:}" failed. No retries permitted until 2026-02-02 21:43:08.62585975 +0000 UTC m=+1408.920884759 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/306f2aaf-92ed-4c14-92f4-a970a8240771-operator-scripts") pod "root-account-create-update-h7zb6" (UID: "306f2aaf-92ed-4c14-92f4-a970a8240771") : configmap "openstack-cell1-scripts" not found Feb 02 21:43:08 crc kubenswrapper[4789]: E0202 21:43:08.126140 4789 secret.go:188] Couldn't get secret openstack/nova-cell1-conductor-config-data: secret "nova-cell1-conductor-config-data" not found Feb 02 21:43:08 crc kubenswrapper[4789]: E0202 21:43:08.126179 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/399d9417-2065-4e92-89c5-a04dbeaf2cca-config-data podName:399d9417-2065-4e92-89c5-a04dbeaf2cca nodeName:}" failed. No retries permitted until 2026-02-02 21:43:10.126171209 +0000 UTC m=+1410.421196228 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/399d9417-2065-4e92-89c5-a04dbeaf2cca-config-data") pod "nova-cell1-conductor-0" (UID: "399d9417-2065-4e92-89c5-a04dbeaf2cca") : secret "nova-cell1-conductor-config-data" not found Feb 02 21:43:08 crc kubenswrapper[4789]: E0202 21:43:08.126211 4789 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Feb 02 21:43:08 crc kubenswrapper[4789]: E0202 21:43:08.126230 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b4db4b23-dae0-42a5-ad47-3336073d0b6a-config-data podName:b4db4b23-dae0-42a5-ad47-3336073d0b6a nodeName:}" failed. No retries permitted until 2026-02-02 21:43:12.126224331 +0000 UTC m=+1412.421249350 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b4db4b23-dae0-42a5-ad47-3336073d0b6a-config-data") pod "rabbitmq-server-0" (UID: "b4db4b23-dae0-42a5-ad47-3336073d0b6a") : configmap "rabbitmq-config-data" not found Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.128107 4789 generic.go:334] "Generic (PLEG): container finished" podID="1ae097e7-380b-4044-8598-abc3e1059356" containerID="2234c362242e0356a4e9c41d9d9c119ece3aa80e6631194820c7f16fcb2df8fa" exitCode=143 Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.128170 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1ae097e7-380b-4044-8598-abc3e1059356","Type":"ContainerDied","Data":"2234c362242e0356a4e9c41d9d9c119ece3aa80e6631194820c7f16fcb2df8fa"} Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.141206 4789 generic.go:334] "Generic (PLEG): container finished" podID="0ba13473-b423-43a0-ab15-9d6be616cc7b" containerID="175ef66ad8a23cf5090dc4289e18344c14f9e8edd0b77edfc81aaff9cd62283b" exitCode=143 Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.141313 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0ba13473-b423-43a0-ab15-9d6be616cc7b","Type":"ContainerDied","Data":"175ef66ad8a23cf5090dc4289e18344c14f9e8edd0b77edfc81aaff9cd62283b"} Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.149090 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nn5kg" event={"ID":"0bf87933-483d-4608-9fab-9f0cfa9fb326","Type":"ContainerStarted","Data":"663731cad30dfe107241ae97eb0f5c1d3dd773984c0b7800fda4945d05ae1793"} Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.157094 4789 generic.go:334] "Generic (PLEG): container finished" podID="f9d0bd72-572d-4b90-b747-f37b490b3e4a" 
containerID="3526b18b1cdfe6a5a1f276e6a5cb128388504d6bdbc6ed184e29a01812ab1266" exitCode=0 Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.157149 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f9d0bd72-572d-4b90-b747-f37b490b3e4a","Type":"ContainerDied","Data":"3526b18b1cdfe6a5a1f276e6a5cb128388504d6bdbc6ed184e29a01812ab1266"} Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.158082 4789 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.174458 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nn5kg" podStartSLOduration=3.876063764 podStartE2EDuration="8.174438766s" podCreationTimestamp="2026-02-02 21:43:00 +0000 UTC" firstStartedPulling="2026-02-02 21:43:02.196118161 +0000 UTC m=+1402.491143170" lastFinishedPulling="2026-02-02 21:43:06.494493153 +0000 UTC m=+1406.789518172" observedRunningTime="2026-02-02 21:43:08.167566981 +0000 UTC m=+1408.462592000" watchObservedRunningTime="2026-02-02 21:43:08.174438766 +0000 UTC m=+1408.469463785" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.176400 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01c5293c-f7b0-4141-99a7-e423de507b87-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "01c5293c-f7b0-4141-99a7-e423de507b87" (UID: "01c5293c-f7b0-4141-99a7-e423de507b87"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.203035 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 21:43:08 crc kubenswrapper[4789]: E0202 21:43:08.226783 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c5300b0e812632a91b74ba2ab909199288368ea0a90d2cfd2fd7fbf3551f6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.227167 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-64bb487f87-44hz8" Feb 02 21:43:08 crc kubenswrapper[4789]: E0202 21:43:08.227239 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10 is running failed: container process not found" containerID="17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.228227 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9d0bd72-572d-4b90-b747-f37b490b3e4a-combined-ca-bundle\") pod \"f9d0bd72-572d-4b90-b747-f37b490b3e4a\" (UID: \"f9d0bd72-572d-4b90-b747-f37b490b3e4a\") " Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.228313 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9d0bd72-572d-4b90-b747-f37b490b3e4a-config-data\") pod \"f9d0bd72-572d-4b90-b747-f37b490b3e4a\" (UID: \"f9d0bd72-572d-4b90-b747-f37b490b3e4a\") " Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.228345 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9d0bd72-572d-4b90-b747-f37b490b3e4a-nova-novncproxy-tls-certs\") pod \"f9d0bd72-572d-4b90-b747-f37b490b3e4a\" (UID: \"f9d0bd72-572d-4b90-b747-f37b490b3e4a\") " Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.228378 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9d0bd72-572d-4b90-b747-f37b490b3e4a-vencrypt-tls-certs\") pod \"f9d0bd72-572d-4b90-b747-f37b490b3e4a\" (UID: \"f9d0bd72-572d-4b90-b747-f37b490b3e4a\") " Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.228437 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hshqq\" (UniqueName: \"kubernetes.io/projected/f9d0bd72-572d-4b90-b747-f37b490b3e4a-kube-api-access-hshqq\") pod \"f9d0bd72-572d-4b90-b747-f37b490b3e4a\" (UID: \"f9d0bd72-572d-4b90-b747-f37b490b3e4a\") " Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.228977 4789 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/01c5293c-f7b0-4141-99a7-e423de507b87-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.228990 4789 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:08 crc kubenswrapper[4789]: E0202 21:43:08.229109 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10 is running failed: container process not found" containerID="17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 02 21:43:08 crc kubenswrapper[4789]: E0202 21:43:08.230051 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec 
PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c5300b0e812632a91b74ba2ab909199288368ea0a90d2cfd2fd7fbf3551f6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.230084 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01c5293c-f7b0-4141-99a7-e423de507b87-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "01c5293c-f7b0-4141-99a7-e423de507b87" (UID: "01c5293c-f7b0-4141-99a7-e423de507b87"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:08 crc kubenswrapper[4789]: E0202 21:43:08.230152 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10 is running failed: container process not found" containerID="17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 02 21:43:08 crc kubenswrapper[4789]: E0202 21:43:08.230188 4789 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-tjn59" podUID="0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1" containerName="ovsdb-server" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.245543 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9d0bd72-572d-4b90-b747-f37b490b3e4a-kube-api-access-hshqq" (OuterVolumeSpecName: "kube-api-access-hshqq") pod "f9d0bd72-572d-4b90-b747-f37b490b3e4a" (UID: "f9d0bd72-572d-4b90-b747-f37b490b3e4a"). InnerVolumeSpecName "kube-api-access-hshqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:08 crc kubenswrapper[4789]: E0202 21:43:08.253882 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c5300b0e812632a91b74ba2ab909199288368ea0a90d2cfd2fd7fbf3551f6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 02 21:43:08 crc kubenswrapper[4789]: E0202 21:43:08.253951 4789 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-tjn59" podUID="0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1" containerName="ovs-vswitchd" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.275484 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9d0bd72-572d-4b90-b747-f37b490b3e4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9d0bd72-572d-4b90-b747-f37b490b3e4a" (UID: "f9d0bd72-572d-4b90-b747-f37b490b3e4a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.318344 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-725d-account-create-update-6nrvh" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.318742 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9d0bd72-572d-4b90-b747-f37b490b3e4a-config-data" (OuterVolumeSpecName: "config-data") pod "f9d0bd72-572d-4b90-b747-f37b490b3e4a" (UID: "f9d0bd72-572d-4b90-b747-f37b490b3e4a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.330283 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c08255d0-1dd6-4556-8f30-65367b7739f7-etc-swift\") pod \"c08255d0-1dd6-4556-8f30-65367b7739f7\" (UID: \"c08255d0-1dd6-4556-8f30-65367b7739f7\") " Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.330368 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c08255d0-1dd6-4556-8f30-65367b7739f7-run-httpd\") pod \"c08255d0-1dd6-4556-8f30-65367b7739f7\" (UID: \"c08255d0-1dd6-4556-8f30-65367b7739f7\") " Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.330403 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c08255d0-1dd6-4556-8f30-65367b7739f7-log-httpd\") pod \"c08255d0-1dd6-4556-8f30-65367b7739f7\" (UID: \"c08255d0-1dd6-4556-8f30-65367b7739f7\") " Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.330426 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c08255d0-1dd6-4556-8f30-65367b7739f7-combined-ca-bundle\") pod \"c08255d0-1dd6-4556-8f30-65367b7739f7\" (UID: \"c08255d0-1dd6-4556-8f30-65367b7739f7\") " Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.330442 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c08255d0-1dd6-4556-8f30-65367b7739f7-public-tls-certs\") pod \"c08255d0-1dd6-4556-8f30-65367b7739f7\" (UID: \"c08255d0-1dd6-4556-8f30-65367b7739f7\") " Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.330463 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7b70ce2-ef37-4547-9c3c-7be3ec4b02c8-operator-scripts\") pod \"c7b70ce2-ef37-4547-9c3c-7be3ec4b02c8\" (UID: \"c7b70ce2-ef37-4547-9c3c-7be3ec4b02c8\") " Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.330506 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c08255d0-1dd6-4556-8f30-65367b7739f7-internal-tls-certs\") pod \"c08255d0-1dd6-4556-8f30-65367b7739f7\" (UID: \"c08255d0-1dd6-4556-8f30-65367b7739f7\") " Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.330569 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c08255d0-1dd6-4556-8f30-65367b7739f7-config-data\") pod \"c08255d0-1dd6-4556-8f30-65367b7739f7\" (UID: \"c08255d0-1dd6-4556-8f30-65367b7739f7\") " Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.330613 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxwlc\" (UniqueName: 
\"kubernetes.io/projected/c08255d0-1dd6-4556-8f30-65367b7739f7-kube-api-access-rxwlc\") pod \"c08255d0-1dd6-4556-8f30-65367b7739f7\" (UID: \"c08255d0-1dd6-4556-8f30-65367b7739f7\") " Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.330640 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hw6d\" (UniqueName: \"kubernetes.io/projected/c7b70ce2-ef37-4547-9c3c-7be3ec4b02c8-kube-api-access-2hw6d\") pod \"c7b70ce2-ef37-4547-9c3c-7be3ec4b02c8\" (UID: \"c7b70ce2-ef37-4547-9c3c-7be3ec4b02c8\") " Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.331044 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9d0bd72-572d-4b90-b747-f37b490b3e4a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.331060 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9d0bd72-572d-4b90-b747-f37b490b3e4a-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.331069 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hshqq\" (UniqueName: \"kubernetes.io/projected/f9d0bd72-572d-4b90-b747-f37b490b3e4a-kube-api-access-hshqq\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.331080 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/01c5293c-f7b0-4141-99a7-e423de507b87-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.332204 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c08255d0-1dd6-4556-8f30-65367b7739f7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c08255d0-1dd6-4556-8f30-65367b7739f7" (UID: "c08255d0-1dd6-4556-8f30-65367b7739f7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.332329 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7b70ce2-ef37-4547-9c3c-7be3ec4b02c8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c7b70ce2-ef37-4547-9c3c-7be3ec4b02c8" (UID: "c7b70ce2-ef37-4547-9c3c-7be3ec4b02c8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.334494 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c08255d0-1dd6-4556-8f30-65367b7739f7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c08255d0-1dd6-4556-8f30-65367b7739f7" (UID: "c08255d0-1dd6-4556-8f30-65367b7739f7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.337277 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7b70ce2-ef37-4547-9c3c-7be3ec4b02c8-kube-api-access-2hw6d" (OuterVolumeSpecName: "kube-api-access-2hw6d") pod "c7b70ce2-ef37-4547-9c3c-7be3ec4b02c8" (UID: "c7b70ce2-ef37-4547-9c3c-7be3ec4b02c8"). InnerVolumeSpecName "kube-api-access-2hw6d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.337535 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c08255d0-1dd6-4556-8f30-65367b7739f7-kube-api-access-rxwlc" (OuterVolumeSpecName: "kube-api-access-rxwlc") pod "c08255d0-1dd6-4556-8f30-65367b7739f7" (UID: "c08255d0-1dd6-4556-8f30-65367b7739f7"). InnerVolumeSpecName "kube-api-access-rxwlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.338611 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9d0bd72-572d-4b90-b747-f37b490b3e4a-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "f9d0bd72-572d-4b90-b747-f37b490b3e4a" (UID: "f9d0bd72-572d-4b90-b747-f37b490b3e4a"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.339328 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c08255d0-1dd6-4556-8f30-65367b7739f7-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "c08255d0-1dd6-4556-8f30-65367b7739f7" (UID: "c08255d0-1dd6-4556-8f30-65367b7739f7"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.347016 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9d0bd72-572d-4b90-b747-f37b490b3e4a-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "f9d0bd72-572d-4b90-b747-f37b490b3e4a" (UID: "f9d0bd72-572d-4b90-b747-f37b490b3e4a"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.393087 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c08255d0-1dd6-4556-8f30-65367b7739f7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c08255d0-1dd6-4556-8f30-65367b7739f7" (UID: "c08255d0-1dd6-4556-8f30-65367b7739f7"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.405076 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-c430-account-create-update-mtkw4" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.412743 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c08255d0-1dd6-4556-8f30-65367b7739f7-config-data" (OuterVolumeSpecName: "config-data") pod "c08255d0-1dd6-4556-8f30-65367b7739f7" (UID: "c08255d0-1dd6-4556-8f30-65367b7739f7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.425725 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c08255d0-1dd6-4556-8f30-65367b7739f7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c08255d0-1dd6-4556-8f30-65367b7739f7" (UID: "c08255d0-1dd6-4556-8f30-65367b7739f7"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.428815 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c08255d0-1dd6-4556-8f30-65367b7739f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c08255d0-1dd6-4556-8f30-65367b7739f7" (UID: "c08255d0-1dd6-4556-8f30-65367b7739f7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.431736 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkqgw\" (UniqueName: \"kubernetes.io/projected/151bf5b0-b174-42a9-8a0a-f650d74ec2a3-kube-api-access-nkqgw\") pod \"151bf5b0-b174-42a9-8a0a-f650d74ec2a3\" (UID: \"151bf5b0-b174-42a9-8a0a-f650d74ec2a3\") " Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.431831 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/151bf5b0-b174-42a9-8a0a-f650d74ec2a3-operator-scripts\") pod \"151bf5b0-b174-42a9-8a0a-f650d74ec2a3\" (UID: \"151bf5b0-b174-42a9-8a0a-f650d74ec2a3\") " Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.432352 4789 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c08255d0-1dd6-4556-8f30-65367b7739f7-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.432370 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c08255d0-1dd6-4556-8f30-65367b7739f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.432380 4789 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c08255d0-1dd6-4556-8f30-65367b7739f7-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.432389 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7b70ce2-ef37-4547-9c3c-7be3ec4b02c8-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.432398 4789 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c08255d0-1dd6-4556-8f30-65367b7739f7-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.432408 4789 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9d0bd72-572d-4b90-b747-f37b490b3e4a-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.432433 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c08255d0-1dd6-4556-8f30-65367b7739f7-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.432441 4789 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9d0bd72-572d-4b90-b747-f37b490b3e4a-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.432450 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxwlc\" (UniqueName: 
\"kubernetes.io/projected/c08255d0-1dd6-4556-8f30-65367b7739f7-kube-api-access-rxwlc\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.432458 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hw6d\" (UniqueName: \"kubernetes.io/projected/c7b70ce2-ef37-4547-9c3c-7be3ec4b02c8-kube-api-access-2hw6d\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.432465 4789 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c08255d0-1dd6-4556-8f30-65367b7739f7-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.432473 4789 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c08255d0-1dd6-4556-8f30-65367b7739f7-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.432511 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/151bf5b0-b174-42a9-8a0a-f650d74ec2a3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "151bf5b0-b174-42a9-8a0a-f650d74ec2a3" (UID: "151bf5b0-b174-42a9-8a0a-f650d74ec2a3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.434463 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/151bf5b0-b174-42a9-8a0a-f650d74ec2a3-kube-api-access-nkqgw" (OuterVolumeSpecName: "kube-api-access-nkqgw") pod "151bf5b0-b174-42a9-8a0a-f650d74ec2a3" (UID: "151bf5b0-b174-42a9-8a0a-f650d74ec2a3"). InnerVolumeSpecName "kube-api-access-nkqgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.439109 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05dabff6-c489-4c3a-9030-4206f14e27fd" path="/var/lib/kubelet/pods/05dabff6-c489-4c3a-9030-4206f14e27fd/volumes" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.439928 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23e4386a-307f-4ab1-bac5-fb2260dff5ec" path="/var/lib/kubelet/pods/23e4386a-307f-4ab1-bac5-fb2260dff5ec/volumes" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.440665 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e82084e-a68b-4e41-9d23-8888ab97e53e" path="/var/lib/kubelet/pods/2e82084e-a68b-4e41-9d23-8888ab97e53e/volumes" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.441597 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30f5a928-4b52-4eef-acb5-7748decc816e" path="/var/lib/kubelet/pods/30f5a928-4b52-4eef-acb5-7748decc816e/volumes" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.442050 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="314d39fc-1687-4ed2-bac5-8ed19ba3cdab" path="/var/lib/kubelet/pods/314d39fc-1687-4ed2-bac5-8ed19ba3cdab/volumes" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.442565 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3addc62a-b5f6-4e9e-9d32-6f4f9d5550b7" path="/var/lib/kubelet/pods/3addc62a-b5f6-4e9e-9d32-6f4f9d5550b7/volumes" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.443039 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f69804c-fec9-4fea-a392-b9a2a54b1155" 
path="/var/lib/kubelet/pods/3f69804c-fec9-4fea-a392-b9a2a54b1155/volumes" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.443934 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a1e8a56-47de-4a5e-b4f6-389ebf616658" path="/var/lib/kubelet/pods/4a1e8a56-47de-4a5e-b4f6-389ebf616658/volumes" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.445094 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cd6e7ff-266d-4288-9df4-dc22dbe5f19d" path="/var/lib/kubelet/pods/7cd6e7ff-266d-4288-9df4-dc22dbe5f19d/volumes" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.446594 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d80bd6c-a1d7-4ee6-bc8c-8e534d7bfa1a" path="/var/lib/kubelet/pods/7d80bd6c-a1d7-4ee6-bc8c-8e534d7bfa1a/volumes" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.447477 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a55dfeeb-4219-4e3f-834b-0f4de4381c96" path="/var/lib/kubelet/pods/a55dfeeb-4219-4e3f-834b-0f4de4381c96/volumes" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.448114 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c55d3f19-edf8-4cff-ab70-495607e77798" path="/var/lib/kubelet/pods/c55d3f19-edf8-4cff-ab70-495607e77798/volumes" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.448706 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4127fa0-de5d-43ce-b257-46b80eecd670" path="/var/lib/kubelet/pods/d4127fa0-de5d-43ce-b257-46b80eecd670/volumes" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.450253 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db3bfd1d-3fb9-4406-bd26-7b58e943e963" path="/var/lib/kubelet/pods/db3bfd1d-3fb9-4406-bd26-7b58e943e963/volumes" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.451116 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee0798dd-aec6-4d0f-b4e9-efde747097cd" path="/var/lib/kubelet/pods/ee0798dd-aec6-4d0f-b4e9-efde747097cd/volumes" Feb 02 21:43:08 crc kubenswrapper[4789]: E0202 21:43:08.455786 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 82ad90219c4a326128d9fb471414b4e0b74ee26ef1c8f08b2e8b89d720565f03 is running failed: container process not found" containerID="82ad90219c4a326128d9fb471414b4e0b74ee26ef1c8f08b2e8b89d720565f03" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 02 21:43:08 crc kubenswrapper[4789]: E0202 21:43:08.456234 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 82ad90219c4a326128d9fb471414b4e0b74ee26ef1c8f08b2e8b89d720565f03 is running failed: container process not found" containerID="82ad90219c4a326128d9fb471414b4e0b74ee26ef1c8f08b2e8b89d720565f03" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 02 21:43:08 crc kubenswrapper[4789]: E0202 21:43:08.456549 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 82ad90219c4a326128d9fb471414b4e0b74ee26ef1c8f08b2e8b89d720565f03 is running failed: container process not found" containerID="82ad90219c4a326128d9fb471414b4e0b74ee26ef1c8f08b2e8b89d720565f03" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 02 21:43:08 crc 
kubenswrapper[4789]: E0202 21:43:08.456645 4789 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 82ad90219c4a326128d9fb471414b4e0b74ee26ef1c8f08b2e8b89d720565f03 is running failed: container process not found" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="96f4773a-9fa9-41c6-ab4b-54107e66a498" containerName="galera" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.533968 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/151bf5b0-b174-42a9-8a0a-f650d74ec2a3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.533997 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkqgw\" (UniqueName: \"kubernetes.io/projected/151bf5b0-b174-42a9-8a0a-f650d74ec2a3-kube-api-access-nkqgw\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:08 crc kubenswrapper[4789]: E0202 21:43:08.637345 4789 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Feb 02 21:43:08 crc kubenswrapper[4789]: E0202 21:43:08.638020 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/306f2aaf-92ed-4c14-92f4-a970a8240771-operator-scripts podName:306f2aaf-92ed-4c14-92f4-a970a8240771 nodeName:}" failed. No retries permitted until 2026-02-02 21:43:09.638000332 +0000 UTC m=+1409.933025351 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/306f2aaf-92ed-4c14-92f4-a970a8240771-operator-scripts") pod "root-account-create-update-h7zb6" (UID: "306f2aaf-92ed-4c14-92f4-a970a8240771") : configmap "openstack-cell1-scripts" not found Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.691329 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.737962 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/96f4773a-9fa9-41c6-ab4b-54107e66a498-config-data-default\") pod \"96f4773a-9fa9-41c6-ab4b-54107e66a498\" (UID: \"96f4773a-9fa9-41c6-ab4b-54107e66a498\") " Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.738031 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96f4773a-9fa9-41c6-ab4b-54107e66a498-operator-scripts\") pod \"96f4773a-9fa9-41c6-ab4b-54107e66a498\" (UID: \"96f4773a-9fa9-41c6-ab4b-54107e66a498\") " Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.738070 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/96f4773a-9fa9-41c6-ab4b-54107e66a498-galera-tls-certs\") pod \"96f4773a-9fa9-41c6-ab4b-54107e66a498\" (UID: \"96f4773a-9fa9-41c6-ab4b-54107e66a498\") " Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.738107 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/96f4773a-9fa9-41c6-ab4b-54107e66a498-config-data-generated\") pod \"96f4773a-9fa9-41c6-ab4b-54107e66a498\" (UID: \"96f4773a-9fa9-41c6-ab4b-54107e66a498\") " Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.738146 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"96f4773a-9fa9-41c6-ab4b-54107e66a498\" (UID: \"96f4773a-9fa9-41c6-ab4b-54107e66a498\") " Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.738221 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/96f4773a-9fa9-41c6-ab4b-54107e66a498-kolla-config\") pod \"96f4773a-9fa9-41c6-ab4b-54107e66a498\" (UID: \"96f4773a-9fa9-41c6-ab4b-54107e66a498\") " Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.738255 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gc5k4\" (UniqueName: \"kubernetes.io/projected/96f4773a-9fa9-41c6-ab4b-54107e66a498-kube-api-access-gc5k4\") pod \"96f4773a-9fa9-41c6-ab4b-54107e66a498\" (UID: \"96f4773a-9fa9-41c6-ab4b-54107e66a498\") " Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.738302 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f4773a-9fa9-41c6-ab4b-54107e66a498-combined-ca-bundle\") pod \"96f4773a-9fa9-41c6-ab4b-54107e66a498\" (UID: \"96f4773a-9fa9-41c6-ab4b-54107e66a498\") " Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.738677 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96f4773a-9fa9-41c6-ab4b-54107e66a498-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "96f4773a-9fa9-41c6-ab4b-54107e66a498" (UID: "96f4773a-9fa9-41c6-ab4b-54107e66a498"). InnerVolumeSpecName "config-data-generated". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.738708 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96f4773a-9fa9-41c6-ab4b-54107e66a498-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "96f4773a-9fa9-41c6-ab4b-54107e66a498" (UID: "96f4773a-9fa9-41c6-ab4b-54107e66a498"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.738768 4789 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/96f4773a-9fa9-41c6-ab4b-54107e66a498-config-data-generated\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.739260 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96f4773a-9fa9-41c6-ab4b-54107e66a498-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "96f4773a-9fa9-41c6-ab4b-54107e66a498" (UID: "96f4773a-9fa9-41c6-ab4b-54107e66a498"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.739837 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96f4773a-9fa9-41c6-ab4b-54107e66a498-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "96f4773a-9fa9-41c6-ab4b-54107e66a498" (UID: "96f4773a-9fa9-41c6-ab4b-54107e66a498"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.746961 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96f4773a-9fa9-41c6-ab4b-54107e66a498-kube-api-access-gc5k4" (OuterVolumeSpecName: "kube-api-access-gc5k4") pod "96f4773a-9fa9-41c6-ab4b-54107e66a498" (UID: "96f4773a-9fa9-41c6-ab4b-54107e66a498"). InnerVolumeSpecName "kube-api-access-gc5k4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.772327 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "mysql-db") pod "96f4773a-9fa9-41c6-ab4b-54107e66a498" (UID: "96f4773a-9fa9-41c6-ab4b-54107e66a498"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.786143 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96f4773a-9fa9-41c6-ab4b-54107e66a498-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "96f4773a-9fa9-41c6-ab4b-54107e66a498" (UID: "96f4773a-9fa9-41c6-ab4b-54107e66a498"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.798763 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96f4773a-9fa9-41c6-ab4b-54107e66a498-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96f4773a-9fa9-41c6-ab4b-54107e66a498" (UID: "96f4773a-9fa9-41c6-ab4b-54107e66a498"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.840925 4789 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/96f4773a-9fa9-41c6-ab4b-54107e66a498-config-data-default\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.840954 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96f4773a-9fa9-41c6-ab4b-54107e66a498-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.840963 4789 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/96f4773a-9fa9-41c6-ab4b-54107e66a498-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.840987 4789 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.840995 4789 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/96f4773a-9fa9-41c6-ab4b-54107e66a498-kolla-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.841022 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gc5k4\" (UniqueName: \"kubernetes.io/projected/96f4773a-9fa9-41c6-ab4b-54107e66a498-kube-api-access-gc5k4\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.841035 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f4773a-9fa9-41c6-ab4b-54107e66a498-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.865048 4789 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.948072 4789 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.987883 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.988384 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b579f7f4-db1f-4d76-82fb-ef4cad438842" containerName="ceilometer-central-agent" containerID="cri-o://28a7ed128e7bef7f569955019dd73ac9d95249468906497c95bad0c6363ebdd8" gracePeriod=30 Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.988856 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b579f7f4-db1f-4d76-82fb-ef4cad438842" containerName="proxy-httpd" containerID="cri-o://0947f8cdd1f5dab6746e2ce88b87d9cc21b32de7ac54eec8ed4b2dc8b2ff1f61" gracePeriod=30 Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.988975 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b579f7f4-db1f-4d76-82fb-ef4cad438842" containerName="sg-core" 
containerID="cri-o://c4d593fd14424a40e7eb4b508c719970461ef690e1eb1894e38dd03571b8b07b" gracePeriod=30 Feb 02 21:43:08 crc kubenswrapper[4789]: I0202 21:43:08.989077 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b579f7f4-db1f-4d76-82fb-ef4cad438842" containerName="ceilometer-notification-agent" containerID="cri-o://4442ad2bcd72e1f7d739ef50d0304ab053ba1e52fd3d4c19d121698c07aa9558" gracePeriod=30 Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.014398 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-sbz55"] Feb 02 21:43:09 crc kubenswrapper[4789]: E0202 21:43:09.015110 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c55d3f19-edf8-4cff-ab70-495607e77798" containerName="ovsdbserver-sb" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.015134 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="c55d3f19-edf8-4cff-ab70-495607e77798" containerName="ovsdbserver-sb" Feb 02 21:43:09 crc kubenswrapper[4789]: E0202 21:43:09.015154 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c08255d0-1dd6-4556-8f30-65367b7739f7" containerName="proxy-server" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.015162 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="c08255d0-1dd6-4556-8f30-65367b7739f7" containerName="proxy-server" Feb 02 21:43:09 crc kubenswrapper[4789]: E0202 21:43:09.015177 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c08255d0-1dd6-4556-8f30-65367b7739f7" containerName="proxy-httpd" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.015186 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="c08255d0-1dd6-4556-8f30-65367b7739f7" containerName="proxy-httpd" Feb 02 21:43:09 crc kubenswrapper[4789]: E0202 21:43:09.015200 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cd6e7ff-266d-4288-9df4-dc22dbe5f19d" containerName="init" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.015207 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cd6e7ff-266d-4288-9df4-dc22dbe5f19d" containerName="init" Feb 02 21:43:09 crc kubenswrapper[4789]: E0202 21:43:09.015219 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cd6e7ff-266d-4288-9df4-dc22dbe5f19d" containerName="dnsmasq-dns" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.015227 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cd6e7ff-266d-4288-9df4-dc22dbe5f19d" containerName="dnsmasq-dns" Feb 02 21:43:09 crc kubenswrapper[4789]: E0202 21:43:09.015243 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9d0bd72-572d-4b90-b747-f37b490b3e4a" containerName="nova-cell1-novncproxy-novncproxy" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.015251 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9d0bd72-572d-4b90-b747-f37b490b3e4a" containerName="nova-cell1-novncproxy-novncproxy" Feb 02 21:43:09 crc kubenswrapper[4789]: E0202 21:43:09.015264 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4127fa0-de5d-43ce-b257-46b80eecd670" containerName="openstack-network-exporter" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.015272 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4127fa0-de5d-43ce-b257-46b80eecd670" containerName="openstack-network-exporter" Feb 02 21:43:09 crc kubenswrapper[4789]: E0202 21:43:09.015288 4789 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c55d3f19-edf8-4cff-ab70-495607e77798" containerName="openstack-network-exporter" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.015295 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="c55d3f19-edf8-4cff-ab70-495607e77798" containerName="openstack-network-exporter" Feb 02 21:43:09 crc kubenswrapper[4789]: E0202 21:43:09.015304 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01c5293c-f7b0-4141-99a7-e423de507b87" containerName="ovsdbserver-nb" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.015311 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="01c5293c-f7b0-4141-99a7-e423de507b87" containerName="ovsdbserver-nb" Feb 02 21:43:09 crc kubenswrapper[4789]: E0202 21:43:09.015324 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96f4773a-9fa9-41c6-ab4b-54107e66a498" containerName="mysql-bootstrap" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.015330 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="96f4773a-9fa9-41c6-ab4b-54107e66a498" containerName="mysql-bootstrap" Feb 02 21:43:09 crc kubenswrapper[4789]: E0202 21:43:09.015347 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01c5293c-f7b0-4141-99a7-e423de507b87" containerName="openstack-network-exporter" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.015356 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="01c5293c-f7b0-4141-99a7-e423de507b87" containerName="openstack-network-exporter" Feb 02 21:43:09 crc kubenswrapper[4789]: E0202 21:43:09.015381 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96f4773a-9fa9-41c6-ab4b-54107e66a498" containerName="galera" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.015390 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="96f4773a-9fa9-41c6-ab4b-54107e66a498" containerName="galera" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.016681 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="c08255d0-1dd6-4556-8f30-65367b7739f7" containerName="proxy-httpd" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.016707 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4127fa0-de5d-43ce-b257-46b80eecd670" containerName="openstack-network-exporter" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.016723 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="c55d3f19-edf8-4cff-ab70-495607e77798" containerName="openstack-network-exporter" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.016730 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9d0bd72-572d-4b90-b747-f37b490b3e4a" containerName="nova-cell1-novncproxy-novncproxy" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.016739 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="01c5293c-f7b0-4141-99a7-e423de507b87" containerName="openstack-network-exporter" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.016746 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="96f4773a-9fa9-41c6-ab4b-54107e66a498" containerName="galera" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.016754 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="c08255d0-1dd6-4556-8f30-65367b7739f7" containerName="proxy-server" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.016764 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="01c5293c-f7b0-4141-99a7-e423de507b87" containerName="ovsdbserver-nb" Feb 02 21:43:09 
crc kubenswrapper[4789]: I0202 21:43:09.016771 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="c55d3f19-edf8-4cff-ab70-495607e77798" containerName="ovsdbserver-sb" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.016782 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cd6e7ff-266d-4288-9df4-dc22dbe5f19d" containerName="dnsmasq-dns" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.017357 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-sbz55" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.021651 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-sbz55"] Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.028464 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.044397 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.044613 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="212c4e72-7988-4770-ba07-ae0362baac7e" containerName="kube-state-metrics" containerID="cri-o://a8f8731d69214821017cbca7eb7712c56bfef4b649bbc850b8c1ced28aa04dc5" gracePeriod=30 Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.050666 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64a9a0c6-c663-400b-8c60-43c582b7cac0-operator-scripts\") pod \"root-account-create-update-sbz55\" (UID: \"64a9a0c6-c663-400b-8c60-43c582b7cac0\") " pod="openstack/root-account-create-update-sbz55" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.050918 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7qsd\" (UniqueName: \"kubernetes.io/projected/64a9a0c6-c663-400b-8c60-43c582b7cac0-kube-api-access-g7qsd\") pod \"root-account-create-update-sbz55\" (UID: \"64a9a0c6-c663-400b-8c60-43c582b7cac0\") " pod="openstack/root-account-create-update-sbz55" Feb 02 21:43:09 crc kubenswrapper[4789]: E0202 21:43:09.131506 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b302cbd832ca9db939c2b0bf4835c6ec6fb237f5c200d53981557f9c42498b12" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.143854 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="212c4e72-7988-4770-ba07-ae0362baac7e" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.173:8080/livez\": unexpected EOF" Feb 02 21:43:09 crc kubenswrapper[4789]: E0202 21:43:09.144192 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b302cbd832ca9db939c2b0bf4835c6ec6fb237f5c200d53981557f9c42498b12" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 21:43:09 crc kubenswrapper[4789]: E0202 21:43:09.148289 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = 
command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b302cbd832ca9db939c2b0bf4835c6ec6fb237f5c200d53981557f9c42498b12" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 21:43:09 crc kubenswrapper[4789]: E0202 21:43:09.148355 4789 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="a0ceeffe-1326-4d2d-ab85-dbc02869bee1" containerName="nova-scheduler-scheduler" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.151645 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.151849 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="078a8abb-3926-40cd-9340-0bef088c130f" containerName="memcached" containerID="cri-o://3d1acdaf38b8f90e2888fd9bb9d6b2a8fab388dd54ec79c7218017d80c8b5670" gracePeriod=30 Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.153143 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64a9a0c6-c663-400b-8c60-43c582b7cac0-operator-scripts\") pod \"root-account-create-update-sbz55\" (UID: \"64a9a0c6-c663-400b-8c60-43c582b7cac0\") " pod="openstack/root-account-create-update-sbz55" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.153178 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7qsd\" (UniqueName: \"kubernetes.io/projected/64a9a0c6-c663-400b-8c60-43c582b7cac0-kube-api-access-g7qsd\") pod \"root-account-create-update-sbz55\" (UID: \"64a9a0c6-c663-400b-8c60-43c582b7cac0\") " pod="openstack/root-account-create-update-sbz55" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.153983 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64a9a0c6-c663-400b-8c60-43c582b7cac0-operator-scripts\") pod \"root-account-create-update-sbz55\" (UID: \"64a9a0c6-c663-400b-8c60-43c582b7cac0\") " pod="openstack/root-account-create-update-sbz55" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.169017 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-4dc6-account-create-update-bf2l2"] Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.201624 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-4dc6-account-create-update-bf2l2"] Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.236009 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-4dc6-account-create-update-fjwtt"] Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.237336 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-4dc6-account-create-update-fjwtt" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.240511 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-64bb487f87-44hz8" event={"ID":"c08255d0-1dd6-4556-8f30-65367b7739f7","Type":"ContainerDied","Data":"8851dc22e78005742ed4deebbb727c494051bb43cf575e215b7106870d3c7a31"} Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.240561 4789 scope.go:117] "RemoveContainer" containerID="7d439ddc975b276958da145eaf095401f24feaac00038ca172395cfdde929e83" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.240710 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-64bb487f87-44hz8" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.241956 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7qsd\" (UniqueName: \"kubernetes.io/projected/64a9a0c6-c663-400b-8c60-43c582b7cac0-kube-api-access-g7qsd\") pod \"root-account-create-update-sbz55\" (UID: \"64a9a0c6-c663-400b-8c60-43c582b7cac0\") " pod="openstack/root-account-create-update-sbz55" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.245487 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.245830 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-725d-account-create-update-6nrvh" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.247644 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-4dc6-account-create-update-fjwtt"] Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.247674 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-725d-account-create-update-6nrvh" event={"ID":"c7b70ce2-ef37-4547-9c3c-7be3ec4b02c8","Type":"ContainerDied","Data":"c2983a3db760b9aa4b6d60ff43228b72d9c6eca9176950d85a8954d033e6f370"} Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.250713 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-bvbsm"] Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.255500 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px4jz\" (UniqueName: \"kubernetes.io/projected/e3b2df70-e493-42b7-9009-64c6bdaf4dad-kube-api-access-px4jz\") pod \"keystone-4dc6-account-create-update-fjwtt\" (UID: \"e3b2df70-e493-42b7-9009-64c6bdaf4dad\") " pod="openstack/keystone-4dc6-account-create-update-fjwtt" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.255608 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3b2df70-e493-42b7-9009-64c6bdaf4dad-operator-scripts\") pod \"keystone-4dc6-account-create-update-fjwtt\" (UID: \"e3b2df70-e493-42b7-9009-64c6bdaf4dad\") " pod="openstack/keystone-4dc6-account-create-update-fjwtt" Feb 02 21:43:09 crc kubenswrapper[4789]: E0202 21:43:09.255949 4789 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Feb 02 21:43:09 crc kubenswrapper[4789]: E0202 21:43:09.255987 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b8917d54-451e-4a56-9e8a-142bb5db17e1-config-data podName:b8917d54-451e-4a56-9e8a-142bb5db17e1 nodeName:}" failed. 
No retries permitted until 2026-02-02 21:43:13.255973712 +0000 UTC m=+1413.550998731 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b8917d54-451e-4a56-9e8a-142bb5db17e1-config-data") pod "rabbitmq-cell1-server-0" (UID: "b8917d54-451e-4a56-9e8a-142bb5db17e1") : configmap "rabbitmq-cell1-config-data" not found Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.269691 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-rbrvz"] Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.279997 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-bvbsm"] Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.286465 4789 generic.go:334] "Generic (PLEG): container finished" podID="212c4e72-7988-4770-ba07-ae0362baac7e" containerID="a8f8731d69214821017cbca7eb7712c56bfef4b649bbc850b8c1ced28aa04dc5" exitCode=2 Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.286542 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"212c4e72-7988-4770-ba07-ae0362baac7e","Type":"ContainerDied","Data":"a8f8731d69214821017cbca7eb7712c56bfef4b649bbc850b8c1ced28aa04dc5"} Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.293715 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3f0a-account-create-update-qc5vb" event={"ID":"ef1040d3-c638-4098-a2ef-ce507371853e","Type":"ContainerStarted","Data":"a802ed4c160caa66ef7c5a33c2902a5711959773ae1b9603a0155bcdd7bcca98"} Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.304634 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c430-account-create-update-mtkw4" event={"ID":"151bf5b0-b174-42a9-8a0a-f650d74ec2a3","Type":"ContainerDied","Data":"46d738ccb8938e81ad7d864bee88b5b0a7da2d123ab63fb5d6eecef4a915a72b"} Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.304729 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-c430-account-create-update-mtkw4" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.311557 4789 scope.go:117] "RemoveContainer" containerID="711efcab439843aaeb94f91469a73186433bd21cfd9a9a56c0f9006d0ae1c9be" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.315200 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f9d0bd72-572d-4b90-b747-f37b490b3e4a","Type":"ContainerDied","Data":"d22e9493ffa3fed25bfc87497d839cb4f9f443c3335599f33d8e713eb4b9ecaf"} Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.315297 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.340310 4789 generic.go:334] "Generic (PLEG): container finished" podID="96f4773a-9fa9-41c6-ab4b-54107e66a498" containerID="82ad90219c4a326128d9fb471414b4e0b74ee26ef1c8f08b2e8b89d720565f03" exitCode=0 Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.340372 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"96f4773a-9fa9-41c6-ab4b-54107e66a498","Type":"ContainerDied","Data":"82ad90219c4a326128d9fb471414b4e0b74ee26ef1c8f08b2e8b89d720565f03"} Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.340395 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"96f4773a-9fa9-41c6-ab4b-54107e66a498","Type":"ContainerDied","Data":"ee8a86a649f3594509c94cf0c417ef1bf0bbd58671003431b12c9dd6d4093172"} Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.340446 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.342380 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-rbrvz"] Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.344541 4789 generic.go:334] "Generic (PLEG): container finished" podID="b579f7f4-db1f-4d76-82fb-ef4cad438842" containerID="c4d593fd14424a40e7eb4b508c719970461ef690e1eb1894e38dd03571b8b07b" exitCode=2 Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.344780 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b579f7f4-db1f-4d76-82fb-ef4cad438842","Type":"ContainerDied","Data":"c4d593fd14424a40e7eb4b508c719970461ef690e1eb1894e38dd03571b8b07b"} Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.344948 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.356250 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-595cf58668-hfkcq"] Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.356475 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-595cf58668-hfkcq" podUID="0f86f59c-9db0-4580-a8f3-2d3fe558c905" containerName="keystone-api" containerID="cri-o://bbeb0176ca8c9142d15e473d306a3fc80a2f4568a8a86ce41b47afe19830a87f" gracePeriod=30 Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.357619 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px4jz\" (UniqueName: \"kubernetes.io/projected/e3b2df70-e493-42b7-9009-64c6bdaf4dad-kube-api-access-px4jz\") pod \"keystone-4dc6-account-create-update-fjwtt\" (UID: \"e3b2df70-e493-42b7-9009-64c6bdaf4dad\") " pod="openstack/keystone-4dc6-account-create-update-fjwtt" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.357697 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3b2df70-e493-42b7-9009-64c6bdaf4dad-operator-scripts\") pod \"keystone-4dc6-account-create-update-fjwtt\" (UID: \"e3b2df70-e493-42b7-9009-64c6bdaf4dad\") " pod="openstack/keystone-4dc6-account-create-update-fjwtt" Feb 02 21:43:09 crc kubenswrapper[4789]: E0202 21:43:09.357830 4789 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Feb 02 21:43:09 crc kubenswrapper[4789]: E0202 21:43:09.357865 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3b2df70-e493-42b7-9009-64c6bdaf4dad-operator-scripts podName:e3b2df70-e493-42b7-9009-64c6bdaf4dad nodeName:}" failed. No retries permitted until 2026-02-02 21:43:09.857854477 +0000 UTC m=+1410.152879486 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/e3b2df70-e493-42b7-9009-64c6bdaf4dad-operator-scripts") pod "keystone-4dc6-account-create-update-fjwtt" (UID: "e3b2df70-e493-42b7-9009-64c6bdaf4dad") : configmap "openstack-scripts" not found Feb 02 21:43:09 crc kubenswrapper[4789]: E0202 21:43:09.372142 4789 projected.go:194] Error preparing data for projected volume kube-api-access-px4jz for pod openstack/keystone-4dc6-account-create-update-fjwtt: failed to fetch token: serviceaccounts "galera-openstack" not found Feb 02 21:43:09 crc kubenswrapper[4789]: E0202 21:43:09.372210 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e3b2df70-e493-42b7-9009-64c6bdaf4dad-kube-api-access-px4jz podName:e3b2df70-e493-42b7-9009-64c6bdaf4dad nodeName:}" failed. No retries permitted until 2026-02-02 21:43:09.872192203 +0000 UTC m=+1410.167217222 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-px4jz" (UniqueName: "kubernetes.io/projected/e3b2df70-e493-42b7-9009-64c6bdaf4dad-kube-api-access-px4jz") pod "keystone-4dc6-account-create-update-fjwtt" (UID: "e3b2df70-e493-42b7-9009-64c6bdaf4dad") : failed to fetch token: serviceaccounts "galera-openstack" not found Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.390507 4789 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/root-account-create-update-sbz55" secret="" err="secret \"galera-openstack-dockercfg-8n86d\" not found" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.390548 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-sbz55" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.436798 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-wrn6h"] Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.448637 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.482183 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-4dc6-account-create-update-fjwtt"] Feb 02 21:43:09 crc kubenswrapper[4789]: E0202 21:43:09.482819 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-px4jz operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone-4dc6-account-create-update-fjwtt" podUID="e3b2df70-e493-42b7-9009-64c6bdaf4dad" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.493318 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-wrn6h"] Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.507002 4789 scope.go:117] "RemoveContainer" containerID="3526b18b1cdfe6a5a1f276e6a5cb128388504d6bdbc6ed184e29a01812ab1266" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.533558 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-64bb487f87-44hz8"] Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.552121 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-64bb487f87-44hz8"] Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.588613 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-sbz55"] Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.631592 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-725d-account-create-update-6nrvh"] Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.642465 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-725d-account-create-update-6nrvh"] Feb 02 21:43:09 crc kubenswrapper[4789]: E0202 21:43:09.664790 4789 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Feb 02 21:43:09 crc kubenswrapper[4789]: E0202 21:43:09.664857 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/306f2aaf-92ed-4c14-92f4-a970a8240771-operator-scripts podName:306f2aaf-92ed-4c14-92f4-a970a8240771 nodeName:}" failed. No retries permitted until 2026-02-02 21:43:11.66484055 +0000 UTC m=+1411.959865569 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/306f2aaf-92ed-4c14-92f4-a970a8240771-operator-scripts") pod "root-account-create-update-h7zb6" (UID: "306f2aaf-92ed-4c14-92f4-a970a8240771") : configmap "openstack-cell1-scripts" not found Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.681200 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-c430-account-create-update-mtkw4"] Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.683201 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-c430-account-create-update-mtkw4"] Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.693217 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.703632 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.707614 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.730104 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.738189 4789 scope.go:117] "RemoveContainer" containerID="82ad90219c4a326128d9fb471414b4e0b74ee26ef1c8f08b2e8b89d720565f03" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.739848 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.740055 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.806204 4789 scope.go:117] "RemoveContainer" containerID="a08b45f3dfbed710991b90377397df02266bd543ec0be74a9a29feca9df69385" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.812389 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="a77ac0de-f396-45e6-a92c-07fbddc4ec60" containerName="galera" containerID="cri-o://56c1fc152ae9c83eb013d9170e2ee84fae3556ed6cca1265e4e91d1f2bb54861" gracePeriod=30 Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.852853 4789 scope.go:117] "RemoveContainer" containerID="82ad90219c4a326128d9fb471414b4e0b74ee26ef1c8f08b2e8b89d720565f03" Feb 02 21:43:09 crc kubenswrapper[4789]: E0202 21:43:09.855553 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82ad90219c4a326128d9fb471414b4e0b74ee26ef1c8f08b2e8b89d720565f03\": container with ID starting with 82ad90219c4a326128d9fb471414b4e0b74ee26ef1c8f08b2e8b89d720565f03 not found: ID does not exist" containerID="82ad90219c4a326128d9fb471414b4e0b74ee26ef1c8f08b2e8b89d720565f03" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.855607 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82ad90219c4a326128d9fb471414b4e0b74ee26ef1c8f08b2e8b89d720565f03"} err="failed to get container status \"82ad90219c4a326128d9fb471414b4e0b74ee26ef1c8f08b2e8b89d720565f03\": rpc error: code = NotFound desc = could not find container \"82ad90219c4a326128d9fb471414b4e0b74ee26ef1c8f08b2e8b89d720565f03\": container with ID starting with 82ad90219c4a326128d9fb471414b4e0b74ee26ef1c8f08b2e8b89d720565f03 not found: ID does not exist" Feb 02 21:43:09 crc 
kubenswrapper[4789]: I0202 21:43:09.855626 4789 scope.go:117] "RemoveContainer" containerID="a08b45f3dfbed710991b90377397df02266bd543ec0be74a9a29feca9df69385" Feb 02 21:43:09 crc kubenswrapper[4789]: E0202 21:43:09.855902 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a08b45f3dfbed710991b90377397df02266bd543ec0be74a9a29feca9df69385\": container with ID starting with a08b45f3dfbed710991b90377397df02266bd543ec0be74a9a29feca9df69385 not found: ID does not exist" containerID="a08b45f3dfbed710991b90377397df02266bd543ec0be74a9a29feca9df69385" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.855920 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a08b45f3dfbed710991b90377397df02266bd543ec0be74a9a29feca9df69385"} err="failed to get container status \"a08b45f3dfbed710991b90377397df02266bd543ec0be74a9a29feca9df69385\": rpc error: code = NotFound desc = could not find container \"a08b45f3dfbed710991b90377397df02266bd543ec0be74a9a29feca9df69385\": container with ID starting with a08b45f3dfbed710991b90377397df02266bd543ec0be74a9a29feca9df69385 not found: ID does not exist" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.877058 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px4jz\" (UniqueName: \"kubernetes.io/projected/e3b2df70-e493-42b7-9009-64c6bdaf4dad-kube-api-access-px4jz\") pod \"keystone-4dc6-account-create-update-fjwtt\" (UID: \"e3b2df70-e493-42b7-9009-64c6bdaf4dad\") " pod="openstack/keystone-4dc6-account-create-update-fjwtt" Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.877154 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3b2df70-e493-42b7-9009-64c6bdaf4dad-operator-scripts\") pod \"keystone-4dc6-account-create-update-fjwtt\" (UID: \"e3b2df70-e493-42b7-9009-64c6bdaf4dad\") " pod="openstack/keystone-4dc6-account-create-update-fjwtt" Feb 02 21:43:09 crc kubenswrapper[4789]: E0202 21:43:09.877287 4789 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Feb 02 21:43:09 crc kubenswrapper[4789]: E0202 21:43:09.877337 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3b2df70-e493-42b7-9009-64c6bdaf4dad-operator-scripts podName:e3b2df70-e493-42b7-9009-64c6bdaf4dad nodeName:}" failed. No retries permitted until 2026-02-02 21:43:10.877323928 +0000 UTC m=+1411.172348947 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/e3b2df70-e493-42b7-9009-64c6bdaf4dad-operator-scripts") pod "keystone-4dc6-account-create-update-fjwtt" (UID: "e3b2df70-e493-42b7-9009-64c6bdaf4dad") : configmap "openstack-scripts" not found Feb 02 21:43:09 crc kubenswrapper[4789]: E0202 21:43:09.881363 4789 projected.go:194] Error preparing data for projected volume kube-api-access-px4jz for pod openstack/keystone-4dc6-account-create-update-fjwtt: failed to fetch token: serviceaccounts "galera-openstack" not found Feb 02 21:43:09 crc kubenswrapper[4789]: E0202 21:43:09.881443 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e3b2df70-e493-42b7-9009-64c6bdaf4dad-kube-api-access-px4jz podName:e3b2df70-e493-42b7-9009-64c6bdaf4dad nodeName:}" failed. 
No retries permitted until 2026-02-02 21:43:10.881423504 +0000 UTC m=+1411.176448523 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-px4jz" (UniqueName: "kubernetes.io/projected/e3b2df70-e493-42b7-9009-64c6bdaf4dad-kube-api-access-px4jz") pod "keystone-4dc6-account-create-update-fjwtt" (UID: "e3b2df70-e493-42b7-9009-64c6bdaf4dad") : failed to fetch token: serviceaccounts "galera-openstack" not found Feb 02 21:43:09 crc kubenswrapper[4789]: I0202 21:43:09.907611 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.085247 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/212c4e72-7988-4770-ba07-ae0362baac7e-kube-state-metrics-tls-config\") pod \"212c4e72-7988-4770-ba07-ae0362baac7e\" (UID: \"212c4e72-7988-4770-ba07-ae0362baac7e\") " Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.085286 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/212c4e72-7988-4770-ba07-ae0362baac7e-kube-state-metrics-tls-certs\") pod \"212c4e72-7988-4770-ba07-ae0362baac7e\" (UID: \"212c4e72-7988-4770-ba07-ae0362baac7e\") " Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.085316 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sb78\" (UniqueName: \"kubernetes.io/projected/212c4e72-7988-4770-ba07-ae0362baac7e-kube-api-access-5sb78\") pod \"212c4e72-7988-4770-ba07-ae0362baac7e\" (UID: \"212c4e72-7988-4770-ba07-ae0362baac7e\") " Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.085353 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/212c4e72-7988-4770-ba07-ae0362baac7e-combined-ca-bundle\") pod \"212c4e72-7988-4770-ba07-ae0362baac7e\" (UID: \"212c4e72-7988-4770-ba07-ae0362baac7e\") " Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.100890 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/212c4e72-7988-4770-ba07-ae0362baac7e-kube-api-access-5sb78" (OuterVolumeSpecName: "kube-api-access-5sb78") pod "212c4e72-7988-4770-ba07-ae0362baac7e" (UID: "212c4e72-7988-4770-ba07-ae0362baac7e"). InnerVolumeSpecName "kube-api-access-5sb78". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.115652 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="0ba13473-b423-43a0-ab15-9d6be616cc7b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": read tcp 10.217.0.2:39646->10.217.0.205:8775: read: connection reset by peer" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.115957 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="0ba13473-b423-43a0-ab15-9d6be616cc7b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": read tcp 10.217.0.2:39636->10.217.0.205:8775: read: connection reset by peer" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.127540 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/212c4e72-7988-4770-ba07-ae0362baac7e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "212c4e72-7988-4770-ba07-ae0362baac7e" (UID: "212c4e72-7988-4770-ba07-ae0362baac7e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.146469 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3f0a-account-create-update-qc5vb" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.164852 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/212c4e72-7988-4770-ba07-ae0362baac7e-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "212c4e72-7988-4770-ba07-ae0362baac7e" (UID: "212c4e72-7988-4770-ba07-ae0362baac7e"). InnerVolumeSpecName "kube-state-metrics-tls-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.188091 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xd4gr\" (UniqueName: \"kubernetes.io/projected/ef1040d3-c638-4098-a2ef-ce507371853e-kube-api-access-xd4gr\") pod \"ef1040d3-c638-4098-a2ef-ce507371853e\" (UID: \"ef1040d3-c638-4098-a2ef-ce507371853e\") " Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.188138 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef1040d3-c638-4098-a2ef-ce507371853e-operator-scripts\") pod \"ef1040d3-c638-4098-a2ef-ce507371853e\" (UID: \"ef1040d3-c638-4098-a2ef-ce507371853e\") " Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.188502 4789 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/212c4e72-7988-4770-ba07-ae0362baac7e-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.188515 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5sb78\" (UniqueName: \"kubernetes.io/projected/212c4e72-7988-4770-ba07-ae0362baac7e-kube-api-access-5sb78\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.188524 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/212c4e72-7988-4770-ba07-ae0362baac7e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:10 crc kubenswrapper[4789]: E0202 21:43:10.188621 4789 secret.go:188] Couldn't get secret openstack/nova-cell1-conductor-config-data: secret "nova-cell1-conductor-config-data" not found Feb 02 21:43:10 crc kubenswrapper[4789]: E0202 21:43:10.188671 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/399d9417-2065-4e92-89c5-a04dbeaf2cca-config-data podName:399d9417-2065-4e92-89c5-a04dbeaf2cca nodeName:}" failed. No retries permitted until 2026-02-02 21:43:14.188654994 +0000 UTC m=+1414.483680013 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/399d9417-2065-4e92-89c5-a04dbeaf2cca-config-data") pod "nova-cell1-conductor-0" (UID: "399d9417-2065-4e92-89c5-a04dbeaf2cca") : secret "nova-cell1-conductor-config-data" not found Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.189802 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef1040d3-c638-4098-a2ef-ce507371853e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ef1040d3-c638-4098-a2ef-ce507371853e" (UID: "ef1040d3-c638-4098-a2ef-ce507371853e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.202842 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef1040d3-c638-4098-a2ef-ce507371853e-kube-api-access-xd4gr" (OuterVolumeSpecName: "kube-api-access-xd4gr") pod "ef1040d3-c638-4098-a2ef-ce507371853e" (UID: "ef1040d3-c638-4098-a2ef-ce507371853e"). InnerVolumeSpecName "kube-api-access-xd4gr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.227725 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/212c4e72-7988-4770-ba07-ae0362baac7e-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "212c4e72-7988-4770-ba07-ae0362baac7e" (UID: "212c4e72-7988-4770-ba07-ae0362baac7e"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.270833 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-h7zb6" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.276333 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-68d9498c68-84jcz" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.290447 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xd4gr\" (UniqueName: \"kubernetes.io/projected/ef1040d3-c638-4098-a2ef-ce507371853e-kube-api-access-xd4gr\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.290474 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef1040d3-c638-4098-a2ef-ce507371853e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.290483 4789 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/212c4e72-7988-4770-ba07-ae0362baac7e-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.345355 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-ac1d-account-create-update-rvstr" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.381700 4789 generic.go:334] "Generic (PLEG): container finished" podID="0ba13473-b423-43a0-ab15-9d6be616cc7b" containerID="e343b555d9621789a633967b6cd533bf45c88272e650aba944e657e5737ee258" exitCode=0 Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.381799 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0ba13473-b423-43a0-ab15-9d6be616cc7b","Type":"ContainerDied","Data":"e343b555d9621789a633967b6cd533bf45c88272e650aba944e657e5737ee258"} Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.392024 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/306f2aaf-92ed-4c14-92f4-a970a8240771-operator-scripts\") pod \"306f2aaf-92ed-4c14-92f4-a970a8240771\" (UID: \"306f2aaf-92ed-4c14-92f4-a970a8240771\") " Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.392091 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/349cede5-331c-4454-8c9c-fda8fe886f07-combined-ca-bundle\") pod \"349cede5-331c-4454-8c9c-fda8fe886f07\" (UID: \"349cede5-331c-4454-8c9c-fda8fe886f07\") " Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.392145 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5d8r\" (UniqueName: \"kubernetes.io/projected/349cede5-331c-4454-8c9c-fda8fe886f07-kube-api-access-c5d8r\") pod \"349cede5-331c-4454-8c9c-fda8fe886f07\" (UID: \"349cede5-331c-4454-8c9c-fda8fe886f07\") " Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.392174 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/349cede5-331c-4454-8c9c-fda8fe886f07-public-tls-certs\") pod \"349cede5-331c-4454-8c9c-fda8fe886f07\" (UID: \"349cede5-331c-4454-8c9c-fda8fe886f07\") " Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.392215 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/349cede5-331c-4454-8c9c-fda8fe886f07-config-data\") pod \"349cede5-331c-4454-8c9c-fda8fe886f07\" (UID: \"349cede5-331c-4454-8c9c-fda8fe886f07\") " Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.392234 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/349cede5-331c-4454-8c9c-fda8fe886f07-internal-tls-certs\") pod \"349cede5-331c-4454-8c9c-fda8fe886f07\" (UID: \"349cede5-331c-4454-8c9c-fda8fe886f07\") " Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.392292 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8x2b7\" (UniqueName: \"kubernetes.io/projected/306f2aaf-92ed-4c14-92f4-a970a8240771-kube-api-access-8x2b7\") pod \"306f2aaf-92ed-4c14-92f4-a970a8240771\" (UID: \"306f2aaf-92ed-4c14-92f4-a970a8240771\") " Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.392325 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/349cede5-331c-4454-8c9c-fda8fe886f07-scripts\") pod \"349cede5-331c-4454-8c9c-fda8fe886f07\" (UID: \"349cede5-331c-4454-8c9c-fda8fe886f07\") " Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.392409 4789 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/349cede5-331c-4454-8c9c-fda8fe886f07-logs\") pod \"349cede5-331c-4454-8c9c-fda8fe886f07\" (UID: \"349cede5-331c-4454-8c9c-fda8fe886f07\") " Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.393106 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/349cede5-331c-4454-8c9c-fda8fe886f07-logs" (OuterVolumeSpecName: "logs") pod "349cede5-331c-4454-8c9c-fda8fe886f07" (UID: "349cede5-331c-4454-8c9c-fda8fe886f07"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.393858 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/306f2aaf-92ed-4c14-92f4-a970a8240771-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "306f2aaf-92ed-4c14-92f4-a970a8240771" (UID: "306f2aaf-92ed-4c14-92f4-a970a8240771"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.405107 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/349cede5-331c-4454-8c9c-fda8fe886f07-kube-api-access-c5d8r" (OuterVolumeSpecName: "kube-api-access-c5d8r") pod "349cede5-331c-4454-8c9c-fda8fe886f07" (UID: "349cede5-331c-4454-8c9c-fda8fe886f07"). InnerVolumeSpecName "kube-api-access-c5d8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.406954 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/306f2aaf-92ed-4c14-92f4-a970a8240771-kube-api-access-8x2b7" (OuterVolumeSpecName: "kube-api-access-8x2b7") pod "306f2aaf-92ed-4c14-92f4-a970a8240771" (UID: "306f2aaf-92ed-4c14-92f4-a970a8240771"). InnerVolumeSpecName "kube-api-access-8x2b7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.419070 4789 generic.go:334] "Generic (PLEG): container finished" podID="7d53e4c0-add2-4cfd-bbea-e0a1d3196091" containerID="25969b57d6ee15da22b2fd6fac46c116130225ea93ef2013003c96e7fe1d6cca" exitCode=0 Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.419132 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b8b9b54f6-jfnqs" event={"ID":"7d53e4c0-add2-4cfd-bbea-e0a1d3196091","Type":"ContainerDied","Data":"25969b57d6ee15da22b2fd6fac46c116130225ea93ef2013003c96e7fe1d6cca"} Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.436269 4789 generic.go:334] "Generic (PLEG): container finished" podID="1ae097e7-380b-4044-8598-abc3e1059356" containerID="0a2cae00db145b6560fcc0b648c1c292b3eb7df490809622a8a50541cde04a0c" exitCode=0 Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.440962 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/349cede5-331c-4454-8c9c-fda8fe886f07-scripts" (OuterVolumeSpecName: "scripts") pod "349cede5-331c-4454-8c9c-fda8fe886f07" (UID: "349cede5-331c-4454-8c9c-fda8fe886f07"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.441914 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.443669 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3f0a-account-create-update-qc5vb" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.446647 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ac1d-account-create-update-rvstr" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.486617 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.492209 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01c5293c-f7b0-4141-99a7-e423de507b87" path="/var/lib/kubelet/pods/01c5293c-f7b0-4141-99a7-e423de507b87/volumes" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.495000 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="151bf5b0-b174-42a9-8a0a-f650d74ec2a3" path="/var/lib/kubelet/pods/151bf5b0-b174-42a9-8a0a-f650d74ec2a3/volumes" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.498352 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27d741b5-10a4-4acb-b4b8-cf06f35a66f2" path="/var/lib/kubelet/pods/27d741b5-10a4-4acb-b4b8-cf06f35a66f2/volumes" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.503799 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-httpd-run\") pod \"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff\" (UID: \"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff\") " Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.503828 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-internal-tls-certs\") pod \"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff\" (UID: \"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff\") " Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.503858 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff\" (UID: \"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff\") " Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.503896 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-config-data\") pod \"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff\" (UID: \"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff\") " Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.503919 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt6bh\" (UniqueName: \"kubernetes.io/projected/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-kube-api-access-jt6bh\") pod \"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff\" (UID: \"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff\") " Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.503939 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-combined-ca-bundle\") pod \"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff\" (UID: \"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff\") " Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 
21:43:10.503962 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x454g\" (UniqueName: \"kubernetes.io/projected/1821366b-85cb-419f-9c57-9014300724be-kube-api-access-x454g\") pod \"1821366b-85cb-419f-9c57-9014300724be\" (UID: \"1821366b-85cb-419f-9c57-9014300724be\") " Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.503989 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-scripts\") pod \"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff\" (UID: \"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff\") " Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.504018 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1821366b-85cb-419f-9c57-9014300724be-operator-scripts\") pod \"1821366b-85cb-419f-9c57-9014300724be\" (UID: \"1821366b-85cb-419f-9c57-9014300724be\") " Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.504045 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-logs\") pod \"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff\" (UID: \"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff\") " Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.504378 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/349cede5-331c-4454-8c9c-fda8fe886f07-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.504390 4789 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/349cede5-331c-4454-8c9c-fda8fe886f07-logs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.504399 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/306f2aaf-92ed-4c14-92f4-a970a8240771-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.504409 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5d8r\" (UniqueName: \"kubernetes.io/projected/349cede5-331c-4454-8c9c-fda8fe886f07-kube-api-access-c5d8r\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.504418 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8x2b7\" (UniqueName: \"kubernetes.io/projected/306f2aaf-92ed-4c14-92f4-a970a8240771-kube-api-access-8x2b7\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.505339 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "24fb18f4-7a0f-4ae5-9104-e7dc45a479ff" (UID: "24fb18f4-7a0f-4ae5-9104-e7dc45a479ff"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.506963 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="509b2067-171f-4e99-86fa-12cd19ff40ee" path="/var/lib/kubelet/pods/509b2067-171f-4e99-86fa-12cd19ff40ee/volumes" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.512350 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="743bffd7-f479-4b98-8cd6-9714dfcfeab1" path="/var/lib/kubelet/pods/743bffd7-f479-4b98-8cd6-9714dfcfeab1/volumes" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.513402 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96f4773a-9fa9-41c6-ab4b-54107e66a498" path="/var/lib/kubelet/pods/96f4773a-9fa9-41c6-ab4b-54107e66a498/volumes" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.514179 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c08255d0-1dd6-4556-8f30-65367b7739f7" path="/var/lib/kubelet/pods/c08255d0-1dd6-4556-8f30-65367b7739f7/volumes" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.515333 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7b70ce2-ef37-4547-9c3c-7be3ec4b02c8" path="/var/lib/kubelet/pods/c7b70ce2-ef37-4547-9c3c-7be3ec4b02c8/volumes" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.515796 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddd49c35-2c4d-4fad-a207-ca8d0be92036" path="/var/lib/kubelet/pods/ddd49c35-2c4d-4fad-a207-ca8d0be92036/volumes" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.516274 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9d0bd72-572d-4b90-b747-f37b490b3e4a" path="/var/lib/kubelet/pods/f9d0bd72-572d-4b90-b747-f37b490b3e4a/volumes" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.516505 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-logs" (OuterVolumeSpecName: "logs") pod "24fb18f4-7a0f-4ae5-9104-e7dc45a479ff" (UID: "24fb18f4-7a0f-4ae5-9104-e7dc45a479ff"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.519876 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.520056 4789 generic.go:334] "Generic (PLEG): container finished" podID="24fb18f4-7a0f-4ae5-9104-e7dc45a479ff" containerID="a473f8d31f1d21a7c2b382a1e23b8b88890e3aa22648e9737f24020491949fe0" exitCode=0 Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.520126 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.521687 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1821366b-85cb-419f-9c57-9014300724be-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1821366b-85cb-419f-9c57-9014300724be" (UID: "1821366b-85cb-419f-9c57-9014300724be"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.529820 4789 generic.go:334] "Generic (PLEG): container finished" podID="3bb81567-8536-4275-ab0e-a003ef904230" containerID="d9549a00930229585c1a660c46c1ee179871330062dec64c5947fd34ad7860f5" exitCode=0 Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.537025 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-scripts" (OuterVolumeSpecName: "scripts") pod "24fb18f4-7a0f-4ae5-9104-e7dc45a479ff" (UID: "24fb18f4-7a0f-4ae5-9104-e7dc45a479ff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.540562 4789 generic.go:334] "Generic (PLEG): container finished" podID="7acbb536-0a08-4132-a84a-848735b0e7f4" containerID="25ed4343b75caa0616ab66903bb372442dbf22b4a29f2c30b9fcf20df97021f7" exitCode=0 Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.544794 4789 generic.go:334] "Generic (PLEG): container finished" podID="078a8abb-3926-40cd-9340-0bef088c130f" containerID="3d1acdaf38b8f90e2888fd9bb9d6b2a8fab388dd54ec79c7218017d80c8b5670" exitCode=0 Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.544874 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.564319 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/349cede5-331c-4454-8c9c-fda8fe886f07-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "349cede5-331c-4454-8c9c-fda8fe886f07" (UID: "349cede5-331c-4454-8c9c-fda8fe886f07"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.578269 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-kube-api-access-jt6bh" (OuterVolumeSpecName: "kube-api-access-jt6bh") pod "24fb18f4-7a0f-4ae5-9104-e7dc45a479ff" (UID: "24fb18f4-7a0f-4ae5-9104-e7dc45a479ff"). InnerVolumeSpecName "kube-api-access-jt6bh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.580123 4789 generic.go:334] "Generic (PLEG): container finished" podID="349cede5-331c-4454-8c9c-fda8fe886f07" containerID="40a59db16d790bc9ade9d424000123015ece03fbc62bfe3a010f70a44b900736" exitCode=0 Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.580216 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-68d9498c68-84jcz" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.581415 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24fb18f4-7a0f-4ae5-9104-e7dc45a479ff" (UID: "24fb18f4-7a0f-4ae5-9104-e7dc45a479ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.582227 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-h7zb6" Feb 02 21:43:10 crc kubenswrapper[4789]: E0202 21:43:10.584351 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e98adb233ea0f16d2b2f46eddae689b1bb397a9ba532ccb91e5ece02b9397f95" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 02 21:43:10 crc kubenswrapper[4789]: E0202 21:43:10.590853 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e98adb233ea0f16d2b2f46eddae689b1bb397a9ba532ccb91e5ece02b9397f95" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.590991 4789 generic.go:334] "Generic (PLEG): container finished" podID="b579f7f4-db1f-4d76-82fb-ef4cad438842" containerID="0947f8cdd1f5dab6746e2ce88b87d9cc21b32de7ac54eec8ed4b2dc8b2ff1f61" exitCode=0 Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.591004 4789 generic.go:334] "Generic (PLEG): container finished" podID="b579f7f4-db1f-4d76-82fb-ef4cad438842" containerID="28a7ed128e7bef7f569955019dd73ac9d95249468906497c95bad0c6363ebdd8" exitCode=0 Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.591060 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4dc6-account-create-update-fjwtt" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.604714 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/349cede5-331c-4454-8c9c-fda8fe886f07-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "349cede5-331c-4454-8c9c-fda8fe886f07" (UID: "349cede5-331c-4454-8c9c-fda8fe886f07"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.604727 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/078a8abb-3926-40cd-9340-0bef088c130f-kolla-config\") pod \"078a8abb-3926-40cd-9340-0bef088c130f\" (UID: \"078a8abb-3926-40cd-9340-0bef088c130f\") " Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.604871 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/078a8abb-3926-40cd-9340-0bef088c130f-config-data\") pod \"078a8abb-3926-40cd-9340-0bef088c130f\" (UID: \"078a8abb-3926-40cd-9340-0bef088c130f\") " Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.604931 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88qk8\" (UniqueName: \"kubernetes.io/projected/078a8abb-3926-40cd-9340-0bef088c130f-kube-api-access-88qk8\") pod \"078a8abb-3926-40cd-9340-0bef088c130f\" (UID: \"078a8abb-3926-40cd-9340-0bef088c130f\") " Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.605012 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/349cede5-331c-4454-8c9c-fda8fe886f07-public-tls-certs\") pod \"349cede5-331c-4454-8c9c-fda8fe886f07\" (UID: \"349cede5-331c-4454-8c9c-fda8fe886f07\") " Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.605051 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/078a8abb-3926-40cd-9340-0bef088c130f-memcached-tls-certs\") pod \"078a8abb-3926-40cd-9340-0bef088c130f\" (UID: \"078a8abb-3926-40cd-9340-0bef088c130f\") " Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.605107 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/078a8abb-3926-40cd-9340-0bef088c130f-combined-ca-bundle\") pod \"078a8abb-3926-40cd-9340-0bef088c130f\" (UID: \"078a8abb-3926-40cd-9340-0bef088c130f\") " Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.605286 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/078a8abb-3926-40cd-9340-0bef088c130f-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "078a8abb-3926-40cd-9340-0bef088c130f" (UID: "078a8abb-3926-40cd-9340-0bef088c130f"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.605769 4789 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.605783 4789 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/078a8abb-3926-40cd-9340-0bef088c130f-kolla-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.605794 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jt6bh\" (UniqueName: \"kubernetes.io/projected/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-kube-api-access-jt6bh\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.605803 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.605812 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.605820 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1821366b-85cb-419f-9c57-9014300724be-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.605828 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/349cede5-331c-4454-8c9c-fda8fe886f07-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.605836 4789 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-logs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:10 crc kubenswrapper[4789]: W0202 21:43:10.606744 4789 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/349cede5-331c-4454-8c9c-fda8fe886f07/volumes/kubernetes.io~secret/public-tls-certs Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.606774 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/349cede5-331c-4454-8c9c-fda8fe886f07-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "349cede5-331c-4454-8c9c-fda8fe886f07" (UID: "349cede5-331c-4454-8c9c-fda8fe886f07"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.607029 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/078a8abb-3926-40cd-9340-0bef088c130f-config-data" (OuterVolumeSpecName: "config-data") pod "078a8abb-3926-40cd-9340-0bef088c130f" (UID: "078a8abb-3926-40cd-9340-0bef088c130f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.607108 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "24fb18f4-7a0f-4ae5-9104-e7dc45a479ff" (UID: "24fb18f4-7a0f-4ae5-9104-e7dc45a479ff"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.608607 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/078a8abb-3926-40cd-9340-0bef088c130f-kube-api-access-88qk8" (OuterVolumeSpecName: "kube-api-access-88qk8") pod "078a8abb-3926-40cd-9340-0bef088c130f" (UID: "078a8abb-3926-40cd-9340-0bef088c130f"). InnerVolumeSpecName "kube-api-access-88qk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.608787 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1821366b-85cb-419f-9c57-9014300724be-kube-api-access-x454g" (OuterVolumeSpecName: "kube-api-access-x454g") pod "1821366b-85cb-419f-9c57-9014300724be" (UID: "1821366b-85cb-419f-9c57-9014300724be"). InnerVolumeSpecName "kube-api-access-x454g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:10 crc kubenswrapper[4789]: E0202 21:43:10.614343 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e98adb233ea0f16d2b2f46eddae689b1bb397a9ba532ccb91e5ece02b9397f95" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 02 21:43:10 crc kubenswrapper[4789]: E0202 21:43:10.614415 4789 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="399d9417-2065-4e92-89c5-a04dbeaf2cca" containerName="nova-cell1-conductor-conductor" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.615032 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/349cede5-331c-4454-8c9c-fda8fe886f07-config-data" (OuterVolumeSpecName: "config-data") pod "349cede5-331c-4454-8c9c-fda8fe886f07" (UID: "349cede5-331c-4454-8c9c-fda8fe886f07"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.624692 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/349cede5-331c-4454-8c9c-fda8fe886f07-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "349cede5-331c-4454-8c9c-fda8fe886f07" (UID: "349cede5-331c-4454-8c9c-fda8fe886f07"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.699755 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-config-data" (OuterVolumeSpecName: "config-data") pod "24fb18f4-7a0f-4ae5-9104-e7dc45a479ff" (UID: "24fb18f4-7a0f-4ae5-9104-e7dc45a479ff"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.702339 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "24fb18f4-7a0f-4ae5-9104-e7dc45a479ff" (UID: "24fb18f4-7a0f-4ae5-9104-e7dc45a479ff"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.706917 4789 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/349cede5-331c-4454-8c9c-fda8fe886f07-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.706947 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/349cede5-331c-4454-8c9c-fda8fe886f07-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.706956 4789 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/349cede5-331c-4454-8c9c-fda8fe886f07-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.706966 4789 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.706985 4789 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.706994 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.707003 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x454g\" (UniqueName: \"kubernetes.io/projected/1821366b-85cb-419f-9c57-9014300724be-kube-api-access-x454g\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.707013 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/078a8abb-3926-40cd-9340-0bef088c130f-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.707021 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88qk8\" (UniqueName: \"kubernetes.io/projected/078a8abb-3926-40cd-9340-0bef088c130f-kube-api-access-88qk8\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.723760 4789 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.736099 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/078a8abb-3926-40cd-9340-0bef088c130f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "078a8abb-3926-40cd-9340-0bef088c130f" (UID: "078a8abb-3926-40cd-9340-0bef088c130f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.803487 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1ae097e7-380b-4044-8598-abc3e1059356","Type":"ContainerDied","Data":"0a2cae00db145b6560fcc0b648c1c292b3eb7df490809622a8a50541cde04a0c"} Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.803841 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-3f0a-account-create-update-qc5vb"] Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.803889 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-3f0a-account-create-update-qc5vb"] Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.803919 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ac1d-account-create-update-rvstr" event={"ID":"1821366b-85cb-419f-9c57-9014300724be","Type":"ContainerDied","Data":"5cc091e69650107d52b98902ffc02640ad214738be467300f40730cf122182b5"} Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.803961 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"212c4e72-7988-4770-ba07-ae0362baac7e","Type":"ContainerDied","Data":"061832fab16bcca4a9567dd47e3aa9f4f35856126e73b29d74d98749c45032a7"} Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.803983 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"24fb18f4-7a0f-4ae5-9104-e7dc45a479ff","Type":"ContainerDied","Data":"a473f8d31f1d21a7c2b382a1e23b8b88890e3aa22648e9737f24020491949fe0"} Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.804003 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3bb81567-8536-4275-ab0e-a003ef904230","Type":"ContainerDied","Data":"d9549a00930229585c1a660c46c1ee179871330062dec64c5947fd34ad7860f5"} Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.804046 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7acbb536-0a08-4132-a84a-848735b0e7f4","Type":"ContainerDied","Data":"25ed4343b75caa0616ab66903bb372442dbf22b4a29f2c30b9fcf20df97021f7"} Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.804066 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"078a8abb-3926-40cd-9340-0bef088c130f","Type":"ContainerDied","Data":"3d1acdaf38b8f90e2888fd9bb9d6b2a8fab388dd54ec79c7218017d80c8b5670"} Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.804080 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68d9498c68-84jcz" event={"ID":"349cede5-331c-4454-8c9c-fda8fe886f07","Type":"ContainerDied","Data":"40a59db16d790bc9ade9d424000123015ece03fbc62bfe3a010f70a44b900736"} Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.804094 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68d9498c68-84jcz" event={"ID":"349cede5-331c-4454-8c9c-fda8fe886f07","Type":"ContainerDied","Data":"332ec0c69f4ecd8301ba4e0268ea9cf965fa65934dfdb14dfd73d8fdbe8fbed3"} Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.804133 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-h7zb6" event={"ID":"306f2aaf-92ed-4c14-92f4-a970a8240771","Type":"ContainerDied","Data":"a0fe3dde3cc43bc0d1319bffd7a71533f90a6dc9188661e056ed045674ac6da4"} Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.804150 4789 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b579f7f4-db1f-4d76-82fb-ef4cad438842","Type":"ContainerDied","Data":"0947f8cdd1f5dab6746e2ce88b87d9cc21b32de7ac54eec8ed4b2dc8b2ff1f61"} Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.804164 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b579f7f4-db1f-4d76-82fb-ef4cad438842","Type":"ContainerDied","Data":"28a7ed128e7bef7f569955019dd73ac9d95249468906497c95bad0c6363ebdd8"} Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.804183 4789 scope.go:117] "RemoveContainer" containerID="a8f8731d69214821017cbca7eb7712c56bfef4b649bbc850b8c1ced28aa04dc5" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.810127 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/078a8abb-3926-40cd-9340-0bef088c130f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.810249 4789 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.815373 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.843224 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/078a8abb-3926-40cd-9340-0bef088c130f-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "078a8abb-3926-40cd-9340-0bef088c130f" (UID: "078a8abb-3926-40cd-9340-0bef088c130f"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.918008 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3bb81567-8536-4275-ab0e-a003ef904230-httpd-run\") pod \"3bb81567-8536-4275-ab0e-a003ef904230\" (UID: \"3bb81567-8536-4275-ab0e-a003ef904230\") " Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.918368 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bb81567-8536-4275-ab0e-a003ef904230-public-tls-certs\") pod \"3bb81567-8536-4275-ab0e-a003ef904230\" (UID: \"3bb81567-8536-4275-ab0e-a003ef904230\") " Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.918436 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stsll\" (UniqueName: \"kubernetes.io/projected/3bb81567-8536-4275-ab0e-a003ef904230-kube-api-access-stsll\") pod \"3bb81567-8536-4275-ab0e-a003ef904230\" (UID: \"3bb81567-8536-4275-ab0e-a003ef904230\") " Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.918536 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bb81567-8536-4275-ab0e-a003ef904230-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3bb81567-8536-4275-ab0e-a003ef904230" (UID: "3bb81567-8536-4275-ab0e-a003ef904230"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.918528 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"3bb81567-8536-4275-ab0e-a003ef904230\" (UID: \"3bb81567-8536-4275-ab0e-a003ef904230\") " Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.918605 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bb81567-8536-4275-ab0e-a003ef904230-combined-ca-bundle\") pod \"3bb81567-8536-4275-ab0e-a003ef904230\" (UID: \"3bb81567-8536-4275-ab0e-a003ef904230\") " Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.918654 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bb81567-8536-4275-ab0e-a003ef904230-logs\") pod \"3bb81567-8536-4275-ab0e-a003ef904230\" (UID: \"3bb81567-8536-4275-ab0e-a003ef904230\") " Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.918687 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bb81567-8536-4275-ab0e-a003ef904230-scripts\") pod \"3bb81567-8536-4275-ab0e-a003ef904230\" (UID: \"3bb81567-8536-4275-ab0e-a003ef904230\") " Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.918714 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bb81567-8536-4275-ab0e-a003ef904230-config-data\") pod \"3bb81567-8536-4275-ab0e-a003ef904230\" (UID: \"3bb81567-8536-4275-ab0e-a003ef904230\") " Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.919126 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3b2df70-e493-42b7-9009-64c6bdaf4dad-operator-scripts\") pod \"keystone-4dc6-account-create-update-fjwtt\" (UID: \"e3b2df70-e493-42b7-9009-64c6bdaf4dad\") " pod="openstack/keystone-4dc6-account-create-update-fjwtt" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.919435 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px4jz\" (UniqueName: \"kubernetes.io/projected/e3b2df70-e493-42b7-9009-64c6bdaf4dad-kube-api-access-px4jz\") pod \"keystone-4dc6-account-create-update-fjwtt\" (UID: \"e3b2df70-e493-42b7-9009-64c6bdaf4dad\") " pod="openstack/keystone-4dc6-account-create-update-fjwtt" Feb 02 21:43:10 crc kubenswrapper[4789]: E0202 21:43:10.919866 4789 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Feb 02 21:43:10 crc kubenswrapper[4789]: E0202 21:43:10.919936 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3b2df70-e493-42b7-9009-64c6bdaf4dad-operator-scripts podName:e3b2df70-e493-42b7-9009-64c6bdaf4dad nodeName:}" failed. No retries permitted until 2026-02-02 21:43:12.919919902 +0000 UTC m=+1413.214944921 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/e3b2df70-e493-42b7-9009-64c6bdaf4dad-operator-scripts") pod "keystone-4dc6-account-create-update-fjwtt" (UID: "e3b2df70-e493-42b7-9009-64c6bdaf4dad") : configmap "openstack-scripts" not found Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.920127 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bb81567-8536-4275-ab0e-a003ef904230-logs" (OuterVolumeSpecName: "logs") pod "3bb81567-8536-4275-ab0e-a003ef904230" (UID: "3bb81567-8536-4275-ab0e-a003ef904230"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.920606 4789 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bb81567-8536-4275-ab0e-a003ef904230-logs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.920627 4789 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/078a8abb-3926-40cd-9340-0bef088c130f-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.920652 4789 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3bb81567-8536-4275-ab0e-a003ef904230-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.928985 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "3bb81567-8536-4275-ab0e-a003ef904230" (UID: "3bb81567-8536-4275-ab0e-a003ef904230"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 21:43:10 crc kubenswrapper[4789]: E0202 21:43:10.929188 4789 projected.go:194] Error preparing data for projected volume kube-api-access-px4jz for pod openstack/keystone-4dc6-account-create-update-fjwtt: failed to fetch token: serviceaccounts "galera-openstack" not found Feb 02 21:43:10 crc kubenswrapper[4789]: E0202 21:43:10.929265 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e3b2df70-e493-42b7-9009-64c6bdaf4dad-kube-api-access-px4jz podName:e3b2df70-e493-42b7-9009-64c6bdaf4dad nodeName:}" failed. No retries permitted until 2026-02-02 21:43:12.929244336 +0000 UTC m=+1413.224269345 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-px4jz" (UniqueName: "kubernetes.io/projected/e3b2df70-e493-42b7-9009-64c6bdaf4dad-kube-api-access-px4jz") pod "keystone-4dc6-account-create-update-fjwtt" (UID: "e3b2df70-e493-42b7-9009-64c6bdaf4dad") : failed to fetch token: serviceaccounts "galera-openstack" not found Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.962906 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bb81567-8536-4275-ab0e-a003ef904230-kube-api-access-stsll" (OuterVolumeSpecName: "kube-api-access-stsll") pod "3bb81567-8536-4275-ab0e-a003ef904230" (UID: "3bb81567-8536-4275-ab0e-a003ef904230"). InnerVolumeSpecName "kube-api-access-stsll". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.966297 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bb81567-8536-4275-ab0e-a003ef904230-scripts" (OuterVolumeSpecName: "scripts") pod "3bb81567-8536-4275-ab0e-a003ef904230" (UID: "3bb81567-8536-4275-ab0e-a003ef904230"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:10 crc kubenswrapper[4789]: I0202 21:43:10.998928 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bb81567-8536-4275-ab0e-a003ef904230-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3bb81567-8536-4275-ab0e-a003ef904230" (UID: "3bb81567-8536-4275-ab0e-a003ef904230"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.013123 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bb81567-8536-4275-ab0e-a003ef904230-config-data" (OuterVolumeSpecName: "config-data") pod "3bb81567-8536-4275-ab0e-a003ef904230" (UID: "3bb81567-8536-4275-ab0e-a003ef904230"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.022734 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stsll\" (UniqueName: \"kubernetes.io/projected/3bb81567-8536-4275-ab0e-a003ef904230-kube-api-access-stsll\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.022796 4789 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.022807 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bb81567-8536-4275-ab0e-a003ef904230-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.022817 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bb81567-8536-4275-ab0e-a003ef904230-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.022826 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bb81567-8536-4275-ab0e-a003ef904230-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.029666 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bb81567-8536-4275-ab0e-a003ef904230-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3bb81567-8536-4275-ab0e-a003ef904230" (UID: "3bb81567-8536-4275-ab0e-a003ef904230"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.059844 4789 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.063697 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-4dc6-account-create-update-fjwtt" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.073217 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.079183 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.099498 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nn5kg" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.099722 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nn5kg" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.100951 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6b8b9b54f6-jfnqs" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.103841 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.104531 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.105675 4789 scope.go:117] "RemoveContainer" containerID="a473f8d31f1d21a7c2b382a1e23b8b88890e3aa22648e9737f24020491949fe0" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.122906 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-ac1d-account-create-update-rvstr"] Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.123532 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-combined-ca-bundle\") pod \"7d53e4c0-add2-4cfd-bbea-e0a1d3196091\" (UID: \"7d53e4c0-add2-4cfd-bbea-e0a1d3196091\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.123570 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-config-data\") pod \"7d53e4c0-add2-4cfd-bbea-e0a1d3196091\" (UID: \"7d53e4c0-add2-4cfd-bbea-e0a1d3196091\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.123611 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7acbb536-0a08-4132-a84a-848735b0e7f4-etc-machine-id\") pod \"7acbb536-0a08-4132-a84a-848735b0e7f4\" (UID: \"7acbb536-0a08-4132-a84a-848735b0e7f4\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.123645 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9lg9\" (UniqueName: \"kubernetes.io/projected/1ae097e7-380b-4044-8598-abc3e1059356-kube-api-access-h9lg9\") pod \"1ae097e7-380b-4044-8598-abc3e1059356\" (UID: \"1ae097e7-380b-4044-8598-abc3e1059356\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.123665 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ae097e7-380b-4044-8598-abc3e1059356-combined-ca-bundle\") pod \"1ae097e7-380b-4044-8598-abc3e1059356\" (UID: \"1ae097e7-380b-4044-8598-abc3e1059356\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.123692 4789 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7acbb536-0a08-4132-a84a-848735b0e7f4-logs\") pod \"7acbb536-0a08-4132-a84a-848735b0e7f4\" (UID: \"7acbb536-0a08-4132-a84a-848735b0e7f4\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.123706 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-logs\") pod \"7d53e4c0-add2-4cfd-bbea-e0a1d3196091\" (UID: \"7d53e4c0-add2-4cfd-bbea-e0a1d3196091\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.123748 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ae097e7-380b-4044-8598-abc3e1059356-internal-tls-certs\") pod \"1ae097e7-380b-4044-8598-abc3e1059356\" (UID: \"1ae097e7-380b-4044-8598-abc3e1059356\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.123780 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mbtn\" (UniqueName: \"kubernetes.io/projected/7acbb536-0a08-4132-a84a-848735b0e7f4-kube-api-access-2mbtn\") pod \"7acbb536-0a08-4132-a84a-848735b0e7f4\" (UID: \"7acbb536-0a08-4132-a84a-848735b0e7f4\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.123795 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ae097e7-380b-4044-8598-abc3e1059356-config-data\") pod \"1ae097e7-380b-4044-8598-abc3e1059356\" (UID: \"1ae097e7-380b-4044-8598-abc3e1059356\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.123811 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-config-data-custom\") pod \"7d53e4c0-add2-4cfd-bbea-e0a1d3196091\" (UID: \"7d53e4c0-add2-4cfd-bbea-e0a1d3196091\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.123834 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7acbb536-0a08-4132-a84a-848735b0e7f4-config-data\") pod \"7acbb536-0a08-4132-a84a-848735b0e7f4\" (UID: \"7acbb536-0a08-4132-a84a-848735b0e7f4\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.123865 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-internal-tls-certs\") pod \"7d53e4c0-add2-4cfd-bbea-e0a1d3196091\" (UID: \"7d53e4c0-add2-4cfd-bbea-e0a1d3196091\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.123909 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7acbb536-0a08-4132-a84a-848735b0e7f4-scripts\") pod \"7acbb536-0a08-4132-a84a-848735b0e7f4\" (UID: \"7acbb536-0a08-4132-a84a-848735b0e7f4\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.123937 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-public-tls-certs\") pod \"7d53e4c0-add2-4cfd-bbea-e0a1d3196091\" (UID: \"7d53e4c0-add2-4cfd-bbea-e0a1d3196091\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.123956 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7acbb536-0a08-4132-a84a-848735b0e7f4-internal-tls-certs\") pod \"7acbb536-0a08-4132-a84a-848735b0e7f4\" (UID: \"7acbb536-0a08-4132-a84a-848735b0e7f4\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.123972 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7acbb536-0a08-4132-a84a-848735b0e7f4-public-tls-certs\") pod \"7acbb536-0a08-4132-a84a-848735b0e7f4\" (UID: \"7acbb536-0a08-4132-a84a-848735b0e7f4\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.123993 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7acbb536-0a08-4132-a84a-848735b0e7f4-config-data-custom\") pod \"7acbb536-0a08-4132-a84a-848735b0e7f4\" (UID: \"7acbb536-0a08-4132-a84a-848735b0e7f4\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.124008 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ae097e7-380b-4044-8598-abc3e1059356-public-tls-certs\") pod \"1ae097e7-380b-4044-8598-abc3e1059356\" (UID: \"1ae097e7-380b-4044-8598-abc3e1059356\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.124024 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jhtx\" (UniqueName: \"kubernetes.io/projected/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-kube-api-access-9jhtx\") pod \"7d53e4c0-add2-4cfd-bbea-e0a1d3196091\" (UID: \"7d53e4c0-add2-4cfd-bbea-e0a1d3196091\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.124048 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7acbb536-0a08-4132-a84a-848735b0e7f4-combined-ca-bundle\") pod \"7acbb536-0a08-4132-a84a-848735b0e7f4\" (UID: \"7acbb536-0a08-4132-a84a-848735b0e7f4\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.124185 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ae097e7-380b-4044-8598-abc3e1059356-logs\") pod \"1ae097e7-380b-4044-8598-abc3e1059356\" (UID: \"1ae097e7-380b-4044-8598-abc3e1059356\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.124370 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7acbb536-0a08-4132-a84a-848735b0e7f4-logs" (OuterVolumeSpecName: "logs") pod "7acbb536-0a08-4132-a84a-848735b0e7f4" (UID: "7acbb536-0a08-4132-a84a-848735b0e7f4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.124808 4789 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7acbb536-0a08-4132-a84a-848735b0e7f4-logs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.124844 4789 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bb81567-8536-4275-ab0e-a003ef904230-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.124904 4789 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.124947 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7acbb536-0a08-4132-a84a-848735b0e7f4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7acbb536-0a08-4132-a84a-848735b0e7f4" (UID: "7acbb536-0a08-4132-a84a-848735b0e7f4"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.125697 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-logs" (OuterVolumeSpecName: "logs") pod "7d53e4c0-add2-4cfd-bbea-e0a1d3196091" (UID: "7d53e4c0-add2-4cfd-bbea-e0a1d3196091"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.128980 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.129331 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ae097e7-380b-4044-8598-abc3e1059356-kube-api-access-h9lg9" (OuterVolumeSpecName: "kube-api-access-h9lg9") pod "1ae097e7-380b-4044-8598-abc3e1059356" (UID: "1ae097e7-380b-4044-8598-abc3e1059356"). InnerVolumeSpecName "kube-api-access-h9lg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.132791 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-ac1d-account-create-update-rvstr"] Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.139633 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ae097e7-380b-4044-8598-abc3e1059356-logs" (OuterVolumeSpecName: "logs") pod "1ae097e7-380b-4044-8598-abc3e1059356" (UID: "1ae097e7-380b-4044-8598-abc3e1059356"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.165219 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-kube-api-access-9jhtx" (OuterVolumeSpecName: "kube-api-access-9jhtx") pod "7d53e4c0-add2-4cfd-bbea-e0a1d3196091" (UID: "7d53e4c0-add2-4cfd-bbea-e0a1d3196091"). InnerVolumeSpecName "kube-api-access-9jhtx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.165727 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7acbb536-0a08-4132-a84a-848735b0e7f4-scripts" (OuterVolumeSpecName: "scripts") pod "7acbb536-0a08-4132-a84a-848735b0e7f4" (UID: "7acbb536-0a08-4132-a84a-848735b0e7f4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.166896 4789 scope.go:117] "RemoveContainer" containerID="8d41fcf5f05241ca690bf9be181cdbc0af2afc9c357aaaa7b133a7a3685d2601" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.171687 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-h7zb6"] Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.173835 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7acbb536-0a08-4132-a84a-848735b0e7f4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7acbb536-0a08-4132-a84a-848735b0e7f4" (UID: "7acbb536-0a08-4132-a84a-848735b0e7f4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.185840 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-h7zb6"] Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.188588 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6d964c7466-fpqld" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.198468 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-959f7f8c5-pmqjf" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.200810 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7acbb536-0a08-4132-a84a-848735b0e7f4-kube-api-access-2mbtn" (OuterVolumeSpecName: "kube-api-access-2mbtn") pod "7acbb536-0a08-4132-a84a-848735b0e7f4" (UID: "7acbb536-0a08-4132-a84a-848735b0e7f4"). InnerVolumeSpecName "kube-api-access-2mbtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.201415 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7d53e4c0-add2-4cfd-bbea-e0a1d3196091" (UID: "7d53e4c0-add2-4cfd-bbea-e0a1d3196091"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.220882 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.225965 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzllv\" (UniqueName: \"kubernetes.io/projected/5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1-kube-api-access-mzllv\") pod \"5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1\" (UID: \"5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.226026 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1-config-data\") pod \"5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1\" (UID: \"5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.226055 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvzsv\" (UniqueName: \"kubernetes.io/projected/802bda4f-2363-4ca6-a126-2ccf1448ed71-kube-api-access-bvzsv\") pod \"802bda4f-2363-4ca6-a126-2ccf1448ed71\" (UID: \"802bda4f-2363-4ca6-a126-2ccf1448ed71\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.226080 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/802bda4f-2363-4ca6-a126-2ccf1448ed71-config-data\") pod \"802bda4f-2363-4ca6-a126-2ccf1448ed71\" (UID: \"802bda4f-2363-4ca6-a126-2ccf1448ed71\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.226162 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/802bda4f-2363-4ca6-a126-2ccf1448ed71-config-data-custom\") pod \"802bda4f-2363-4ca6-a126-2ccf1448ed71\" (UID: \"802bda4f-2363-4ca6-a126-2ccf1448ed71\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.226196 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1-logs\") pod \"5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1\" (UID: \"5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.226229 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1-config-data-custom\") pod \"5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1\" (UID: \"5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.226771 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ba13473-b423-43a0-ab15-9d6be616cc7b-logs\") pod \"0ba13473-b423-43a0-ab15-9d6be616cc7b\" (UID: \"0ba13473-b423-43a0-ab15-9d6be616cc7b\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.226803 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfzd2\" (UniqueName: \"kubernetes.io/projected/0ba13473-b423-43a0-ab15-9d6be616cc7b-kube-api-access-dfzd2\") pod \"0ba13473-b423-43a0-ab15-9d6be616cc7b\" (UID: \"0ba13473-b423-43a0-ab15-9d6be616cc7b\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.226894 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ba13473-b423-43a0-ab15-9d6be616cc7b-nova-metadata-tls-certs\") pod \"0ba13473-b423-43a0-ab15-9d6be616cc7b\" (UID: \"0ba13473-b423-43a0-ab15-9d6be616cc7b\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.226942 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ba13473-b423-43a0-ab15-9d6be616cc7b-config-data\") pod \"0ba13473-b423-43a0-ab15-9d6be616cc7b\" (UID: \"0ba13473-b423-43a0-ab15-9d6be616cc7b\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.226977 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/802bda4f-2363-4ca6-a126-2ccf1448ed71-combined-ca-bundle\") pod \"802bda4f-2363-4ca6-a126-2ccf1448ed71\" (UID: \"802bda4f-2363-4ca6-a126-2ccf1448ed71\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.227048 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ba13473-b423-43a0-ab15-9d6be616cc7b-combined-ca-bundle\") pod \"0ba13473-b423-43a0-ab15-9d6be616cc7b\" (UID: \"0ba13473-b423-43a0-ab15-9d6be616cc7b\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.227128 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/802bda4f-2363-4ca6-a126-2ccf1448ed71-logs\") pod \"802bda4f-2363-4ca6-a126-2ccf1448ed71\" (UID: \"802bda4f-2363-4ca6-a126-2ccf1448ed71\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.227164 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1-combined-ca-bundle\") pod \"5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1\" (UID: \"5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.227557 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7acbb536-0a08-4132-a84a-848735b0e7f4-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.227589 4789 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7acbb536-0a08-4132-a84a-848735b0e7f4-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.227600 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jhtx\" (UniqueName: \"kubernetes.io/projected/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-kube-api-access-9jhtx\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.227608 4789 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ae097e7-380b-4044-8598-abc3e1059356-logs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.227617 4789 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7acbb536-0a08-4132-a84a-848735b0e7f4-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.227626 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9lg9\" (UniqueName: \"kubernetes.io/projected/1ae097e7-380b-4044-8598-abc3e1059356-kube-api-access-h9lg9\") on 
node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.227635 4789 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-logs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.227643 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mbtn\" (UniqueName: \"kubernetes.io/projected/7acbb536-0a08-4132-a84a-848735b0e7f4-kube-api-access-2mbtn\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.227652 4789 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.233244 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.235530 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ba13473-b423-43a0-ab15-9d6be616cc7b-logs" (OuterVolumeSpecName: "logs") pod "0ba13473-b423-43a0-ab15-9d6be616cc7b" (UID: "0ba13473-b423-43a0-ab15-9d6be616cc7b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.235694 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/802bda4f-2363-4ca6-a126-2ccf1448ed71-logs" (OuterVolumeSpecName: "logs") pod "802bda4f-2363-4ca6-a126-2ccf1448ed71" (UID: "802bda4f-2363-4ca6-a126-2ccf1448ed71"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.240696 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1-logs" (OuterVolumeSpecName: "logs") pod "5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1" (UID: "5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.258072 4789 scope.go:117] "RemoveContainer" containerID="3d1acdaf38b8f90e2888fd9bb9d6b2a8fab388dd54ec79c7218017d80c8b5670" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.295654 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1-kube-api-access-mzllv" (OuterVolumeSpecName: "kube-api-access-mzllv") pod "5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1" (UID: "5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1"). InnerVolumeSpecName "kube-api-access-mzllv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.295656 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1" (UID: "5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.295729 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/802bda4f-2363-4ca6-a126-2ccf1448ed71-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "802bda4f-2363-4ca6-a126-2ccf1448ed71" (UID: "802bda4f-2363-4ca6-a126-2ccf1448ed71"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.295757 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ba13473-b423-43a0-ab15-9d6be616cc7b-kube-api-access-dfzd2" (OuterVolumeSpecName: "kube-api-access-dfzd2") pod "0ba13473-b423-43a0-ab15-9d6be616cc7b" (UID: "0ba13473-b423-43a0-ab15-9d6be616cc7b"). InnerVolumeSpecName "kube-api-access-dfzd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.295785 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/802bda4f-2363-4ca6-a126-2ccf1448ed71-kube-api-access-bvzsv" (OuterVolumeSpecName: "kube-api-access-bvzsv") pod "802bda4f-2363-4ca6-a126-2ccf1448ed71" (UID: "802bda4f-2363-4ca6-a126-2ccf1448ed71"). InnerVolumeSpecName "kube-api-access-bvzsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.295814 4789 scope.go:117] "RemoveContainer" containerID="40a59db16d790bc9ade9d424000123015ece03fbc62bfe3a010f70a44b900736" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.299217 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.305661 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ae097e7-380b-4044-8598-abc3e1059356-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ae097e7-380b-4044-8598-abc3e1059356" (UID: "1ae097e7-380b-4044-8598-abc3e1059356"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.324153 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.329611 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-68d9498c68-84jcz"] Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.332316 4789 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ba13473-b423-43a0-ab15-9d6be616cc7b-logs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.332349 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfzd2\" (UniqueName: \"kubernetes.io/projected/0ba13473-b423-43a0-ab15-9d6be616cc7b-kube-api-access-dfzd2\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.332363 4789 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/802bda4f-2363-4ca6-a126-2ccf1448ed71-logs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.332374 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzllv\" (UniqueName: \"kubernetes.io/projected/5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1-kube-api-access-mzllv\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.332386 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvzsv\" (UniqueName: \"kubernetes.io/projected/802bda4f-2363-4ca6-a126-2ccf1448ed71-kube-api-access-bvzsv\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.332397 4789 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/802bda4f-2363-4ca6-a126-2ccf1448ed71-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.332408 4789 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1-logs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.332421 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ae097e7-380b-4044-8598-abc3e1059356-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.332434 4789 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.334785 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7acbb536-0a08-4132-a84a-848735b0e7f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7acbb536-0a08-4132-a84a-848735b0e7f4" (UID: "7acbb536-0a08-4132-a84a-848735b0e7f4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.334812 4789 scope.go:117] "RemoveContainer" containerID="7594027e1aa66be1d86466bb05745dd33d3b9a0771c64f3b195b5d3c4ef5fbca" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.341946 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-68d9498c68-84jcz"] Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.365860 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d53e4c0-add2-4cfd-bbea-e0a1d3196091" (UID: "7d53e4c0-add2-4cfd-bbea-e0a1d3196091"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.375790 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ae097e7-380b-4044-8598-abc3e1059356-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1ae097e7-380b-4044-8598-abc3e1059356" (UID: "1ae097e7-380b-4044-8598-abc3e1059356"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.376102 4789 scope.go:117] "RemoveContainer" containerID="40a59db16d790bc9ade9d424000123015ece03fbc62bfe3a010f70a44b900736" Feb 02 21:43:11 crc kubenswrapper[4789]: E0202 21:43:11.376435 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40a59db16d790bc9ade9d424000123015ece03fbc62bfe3a010f70a44b900736\": container with ID starting with 40a59db16d790bc9ade9d424000123015ece03fbc62bfe3a010f70a44b900736 not found: ID does not exist" containerID="40a59db16d790bc9ade9d424000123015ece03fbc62bfe3a010f70a44b900736" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.376473 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40a59db16d790bc9ade9d424000123015ece03fbc62bfe3a010f70a44b900736"} err="failed to get container status \"40a59db16d790bc9ade9d424000123015ece03fbc62bfe3a010f70a44b900736\": rpc error: code = NotFound desc = could not find container \"40a59db16d790bc9ade9d424000123015ece03fbc62bfe3a010f70a44b900736\": container with ID starting with 40a59db16d790bc9ade9d424000123015ece03fbc62bfe3a010f70a44b900736 not found: ID does not exist" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.376500 4789 scope.go:117] "RemoveContainer" containerID="7594027e1aa66be1d86466bb05745dd33d3b9a0771c64f3b195b5d3c4ef5fbca" Feb 02 21:43:11 crc kubenswrapper[4789]: E0202 21:43:11.376823 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7594027e1aa66be1d86466bb05745dd33d3b9a0771c64f3b195b5d3c4ef5fbca\": container with ID starting with 7594027e1aa66be1d86466bb05745dd33d3b9a0771c64f3b195b5d3c4ef5fbca not found: ID does not exist" containerID="7594027e1aa66be1d86466bb05745dd33d3b9a0771c64f3b195b5d3c4ef5fbca" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.376843 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7594027e1aa66be1d86466bb05745dd33d3b9a0771c64f3b195b5d3c4ef5fbca"} err="failed to get container status \"7594027e1aa66be1d86466bb05745dd33d3b9a0771c64f3b195b5d3c4ef5fbca\": rpc error: code = NotFound desc = could not find 
container \"7594027e1aa66be1d86466bb05745dd33d3b9a0771c64f3b195b5d3c4ef5fbca\": container with ID starting with 7594027e1aa66be1d86466bb05745dd33d3b9a0771c64f3b195b5d3c4ef5fbca not found: ID does not exist" Feb 02 21:43:11 crc kubenswrapper[4789]: E0202 21:43:11.391044 4789 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 02 21:43:11 crc kubenswrapper[4789]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Feb 02 21:43:11 crc kubenswrapper[4789]: Feb 02 21:43:11 crc kubenswrapper[4789]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 02 21:43:11 crc kubenswrapper[4789]: Feb 02 21:43:11 crc kubenswrapper[4789]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 02 21:43:11 crc kubenswrapper[4789]: Feb 02 21:43:11 crc kubenswrapper[4789]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 02 21:43:11 crc kubenswrapper[4789]: Feb 02 21:43:11 crc kubenswrapper[4789]: if [ -n "" ]; then Feb 02 21:43:11 crc kubenswrapper[4789]: GRANT_DATABASE="" Feb 02 21:43:11 crc kubenswrapper[4789]: else Feb 02 21:43:11 crc kubenswrapper[4789]: GRANT_DATABASE="*" Feb 02 21:43:11 crc kubenswrapper[4789]: fi Feb 02 21:43:11 crc kubenswrapper[4789]: Feb 02 21:43:11 crc kubenswrapper[4789]: # going for maximum compatibility here: Feb 02 21:43:11 crc kubenswrapper[4789]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 02 21:43:11 crc kubenswrapper[4789]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 02 21:43:11 crc kubenswrapper[4789]: # 3. create user with CREATE but then do all password and TLS with ALTER to Feb 02 21:43:11 crc kubenswrapper[4789]: # support updates Feb 02 21:43:11 crc kubenswrapper[4789]: Feb 02 21:43:11 crc kubenswrapper[4789]: $MYSQL_CMD < logger="UnhandledError" Feb 02 21:43:11 crc kubenswrapper[4789]: E0202 21:43:11.392325 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-sbz55" podUID="64a9a0c6-c663-400b-8c60-43c582b7cac0" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.396819 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/802bda4f-2363-4ca6-a126-2ccf1448ed71-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "802bda4f-2363-4ca6-a126-2ccf1448ed71" (UID: "802bda4f-2363-4ca6-a126-2ccf1448ed71"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.402720 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ba13473-b423-43a0-ab15-9d6be616cc7b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ba13473-b423-43a0-ab15-9d6be616cc7b" (UID: "0ba13473-b423-43a0-ab15-9d6be616cc7b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.414099 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ae097e7-380b-4044-8598-abc3e1059356-config-data" (OuterVolumeSpecName: "config-data") pod "1ae097e7-380b-4044-8598-abc3e1059356" (UID: "1ae097e7-380b-4044-8598-abc3e1059356"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.432206 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-sbz55"] Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.433264 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-config-data" (OuterVolumeSpecName: "config-data") pod "7d53e4c0-add2-4cfd-bbea-e0a1d3196091" (UID: "7d53e4c0-add2-4cfd-bbea-e0a1d3196091"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.434107 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ae097e7-380b-4044-8598-abc3e1059356-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.434132 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/802bda4f-2363-4ca6-a126-2ccf1448ed71-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.434144 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ba13473-b423-43a0-ab15-9d6be616cc7b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.434154 4789 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ae097e7-380b-4044-8598-abc3e1059356-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.434164 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7acbb536-0a08-4132-a84a-848735b0e7f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.434173 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.434181 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.443328 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1-config-data" (OuterVolumeSpecName: "config-data") pod "5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1" (UID: "5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.447219 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7acbb536-0a08-4132-a84a-848735b0e7f4-config-data" (OuterVolumeSpecName: "config-data") pod "7acbb536-0a08-4132-a84a-848735b0e7f4" (UID: "7acbb536-0a08-4132-a84a-848735b0e7f4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.449917 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1" (UID: "5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.462557 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ba13473-b423-43a0-ab15-9d6be616cc7b-config-data" (OuterVolumeSpecName: "config-data") pod "0ba13473-b423-43a0-ab15-9d6be616cc7b" (UID: "0ba13473-b423-43a0-ab15-9d6be616cc7b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.463370 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ae097e7-380b-4044-8598-abc3e1059356-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1ae097e7-380b-4044-8598-abc3e1059356" (UID: "1ae097e7-380b-4044-8598-abc3e1059356"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.470649 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7d53e4c0-add2-4cfd-bbea-e0a1d3196091" (UID: "7d53e4c0-add2-4cfd-bbea-e0a1d3196091"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.471871 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ba13473-b423-43a0-ab15-9d6be616cc7b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "0ba13473-b423-43a0-ab15-9d6be616cc7b" (UID: "0ba13473-b423-43a0-ab15-9d6be616cc7b"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.479741 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7d53e4c0-add2-4cfd-bbea-e0a1d3196091" (UID: "7d53e4c0-add2-4cfd-bbea-e0a1d3196091"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.485719 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/802bda4f-2363-4ca6-a126-2ccf1448ed71-config-data" (OuterVolumeSpecName: "config-data") pod "802bda4f-2363-4ca6-a126-2ccf1448ed71" (UID: "802bda4f-2363-4ca6-a126-2ccf1448ed71"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.493807 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7acbb536-0a08-4132-a84a-848735b0e7f4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7acbb536-0a08-4132-a84a-848735b0e7f4" (UID: "7acbb536-0a08-4132-a84a-848735b0e7f4"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.500968 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7acbb536-0a08-4132-a84a-848735b0e7f4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7acbb536-0a08-4132-a84a-848735b0e7f4" (UID: "7acbb536-0a08-4132-a84a-848735b0e7f4"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.535915 4789 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.535944 4789 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7acbb536-0a08-4132-a84a-848735b0e7f4-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.535963 4789 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7acbb536-0a08-4132-a84a-848735b0e7f4-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.535974 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.535984 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.535994 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/802bda4f-2363-4ca6-a126-2ccf1448ed71-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.536003 4789 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ae097e7-380b-4044-8598-abc3e1059356-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.536012 4789 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ba13473-b423-43a0-ab15-9d6be616cc7b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.536022 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7acbb536-0a08-4132-a84a-848735b0e7f4-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.536031 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ba13473-b423-43a0-ab15-9d6be616cc7b-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.536039 4789 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d53e4c0-add2-4cfd-bbea-e0a1d3196091-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.548735 4789 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.609211 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b8b9b54f6-jfnqs" event={"ID":"7d53e4c0-add2-4cfd-bbea-e0a1d3196091","Type":"ContainerDied","Data":"7a775f6f6d427969f7331fcfc27a064e0d64b044f52aba9cb29e1ee6c0b0084f"} Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.609267 4789 scope.go:117] "RemoveContainer" containerID="25969b57d6ee15da22b2fd6fac46c116130225ea93ef2013003c96e7fe1d6cca" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.609726 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6b8b9b54f6-jfnqs" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.624262 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.624265 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3bb81567-8536-4275-ab0e-a003ef904230","Type":"ContainerDied","Data":"767ead280a730bc38964cad1cff17fed926d00c1769f11adb821fd37e93cd85d"} Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.634202 4789 generic.go:334] "Generic (PLEG): container finished" podID="399d9417-2065-4e92-89c5-a04dbeaf2cca" containerID="e98adb233ea0f16d2b2f46eddae689b1bb397a9ba532ccb91e5ece02b9397f95" exitCode=0 Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.634257 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"399d9417-2065-4e92-89c5-a04dbeaf2cca","Type":"ContainerDied","Data":"e98adb233ea0f16d2b2f46eddae689b1bb397a9ba532ccb91e5ece02b9397f95"} Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.634283 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"399d9417-2065-4e92-89c5-a04dbeaf2cca","Type":"ContainerDied","Data":"aa4caa858ae8b9bebea327895e4749f90535e00fc76b86bda685bc300c0b326a"} Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.634344 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.638031 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6695\" (UniqueName: \"kubernetes.io/projected/399d9417-2065-4e92-89c5-a04dbeaf2cca-kube-api-access-b6695\") pod \"399d9417-2065-4e92-89c5-a04dbeaf2cca\" (UID: \"399d9417-2065-4e92-89c5-a04dbeaf2cca\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.638150 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/399d9417-2065-4e92-89c5-a04dbeaf2cca-combined-ca-bundle\") pod \"399d9417-2065-4e92-89c5-a04dbeaf2cca\" (UID: \"399d9417-2065-4e92-89c5-a04dbeaf2cca\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.638197 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/399d9417-2065-4e92-89c5-a04dbeaf2cca-config-data\") pod \"399d9417-2065-4e92-89c5-a04dbeaf2cca\" (UID: \"399d9417-2065-4e92-89c5-a04dbeaf2cca\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.642185 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/399d9417-2065-4e92-89c5-a04dbeaf2cca-kube-api-access-b6695" (OuterVolumeSpecName: "kube-api-access-b6695") pod "399d9417-2065-4e92-89c5-a04dbeaf2cca" (UID: "399d9417-2065-4e92-89c5-a04dbeaf2cca"). InnerVolumeSpecName "kube-api-access-b6695". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.646420 4789 generic.go:334] "Generic (PLEG): container finished" podID="802bda4f-2363-4ca6-a126-2ccf1448ed71" containerID="0a62728aedd4480cfd181d88be8ac00afa4f69cd9f3b44bd97a2e8305a5f31af" exitCode=0 Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.646486 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6d964c7466-fpqld" event={"ID":"802bda4f-2363-4ca6-a126-2ccf1448ed71","Type":"ContainerDied","Data":"0a62728aedd4480cfd181d88be8ac00afa4f69cd9f3b44bd97a2e8305a5f31af"} Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.646512 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6d964c7466-fpqld" event={"ID":"802bda4f-2363-4ca6-a126-2ccf1448ed71","Type":"ContainerDied","Data":"46aa30d8fda521d7d43ed37e4ab7c1f2d26f48e3a6b0587028c5fd3ccb7aec9e"} Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.646571 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6d964c7466-fpqld" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.660229 4789 generic.go:334] "Generic (PLEG): container finished" podID="5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1" containerID="7529e703a7ba79a3c7d9ce9adbb48f6652641d0b42790d00cab813d47b85c9b6" exitCode=0 Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.660352 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-959f7f8c5-pmqjf" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.660496 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-959f7f8c5-pmqjf" event={"ID":"5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1","Type":"ContainerDied","Data":"7529e703a7ba79a3c7d9ce9adbb48f6652641d0b42790d00cab813d47b85c9b6"} Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.660700 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-959f7f8c5-pmqjf" event={"ID":"5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1","Type":"ContainerDied","Data":"7f48a67ac57235127ac9a38de9ba7ffb00f02956d68819cd8606b5387e065667"} Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.660667 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/399d9417-2065-4e92-89c5-a04dbeaf2cca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "399d9417-2065-4e92-89c5-a04dbeaf2cca" (UID: "399d9417-2065-4e92-89c5-a04dbeaf2cca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.664489 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0ba13473-b423-43a0-ab15-9d6be616cc7b","Type":"ContainerDied","Data":"0c3c5af28fbda31f21533271ca105636c61287445c3003313b9bbf88a863fc21"} Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.664565 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.667033 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sbz55" event={"ID":"64a9a0c6-c663-400b-8c60-43c582b7cac0","Type":"ContainerStarted","Data":"a22206a7859412eb7185bb7a1f7c3bcdaf737b02c1bb9a99d4ed045ad6143df8"} Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.686448 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_ab56a6da-6187-4fa6-bd4e-93046de2d432/ovn-northd/0.log" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.686494 4789 generic.go:334] "Generic (PLEG): container finished" podID="ab56a6da-6187-4fa6-bd4e-93046de2d432" containerID="9404edbdc9c7a81d7c48cab8b8c60b1fc5de57f009d5e80c304dd34c2eae41c2" exitCode=139 Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.686562 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ab56a6da-6187-4fa6-bd4e-93046de2d432","Type":"ContainerDied","Data":"9404edbdc9c7a81d7c48cab8b8c60b1fc5de57f009d5e80c304dd34c2eae41c2"} Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.687371 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/399d9417-2065-4e92-89c5-a04dbeaf2cca-config-data" (OuterVolumeSpecName: "config-data") pod "399d9417-2065-4e92-89c5-a04dbeaf2cca" (UID: "399d9417-2065-4e92-89c5-a04dbeaf2cca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.689948 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1ae097e7-380b-4044-8598-abc3e1059356","Type":"ContainerDied","Data":"78724d0ec65400d21856ae24e9f73faa2306cb0d027d1efa4277f832f50d08ac"} Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.689958 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.694814 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7acbb536-0a08-4132-a84a-848735b0e7f4","Type":"ContainerDied","Data":"786426b72abb5ee16b53d1263b8a0ac435b1b567312952f1f1931e7409c1d80f"} Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.694838 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.697354 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4dc6-account-create-update-fjwtt" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.740230 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6695\" (UniqueName: \"kubernetes.io/projected/399d9417-2065-4e92-89c5-a04dbeaf2cca-kube-api-access-b6695\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.740254 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/399d9417-2065-4e92-89c5-a04dbeaf2cca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.740265 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/399d9417-2065-4e92-89c5-a04dbeaf2cca-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.786784 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_ab56a6da-6187-4fa6-bd4e-93046de2d432/ovn-northd/0.log" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.786876 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.793190 4789 scope.go:117] "RemoveContainer" containerID="4d137886e123097c6077816303161de8f1beb2278c8b0ec65bb058b0d9f03c90" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.840638 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ab56a6da-6187-4fa6-bd4e-93046de2d432-ovn-rundir\") pod \"ab56a6da-6187-4fa6-bd4e-93046de2d432\" (UID: \"ab56a6da-6187-4fa6-bd4e-93046de2d432\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.840705 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab56a6da-6187-4fa6-bd4e-93046de2d432-combined-ca-bundle\") pod \"ab56a6da-6187-4fa6-bd4e-93046de2d432\" (UID: \"ab56a6da-6187-4fa6-bd4e-93046de2d432\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.840743 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab56a6da-6187-4fa6-bd4e-93046de2d432-metrics-certs-tls-certs\") pod \"ab56a6da-6187-4fa6-bd4e-93046de2d432\" (UID: \"ab56a6da-6187-4fa6-bd4e-93046de2d432\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.840783 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab56a6da-6187-4fa6-bd4e-93046de2d432-config\") pod \"ab56a6da-6187-4fa6-bd4e-93046de2d432\" (UID: \"ab56a6da-6187-4fa6-bd4e-93046de2d432\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.840810 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab56a6da-6187-4fa6-bd4e-93046de2d432-ovn-northd-tls-certs\") pod \"ab56a6da-6187-4fa6-bd4e-93046de2d432\" (UID: \"ab56a6da-6187-4fa6-bd4e-93046de2d432\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.840867 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fs2r5\" (UniqueName: \"kubernetes.io/projected/ab56a6da-6187-4fa6-bd4e-93046de2d432-kube-api-access-fs2r5\") pod \"ab56a6da-6187-4fa6-bd4e-93046de2d432\" (UID: \"ab56a6da-6187-4fa6-bd4e-93046de2d432\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.840885 4789 scope.go:117] "RemoveContainer" containerID="d9549a00930229585c1a660c46c1ee179871330062dec64c5947fd34ad7860f5" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.840896 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab56a6da-6187-4fa6-bd4e-93046de2d432-scripts\") pod \"ab56a6da-6187-4fa6-bd4e-93046de2d432\" (UID: \"ab56a6da-6187-4fa6-bd4e-93046de2d432\") " Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.841700 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab56a6da-6187-4fa6-bd4e-93046de2d432-config" (OuterVolumeSpecName: "config") pod "ab56a6da-6187-4fa6-bd4e-93046de2d432" (UID: "ab56a6da-6187-4fa6-bd4e-93046de2d432"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.841710 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab56a6da-6187-4fa6-bd4e-93046de2d432-scripts" (OuterVolumeSpecName: "scripts") pod "ab56a6da-6187-4fa6-bd4e-93046de2d432" (UID: "ab56a6da-6187-4fa6-bd4e-93046de2d432"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.841921 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab56a6da-6187-4fa6-bd4e-93046de2d432-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.841935 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab56a6da-6187-4fa6-bd4e-93046de2d432-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.848068 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab56a6da-6187-4fa6-bd4e-93046de2d432-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "ab56a6da-6187-4fa6-bd4e-93046de2d432" (UID: "ab56a6da-6187-4fa6-bd4e-93046de2d432"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.858710 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab56a6da-6187-4fa6-bd4e-93046de2d432-kube-api-access-fs2r5" (OuterVolumeSpecName: "kube-api-access-fs2r5") pod "ab56a6da-6187-4fa6-bd4e-93046de2d432" (UID: "ab56a6da-6187-4fa6-bd4e-93046de2d432"). InnerVolumeSpecName "kube-api-access-fs2r5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.877626 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab56a6da-6187-4fa6-bd4e-93046de2d432-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab56a6da-6187-4fa6-bd4e-93046de2d432" (UID: "ab56a6da-6187-4fa6-bd4e-93046de2d432"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.883024 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-4dc6-account-create-update-fjwtt"] Feb 02 21:43:11 crc kubenswrapper[4789]: E0202 21:43:11.887200 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dfd72fb016b8250b7d52cd2384cba2cc136043ef8ea07229e3afb0b578d3fbf4" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.888056 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-4dc6-account-create-update-fjwtt"] Feb 02 21:43:11 crc kubenswrapper[4789]: E0202 21:43:11.888412 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dfd72fb016b8250b7d52cd2384cba2cc136043ef8ea07229e3afb0b578d3fbf4" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 02 21:43:11 crc kubenswrapper[4789]: E0202 21:43:11.889457 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dfd72fb016b8250b7d52cd2384cba2cc136043ef8ea07229e3afb0b578d3fbf4" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 02 21:43:11 crc kubenswrapper[4789]: E0202 21:43:11.889491 4789 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b" containerName="nova-cell0-conductor-conductor" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.926719 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.934756 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab56a6da-6187-4fa6-bd4e-93046de2d432-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "ab56a6da-6187-4fa6-bd4e-93046de2d432" (UID: "ab56a6da-6187-4fa6-bd4e-93046de2d432"). InnerVolumeSpecName "ovn-northd-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.947449 4789 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab56a6da-6187-4fa6-bd4e-93046de2d432-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.965540 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.962814 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fs2r5\" (UniqueName: \"kubernetes.io/projected/ab56a6da-6187-4fa6-bd4e-93046de2d432-kube-api-access-fs2r5\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.972770 4789 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ab56a6da-6187-4fa6-bd4e-93046de2d432-ovn-rundir\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.972787 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab56a6da-6187-4fa6-bd4e-93046de2d432-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.972798 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-px4jz\" (UniqueName: \"kubernetes.io/projected/e3b2df70-e493-42b7-9009-64c6bdaf4dad-kube-api-access-px4jz\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.987737 4789 scope.go:117] "RemoveContainer" containerID="ce9ef55c9302edded2a55530533656268a3c7b21b0ae936aae0892ef6e043554" Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.992340 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 21:43:11 crc kubenswrapper[4789]: I0202 21:43:11.995471 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab56a6da-6187-4fa6-bd4e-93046de2d432-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "ab56a6da-6187-4fa6-bd4e-93046de2d432" (UID: "ab56a6da-6187-4fa6-bd4e-93046de2d432"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:11.998716 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.008544 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.016406 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.023270 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6b8b9b54f6-jfnqs"] Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.026846 4789 scope.go:117] "RemoveContainer" containerID="e98adb233ea0f16d2b2f46eddae689b1bb397a9ba532ccb91e5ece02b9397f95" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.035960 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6b8b9b54f6-jfnqs"] Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.036124 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-6d964c7466-fpqld"] Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.060231 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-6d964c7466-fpqld"] Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.063239 4789 scope.go:117] "RemoveContainer" containerID="e98adb233ea0f16d2b2f46eddae689b1bb397a9ba532ccb91e5ece02b9397f95" Feb 02 21:43:12 crc kubenswrapper[4789]: E0202 21:43:12.063617 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e98adb233ea0f16d2b2f46eddae689b1bb397a9ba532ccb91e5ece02b9397f95\": container with ID starting with e98adb233ea0f16d2b2f46eddae689b1bb397a9ba532ccb91e5ece02b9397f95 not found: ID does not exist" containerID="e98adb233ea0f16d2b2f46eddae689b1bb397a9ba532ccb91e5ece02b9397f95" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.063650 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e98adb233ea0f16d2b2f46eddae689b1bb397a9ba532ccb91e5ece02b9397f95"} err="failed to get container status \"e98adb233ea0f16d2b2f46eddae689b1bb397a9ba532ccb91e5ece02b9397f95\": rpc error: code = NotFound desc = could not find container \"e98adb233ea0f16d2b2f46eddae689b1bb397a9ba532ccb91e5ece02b9397f95\": container with ID starting with e98adb233ea0f16d2b2f46eddae689b1bb397a9ba532ccb91e5ece02b9397f95 not found: ID does not exist" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.063672 4789 scope.go:117] "RemoveContainer" containerID="0a62728aedd4480cfd181d88be8ac00afa4f69cd9f3b44bd97a2e8305a5f31af" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.064369 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-sbz55" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.069676 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.073195 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.074147 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3b2df70-e493-42b7-9009-64c6bdaf4dad-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.074165 4789 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab56a6da-6187-4fa6-bd4e-93046de2d432-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.078444 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-959f7f8c5-pmqjf"] Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.082951 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-959f7f8c5-pmqjf"] Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.087286 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.091447 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.160693 4789 scope.go:117] "RemoveContainer" containerID="0fe697a1f2000589c5ab93c3c47f9c76ebfb685c854fd86b08766edfb2d1a375" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.175291 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64a9a0c6-c663-400b-8c60-43c582b7cac0-operator-scripts\") pod \"64a9a0c6-c663-400b-8c60-43c582b7cac0\" (UID: \"64a9a0c6-c663-400b-8c60-43c582b7cac0\") " Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.175391 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7qsd\" (UniqueName: \"kubernetes.io/projected/64a9a0c6-c663-400b-8c60-43c582b7cac0-kube-api-access-g7qsd\") pod \"64a9a0c6-c663-400b-8c60-43c582b7cac0\" (UID: \"64a9a0c6-c663-400b-8c60-43c582b7cac0\") " Feb 02 21:43:12 crc kubenswrapper[4789]: E0202 21:43:12.175828 4789 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Feb 02 21:43:12 crc kubenswrapper[4789]: E0202 21:43:12.175869 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b4db4b23-dae0-42a5-ad47-3336073d0b6a-config-data podName:b4db4b23-dae0-42a5-ad47-3336073d0b6a nodeName:}" failed. No retries permitted until 2026-02-02 21:43:20.175856677 +0000 UTC m=+1420.470881696 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b4db4b23-dae0-42a5-ad47-3336073d0b6a-config-data") pod "rabbitmq-server-0" (UID: "b4db4b23-dae0-42a5-ad47-3336073d0b6a") : configmap "rabbitmq-config-data" not found Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.176509 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64a9a0c6-c663-400b-8c60-43c582b7cac0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "64a9a0c6-c663-400b-8c60-43c582b7cac0" (UID: "64a9a0c6-c663-400b-8c60-43c582b7cac0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.181911 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64a9a0c6-c663-400b-8c60-43c582b7cac0-kube-api-access-g7qsd" (OuterVolumeSpecName: "kube-api-access-g7qsd") pod "64a9a0c6-c663-400b-8c60-43c582b7cac0" (UID: "64a9a0c6-c663-400b-8c60-43c582b7cac0"). InnerVolumeSpecName "kube-api-access-g7qsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.186472 4789 scope.go:117] "RemoveContainer" containerID="0a62728aedd4480cfd181d88be8ac00afa4f69cd9f3b44bd97a2e8305a5f31af" Feb 02 21:43:12 crc kubenswrapper[4789]: E0202 21:43:12.187014 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a62728aedd4480cfd181d88be8ac00afa4f69cd9f3b44bd97a2e8305a5f31af\": container with ID starting with 0a62728aedd4480cfd181d88be8ac00afa4f69cd9f3b44bd97a2e8305a5f31af not found: ID does not exist" containerID="0a62728aedd4480cfd181d88be8ac00afa4f69cd9f3b44bd97a2e8305a5f31af" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.187069 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a62728aedd4480cfd181d88be8ac00afa4f69cd9f3b44bd97a2e8305a5f31af"} err="failed to get container status \"0a62728aedd4480cfd181d88be8ac00afa4f69cd9f3b44bd97a2e8305a5f31af\": rpc error: code = NotFound desc = could not find container \"0a62728aedd4480cfd181d88be8ac00afa4f69cd9f3b44bd97a2e8305a5f31af\": container with ID starting with 0a62728aedd4480cfd181d88be8ac00afa4f69cd9f3b44bd97a2e8305a5f31af not found: ID does not exist" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.187113 4789 scope.go:117] "RemoveContainer" containerID="0fe697a1f2000589c5ab93c3c47f9c76ebfb685c854fd86b08766edfb2d1a375" Feb 02 21:43:12 crc kubenswrapper[4789]: E0202 21:43:12.187413 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fe697a1f2000589c5ab93c3c47f9c76ebfb685c854fd86b08766edfb2d1a375\": container with ID starting with 0fe697a1f2000589c5ab93c3c47f9c76ebfb685c854fd86b08766edfb2d1a375 not found: ID does not exist" containerID="0fe697a1f2000589c5ab93c3c47f9c76ebfb685c854fd86b08766edfb2d1a375" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.187435 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fe697a1f2000589c5ab93c3c47f9c76ebfb685c854fd86b08766edfb2d1a375"} err="failed to get container status \"0fe697a1f2000589c5ab93c3c47f9c76ebfb685c854fd86b08766edfb2d1a375\": rpc error: code = NotFound desc = could not find container \"0fe697a1f2000589c5ab93c3c47f9c76ebfb685c854fd86b08766edfb2d1a375\": container with ID starting with 
0fe697a1f2000589c5ab93c3c47f9c76ebfb685c854fd86b08766edfb2d1a375 not found: ID does not exist" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.187449 4789 scope.go:117] "RemoveContainer" containerID="7529e703a7ba79a3c7d9ce9adbb48f6652641d0b42790d00cab813d47b85c9b6" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.204969 4789 scope.go:117] "RemoveContainer" containerID="515297fe8dbc3fc649d583e30d1f7a1830bea72e21b40dc9d104ef3455ab5cb1" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.233543 4789 scope.go:117] "RemoveContainer" containerID="7529e703a7ba79a3c7d9ce9adbb48f6652641d0b42790d00cab813d47b85c9b6" Feb 02 21:43:12 crc kubenswrapper[4789]: E0202 21:43:12.233943 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7529e703a7ba79a3c7d9ce9adbb48f6652641d0b42790d00cab813d47b85c9b6\": container with ID starting with 7529e703a7ba79a3c7d9ce9adbb48f6652641d0b42790d00cab813d47b85c9b6 not found: ID does not exist" containerID="7529e703a7ba79a3c7d9ce9adbb48f6652641d0b42790d00cab813d47b85c9b6" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.233971 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7529e703a7ba79a3c7d9ce9adbb48f6652641d0b42790d00cab813d47b85c9b6"} err="failed to get container status \"7529e703a7ba79a3c7d9ce9adbb48f6652641d0b42790d00cab813d47b85c9b6\": rpc error: code = NotFound desc = could not find container \"7529e703a7ba79a3c7d9ce9adbb48f6652641d0b42790d00cab813d47b85c9b6\": container with ID starting with 7529e703a7ba79a3c7d9ce9adbb48f6652641d0b42790d00cab813d47b85c9b6 not found: ID does not exist" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.233990 4789 scope.go:117] "RemoveContainer" containerID="515297fe8dbc3fc649d583e30d1f7a1830bea72e21b40dc9d104ef3455ab5cb1" Feb 02 21:43:12 crc kubenswrapper[4789]: E0202 21:43:12.234191 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"515297fe8dbc3fc649d583e30d1f7a1830bea72e21b40dc9d104ef3455ab5cb1\": container with ID starting with 515297fe8dbc3fc649d583e30d1f7a1830bea72e21b40dc9d104ef3455ab5cb1 not found: ID does not exist" containerID="515297fe8dbc3fc649d583e30d1f7a1830bea72e21b40dc9d104ef3455ab5cb1" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.234211 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"515297fe8dbc3fc649d583e30d1f7a1830bea72e21b40dc9d104ef3455ab5cb1"} err="failed to get container status \"515297fe8dbc3fc649d583e30d1f7a1830bea72e21b40dc9d104ef3455ab5cb1\": rpc error: code = NotFound desc = could not find container \"515297fe8dbc3fc649d583e30d1f7a1830bea72e21b40dc9d104ef3455ab5cb1\": container with ID starting with 515297fe8dbc3fc649d583e30d1f7a1830bea72e21b40dc9d104ef3455ab5cb1 not found: ID does not exist" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.234223 4789 scope.go:117] "RemoveContainer" containerID="e343b555d9621789a633967b6cd533bf45c88272e650aba944e657e5737ee258" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.244048 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nn5kg" podUID="0bf87933-483d-4608-9fab-9f0cfa9fb326" containerName="registry-server" probeResult="failure" output=< Feb 02 21:43:12 crc kubenswrapper[4789]: timeout: failed to connect service ":50051" within 1s Feb 02 21:43:12 crc kubenswrapper[4789]: > Feb 02 21:43:12 crc 
kubenswrapper[4789]: I0202 21:43:12.254444 4789 scope.go:117] "RemoveContainer" containerID="175ef66ad8a23cf5090dc4289e18344c14f9e8edd0b77edfc81aaff9cd62283b" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.277223 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64a9a0c6-c663-400b-8c60-43c582b7cac0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.277470 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7qsd\" (UniqueName: \"kubernetes.io/projected/64a9a0c6-c663-400b-8c60-43c582b7cac0-kube-api-access-g7qsd\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.279428 4789 scope.go:117] "RemoveContainer" containerID="0a2cae00db145b6560fcc0b648c1c292b3eb7df490809622a8a50541cde04a0c" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.318133 4789 scope.go:117] "RemoveContainer" containerID="2234c362242e0356a4e9c41d9d9c119ece3aa80e6631194820c7f16fcb2df8fa" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.349706 4789 scope.go:117] "RemoveContainer" containerID="25ed4343b75caa0616ab66903bb372442dbf22b4a29f2c30b9fcf20df97021f7" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.378671 4789 scope.go:117] "RemoveContainer" containerID="c6597dc6aaeaebf47e22acb882e2ae643e5ed20e86abaacc9a1e3bf64ebb15a3" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.427140 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.429667 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="078a8abb-3926-40cd-9340-0bef088c130f" path="/var/lib/kubelet/pods/078a8abb-3926-40cd-9340-0bef088c130f/volumes" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.430165 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ba13473-b423-43a0-ab15-9d6be616cc7b" path="/var/lib/kubelet/pods/0ba13473-b423-43a0-ab15-9d6be616cc7b/volumes" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.430694 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1821366b-85cb-419f-9c57-9014300724be" path="/var/lib/kubelet/pods/1821366b-85cb-419f-9c57-9014300724be/volumes" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.431594 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ae097e7-380b-4044-8598-abc3e1059356" path="/var/lib/kubelet/pods/1ae097e7-380b-4044-8598-abc3e1059356/volumes" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.432134 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="212c4e72-7988-4770-ba07-ae0362baac7e" path="/var/lib/kubelet/pods/212c4e72-7988-4770-ba07-ae0362baac7e/volumes" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.432647 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24fb18f4-7a0f-4ae5-9104-e7dc45a479ff" path="/var/lib/kubelet/pods/24fb18f4-7a0f-4ae5-9104-e7dc45a479ff/volumes" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.433966 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="306f2aaf-92ed-4c14-92f4-a970a8240771" path="/var/lib/kubelet/pods/306f2aaf-92ed-4c14-92f4-a970a8240771/volumes" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.434330 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="349cede5-331c-4454-8c9c-fda8fe886f07" 
path="/var/lib/kubelet/pods/349cede5-331c-4454-8c9c-fda8fe886f07/volumes" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.434899 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="399d9417-2065-4e92-89c5-a04dbeaf2cca" path="/var/lib/kubelet/pods/399d9417-2065-4e92-89c5-a04dbeaf2cca/volumes" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.435390 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bb81567-8536-4275-ab0e-a003ef904230" path="/var/lib/kubelet/pods/3bb81567-8536-4275-ab0e-a003ef904230/volumes" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.437096 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1" path="/var/lib/kubelet/pods/5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1/volumes" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.437640 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7acbb536-0a08-4132-a84a-848735b0e7f4" path="/var/lib/kubelet/pods/7acbb536-0a08-4132-a84a-848735b0e7f4/volumes" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.438564 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d53e4c0-add2-4cfd-bbea-e0a1d3196091" path="/var/lib/kubelet/pods/7d53e4c0-add2-4cfd-bbea-e0a1d3196091/volumes" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.439094 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="802bda4f-2363-4ca6-a126-2ccf1448ed71" path="/var/lib/kubelet/pods/802bda4f-2363-4ca6-a126-2ccf1448ed71/volumes" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.439482 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3b2df70-e493-42b7-9009-64c6bdaf4dad" path="/var/lib/kubelet/pods/e3b2df70-e493-42b7-9009-64c6bdaf4dad/volumes" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.439872 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef1040d3-c638-4098-a2ef-ce507371853e" path="/var/lib/kubelet/pods/ef1040d3-c638-4098-a2ef-ce507371853e/volumes" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.480362 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws68l\" (UniqueName: \"kubernetes.io/projected/a77ac0de-f396-45e6-a92c-07fbddc4ec60-kube-api-access-ws68l\") pod \"a77ac0de-f396-45e6-a92c-07fbddc4ec60\" (UID: \"a77ac0de-f396-45e6-a92c-07fbddc4ec60\") " Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.480456 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a77ac0de-f396-45e6-a92c-07fbddc4ec60-config-data-generated\") pod \"a77ac0de-f396-45e6-a92c-07fbddc4ec60\" (UID: \"a77ac0de-f396-45e6-a92c-07fbddc4ec60\") " Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.480511 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"a77ac0de-f396-45e6-a92c-07fbddc4ec60\" (UID: \"a77ac0de-f396-45e6-a92c-07fbddc4ec60\") " Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.480557 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a77ac0de-f396-45e6-a92c-07fbddc4ec60-kolla-config\") pod \"a77ac0de-f396-45e6-a92c-07fbddc4ec60\" (UID: \"a77ac0de-f396-45e6-a92c-07fbddc4ec60\") " Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.480630 4789 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a77ac0de-f396-45e6-a92c-07fbddc4ec60-galera-tls-certs\") pod \"a77ac0de-f396-45e6-a92c-07fbddc4ec60\" (UID: \"a77ac0de-f396-45e6-a92c-07fbddc4ec60\") " Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.480659 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a77ac0de-f396-45e6-a92c-07fbddc4ec60-operator-scripts\") pod \"a77ac0de-f396-45e6-a92c-07fbddc4ec60\" (UID: \"a77ac0de-f396-45e6-a92c-07fbddc4ec60\") " Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.480708 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a77ac0de-f396-45e6-a92c-07fbddc4ec60-config-data-default\") pod \"a77ac0de-f396-45e6-a92c-07fbddc4ec60\" (UID: \"a77ac0de-f396-45e6-a92c-07fbddc4ec60\") " Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.480724 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a77ac0de-f396-45e6-a92c-07fbddc4ec60-combined-ca-bundle\") pod \"a77ac0de-f396-45e6-a92c-07fbddc4ec60\" (UID: \"a77ac0de-f396-45e6-a92c-07fbddc4ec60\") " Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.481480 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a77ac0de-f396-45e6-a92c-07fbddc4ec60-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "a77ac0de-f396-45e6-a92c-07fbddc4ec60" (UID: "a77ac0de-f396-45e6-a92c-07fbddc4ec60"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.482129 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a77ac0de-f396-45e6-a92c-07fbddc4ec60-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "a77ac0de-f396-45e6-a92c-07fbddc4ec60" (UID: "a77ac0de-f396-45e6-a92c-07fbddc4ec60"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.482494 4789 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a77ac0de-f396-45e6-a92c-07fbddc4ec60-kolla-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.482528 4789 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a77ac0de-f396-45e6-a92c-07fbddc4ec60-config-data-generated\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.482940 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a77ac0de-f396-45e6-a92c-07fbddc4ec60-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "a77ac0de-f396-45e6-a92c-07fbddc4ec60" (UID: "a77ac0de-f396-45e6-a92c-07fbddc4ec60"). InnerVolumeSpecName "config-data-default". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.484350 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a77ac0de-f396-45e6-a92c-07fbddc4ec60-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a77ac0de-f396-45e6-a92c-07fbddc4ec60" (UID: "a77ac0de-f396-45e6-a92c-07fbddc4ec60"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.496636 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "mysql-db") pod "a77ac0de-f396-45e6-a92c-07fbddc4ec60" (UID: "a77ac0de-f396-45e6-a92c-07fbddc4ec60"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.509333 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a77ac0de-f396-45e6-a92c-07fbddc4ec60-kube-api-access-ws68l" (OuterVolumeSpecName: "kube-api-access-ws68l") pod "a77ac0de-f396-45e6-a92c-07fbddc4ec60" (UID: "a77ac0de-f396-45e6-a92c-07fbddc4ec60"). InnerVolumeSpecName "kube-api-access-ws68l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.510071 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a77ac0de-f396-45e6-a92c-07fbddc4ec60-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a77ac0de-f396-45e6-a92c-07fbddc4ec60" (UID: "a77ac0de-f396-45e6-a92c-07fbddc4ec60"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.531303 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a77ac0de-f396-45e6-a92c-07fbddc4ec60-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "a77ac0de-f396-45e6-a92c-07fbddc4ec60" (UID: "a77ac0de-f396-45e6-a92c-07fbddc4ec60"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.590574 4789 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.590665 4789 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a77ac0de-f396-45e6-a92c-07fbddc4ec60-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.590707 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a77ac0de-f396-45e6-a92c-07fbddc4ec60-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.590721 4789 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a77ac0de-f396-45e6-a92c-07fbddc4ec60-config-data-default\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.590735 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a77ac0de-f396-45e6-a92c-07fbddc4ec60-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.590747 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ws68l\" (UniqueName: \"kubernetes.io/projected/a77ac0de-f396-45e6-a92c-07fbddc4ec60-kube-api-access-ws68l\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.608035 4789 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.691818 4789 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.713365 4789 generic.go:334] "Generic (PLEG): container finished" podID="0f86f59c-9db0-4580-a8f3-2d3fe558c905" containerID="bbeb0176ca8c9142d15e473d306a3fc80a2f4568a8a86ce41b47afe19830a87f" exitCode=0 Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.713422 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-595cf58668-hfkcq" event={"ID":"0f86f59c-9db0-4580-a8f3-2d3fe558c905","Type":"ContainerDied","Data":"bbeb0176ca8c9142d15e473d306a3fc80a2f4568a8a86ce41b47afe19830a87f"} Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.723177 4789 generic.go:334] "Generic (PLEG): container finished" podID="a77ac0de-f396-45e6-a92c-07fbddc4ec60" containerID="56c1fc152ae9c83eb013d9170e2ee84fae3556ed6cca1265e4e91d1f2bb54861" exitCode=0 Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.723312 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a77ac0de-f396-45e6-a92c-07fbddc4ec60","Type":"ContainerDied","Data":"56c1fc152ae9c83eb013d9170e2ee84fae3556ed6cca1265e4e91d1f2bb54861"} Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.723344 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"a77ac0de-f396-45e6-a92c-07fbddc4ec60","Type":"ContainerDied","Data":"1cee6c445449104e880b0bc100b90f19a2b6fa5905bdf43dce3fa8461974d7df"} Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.723364 4789 scope.go:117] "RemoveContainer" containerID="56c1fc152ae9c83eb013d9170e2ee84fae3556ed6cca1265e4e91d1f2bb54861" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.723515 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.727732 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_ab56a6da-6187-4fa6-bd4e-93046de2d432/ovn-northd/0.log" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.727853 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.727855 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ab56a6da-6187-4fa6-bd4e-93046de2d432","Type":"ContainerDied","Data":"2baa97b2e3a2b45df71dd00ff3af1026ded3306e49237f3b0ac6550c673a9545"} Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.737730 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sbz55" event={"ID":"64a9a0c6-c663-400b-8c60-43c582b7cac0","Type":"ContainerDied","Data":"a22206a7859412eb7185bb7a1f7c3bcdaf737b02c1bb9a99d4ed045ad6143df8"} Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.737808 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-sbz55" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.753482 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.789955 4789 scope.go:117] "RemoveContainer" containerID="37070194a254abfa3aad802e5fbe6112834841dccb39ba3bd770d2c932dfbb36" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.795919 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.823944 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-sbz55"] Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.851884 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-sbz55"] Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.854690 4789 scope.go:117] "RemoveContainer" containerID="56c1fc152ae9c83eb013d9170e2ee84fae3556ed6cca1265e4e91d1f2bb54861" Feb 02 21:43:12 crc kubenswrapper[4789]: E0202 21:43:12.855920 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56c1fc152ae9c83eb013d9170e2ee84fae3556ed6cca1265e4e91d1f2bb54861\": container with ID starting with 56c1fc152ae9c83eb013d9170e2ee84fae3556ed6cca1265e4e91d1f2bb54861 not found: ID does not exist" containerID="56c1fc152ae9c83eb013d9170e2ee84fae3556ed6cca1265e4e91d1f2bb54861" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.855948 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56c1fc152ae9c83eb013d9170e2ee84fae3556ed6cca1265e4e91d1f2bb54861"} err="failed to get container status \"56c1fc152ae9c83eb013d9170e2ee84fae3556ed6cca1265e4e91d1f2bb54861\": rpc error: code = NotFound desc = could not find container 
\"56c1fc152ae9c83eb013d9170e2ee84fae3556ed6cca1265e4e91d1f2bb54861\": container with ID starting with 56c1fc152ae9c83eb013d9170e2ee84fae3556ed6cca1265e4e91d1f2bb54861 not found: ID does not exist" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.855968 4789 scope.go:117] "RemoveContainer" containerID="37070194a254abfa3aad802e5fbe6112834841dccb39ba3bd770d2c932dfbb36" Feb 02 21:43:12 crc kubenswrapper[4789]: E0202 21:43:12.856150 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37070194a254abfa3aad802e5fbe6112834841dccb39ba3bd770d2c932dfbb36\": container with ID starting with 37070194a254abfa3aad802e5fbe6112834841dccb39ba3bd770d2c932dfbb36 not found: ID does not exist" containerID="37070194a254abfa3aad802e5fbe6112834841dccb39ba3bd770d2c932dfbb36" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.856170 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37070194a254abfa3aad802e5fbe6112834841dccb39ba3bd770d2c932dfbb36"} err="failed to get container status \"37070194a254abfa3aad802e5fbe6112834841dccb39ba3bd770d2c932dfbb36\": rpc error: code = NotFound desc = could not find container \"37070194a254abfa3aad802e5fbe6112834841dccb39ba3bd770d2c932dfbb36\": container with ID starting with 37070194a254abfa3aad802e5fbe6112834841dccb39ba3bd770d2c932dfbb36 not found: ID does not exist" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.856181 4789 scope.go:117] "RemoveContainer" containerID="f37965943ec7625f3192bcaac3c01b17a18ceddae04469351da0a2114b7fe47f" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.857172 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.861473 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.883468 4789 scope.go:117] "RemoveContainer" containerID="9404edbdc9c7a81d7c48cab8b8c60b1fc5de57f009d5e80c304dd34c2eae41c2" Feb 02 21:43:12 crc kubenswrapper[4789]: I0202 21:43:12.948273 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-595cf58668-hfkcq" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.021747 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-credential-keys\") pod \"0f86f59c-9db0-4580-a8f3-2d3fe558c905\" (UID: \"0f86f59c-9db0-4580-a8f3-2d3fe558c905\") " Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.021833 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzkpp\" (UniqueName: \"kubernetes.io/projected/0f86f59c-9db0-4580-a8f3-2d3fe558c905-kube-api-access-wzkpp\") pod \"0f86f59c-9db0-4580-a8f3-2d3fe558c905\" (UID: \"0f86f59c-9db0-4580-a8f3-2d3fe558c905\") " Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.021882 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-combined-ca-bundle\") pod \"0f86f59c-9db0-4580-a8f3-2d3fe558c905\" (UID: \"0f86f59c-9db0-4580-a8f3-2d3fe558c905\") " Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.021907 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-config-data\") pod \"0f86f59c-9db0-4580-a8f3-2d3fe558c905\" (UID: \"0f86f59c-9db0-4580-a8f3-2d3fe558c905\") " Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.021928 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-scripts\") pod \"0f86f59c-9db0-4580-a8f3-2d3fe558c905\" (UID: \"0f86f59c-9db0-4580-a8f3-2d3fe558c905\") " Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.021965 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-internal-tls-certs\") pod \"0f86f59c-9db0-4580-a8f3-2d3fe558c905\" (UID: \"0f86f59c-9db0-4580-a8f3-2d3fe558c905\") " Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.021995 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-fernet-keys\") pod \"0f86f59c-9db0-4580-a8f3-2d3fe558c905\" (UID: \"0f86f59c-9db0-4580-a8f3-2d3fe558c905\") " Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.022027 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-public-tls-certs\") pod \"0f86f59c-9db0-4580-a8f3-2d3fe558c905\" (UID: \"0f86f59c-9db0-4580-a8f3-2d3fe558c905\") " Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.028044 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "0f86f59c-9db0-4580-a8f3-2d3fe558c905" (UID: "0f86f59c-9db0-4580-a8f3-2d3fe558c905"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.028878 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0f86f59c-9db0-4580-a8f3-2d3fe558c905" (UID: "0f86f59c-9db0-4580-a8f3-2d3fe558c905"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.029044 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f86f59c-9db0-4580-a8f3-2d3fe558c905-kube-api-access-wzkpp" (OuterVolumeSpecName: "kube-api-access-wzkpp") pod "0f86f59c-9db0-4580-a8f3-2d3fe558c905" (UID: "0f86f59c-9db0-4580-a8f3-2d3fe558c905"). InnerVolumeSpecName "kube-api-access-wzkpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.030765 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-scripts" (OuterVolumeSpecName: "scripts") pod "0f86f59c-9db0-4580-a8f3-2d3fe558c905" (UID: "0f86f59c-9db0-4580-a8f3-2d3fe558c905"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.047878 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-config-data" (OuterVolumeSpecName: "config-data") pod "0f86f59c-9db0-4580-a8f3-2d3fe558c905" (UID: "0f86f59c-9db0-4580-a8f3-2d3fe558c905"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.048465 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f86f59c-9db0-4580-a8f3-2d3fe558c905" (UID: "0f86f59c-9db0-4580-a8f3-2d3fe558c905"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.082818 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0f86f59c-9db0-4580-a8f3-2d3fe558c905" (UID: "0f86f59c-9db0-4580-a8f3-2d3fe558c905"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.099988 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0f86f59c-9db0-4580-a8f3-2d3fe558c905" (UID: "0f86f59c-9db0-4580-a8f3-2d3fe558c905"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.123623 4789 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.123656 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzkpp\" (UniqueName: \"kubernetes.io/projected/0f86f59c-9db0-4580-a8f3-2d3fe558c905-kube-api-access-wzkpp\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.123668 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.123678 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.123688 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.123696 4789 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.123704 4789 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.123711 4789 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f86f59c-9db0-4580-a8f3-2d3fe558c905-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:13 crc kubenswrapper[4789]: E0202 21:43:13.211270 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10 is running failed: container process not found" containerID="17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 02 21:43:13 crc kubenswrapper[4789]: E0202 21:43:13.211722 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10 is running failed: container process not found" containerID="17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 02 21:43:13 crc kubenswrapper[4789]: E0202 21:43:13.212019 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10 is running failed: container process not found" containerID="17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10" 
cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 02 21:43:13 crc kubenswrapper[4789]: E0202 21:43:13.212110 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c5300b0e812632a91b74ba2ab909199288368ea0a90d2cfd2fd7fbf3551f6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 02 21:43:13 crc kubenswrapper[4789]: E0202 21:43:13.212045 4789 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-tjn59" podUID="0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1" containerName="ovsdb-server" Feb 02 21:43:13 crc kubenswrapper[4789]: E0202 21:43:13.217503 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c5300b0e812632a91b74ba2ab909199288368ea0a90d2cfd2fd7fbf3551f6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 02 21:43:13 crc kubenswrapper[4789]: E0202 21:43:13.219433 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c5300b0e812632a91b74ba2ab909199288368ea0a90d2cfd2fd7fbf3551f6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 02 21:43:13 crc kubenswrapper[4789]: E0202 21:43:13.219497 4789 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-tjn59" podUID="0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1" containerName="ovs-vswitchd" Feb 02 21:43:13 crc kubenswrapper[4789]: E0202 21:43:13.327466 4789 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Feb 02 21:43:13 crc kubenswrapper[4789]: E0202 21:43:13.327540 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b8917d54-451e-4a56-9e8a-142bb5db17e1-config-data podName:b8917d54-451e-4a56-9e8a-142bb5db17e1 nodeName:}" failed. No retries permitted until 2026-02-02 21:43:21.32752305 +0000 UTC m=+1421.622548069 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b8917d54-451e-4a56-9e8a-142bb5db17e1-config-data") pod "rabbitmq-cell1-server-0" (UID: "b8917d54-451e-4a56-9e8a-142bb5db17e1") : configmap "rabbitmq-cell1-config-data" not found Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.452035 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.530771 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b4db4b23-dae0-42a5-ad47-3336073d0b6a-rabbitmq-plugins\") pod \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.530885 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b4db4b23-dae0-42a5-ad47-3336073d0b6a-rabbitmq-confd\") pod \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.530986 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b4db4b23-dae0-42a5-ad47-3336073d0b6a-server-conf\") pod \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.531059 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b4db4b23-dae0-42a5-ad47-3336073d0b6a-erlang-cookie-secret\") pod \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.531636 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcfw4\" (UniqueName: \"kubernetes.io/projected/b4db4b23-dae0-42a5-ad47-3336073d0b6a-kube-api-access-qcfw4\") pod \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.531505 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4db4b23-dae0-42a5-ad47-3336073d0b6a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "b4db4b23-dae0-42a5-ad47-3336073d0b6a" (UID: "b4db4b23-dae0-42a5-ad47-3336073d0b6a"). InnerVolumeSpecName "rabbitmq-plugins". 
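Just above, MountVolume.SetUp for rabbitmq-cell1-server-0 failed because its configmap is already gone, and nestedpendingoperations pushed the retry 8s out ("No retries permitted until ... durationBeforeRetry 8s"). A minimal sketch of that per-operation exponential backoff; the initial delay and cap here are illustrative guesses, not kubelet's actual constants.

```go
package main

import (
	"fmt"
	"time"
)

// backoff doubles the wait after each consecutive failure, up to a cap,
// and refuses retries before the recorded deadline.
type backoff struct {
	delay, max time.Duration
	notBefore  time.Time
}

func (b *backoff) fail(now time.Time) {
	if b.delay == 0 {
		b.delay = 500 * time.Millisecond // assumed starting delay
	} else if b.delay < b.max {
		b.delay *= 2
	}
	b.notBefore = now.Add(b.delay)
}

func (b *backoff) allowed(now time.Time) bool { return !now.Before(b.notBefore) }

func main() {
	b := &backoff{max: 2 * time.Minute}
	now := time.Now()
	for i := 0; i < 5; i++ {
		b.fail(now)
		fmt.Printf("attempt %d failed; no retries permitted for %v\n", i+1, b.delay)
	}
	fmt.Println("retry allowed now?", b.allowed(now))
}
```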
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.531677 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.531712 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b4db4b23-dae0-42a5-ad47-3336073d0b6a-config-data\") pod \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.531749 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b4db4b23-dae0-42a5-ad47-3336073d0b6a-rabbitmq-tls\") pod \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.531780 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b4db4b23-dae0-42a5-ad47-3336073d0b6a-rabbitmq-erlang-cookie\") pod \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.531809 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b4db4b23-dae0-42a5-ad47-3336073d0b6a-pod-info\") pod \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.531865 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b4db4b23-dae0-42a5-ad47-3336073d0b6a-plugins-conf\") pod \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\" (UID: \"b4db4b23-dae0-42a5-ad47-3336073d0b6a\") " Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.533001 4789 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b4db4b23-dae0-42a5-ad47-3336073d0b6a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.533712 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4db4b23-dae0-42a5-ad47-3336073d0b6a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "b4db4b23-dae0-42a5-ad47-3336073d0b6a" (UID: "b4db4b23-dae0-42a5-ad47-3336073d0b6a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.533783 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4db4b23-dae0-42a5-ad47-3336073d0b6a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "b4db4b23-dae0-42a5-ad47-3336073d0b6a" (UID: "b4db4b23-dae0-42a5-ad47-3336073d0b6a"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.535751 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4db4b23-dae0-42a5-ad47-3336073d0b6a-kube-api-access-qcfw4" (OuterVolumeSpecName: "kube-api-access-qcfw4") pod "b4db4b23-dae0-42a5-ad47-3336073d0b6a" (UID: "b4db4b23-dae0-42a5-ad47-3336073d0b6a"). InnerVolumeSpecName "kube-api-access-qcfw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.537750 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4db4b23-dae0-42a5-ad47-3336073d0b6a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "b4db4b23-dae0-42a5-ad47-3336073d0b6a" (UID: "b4db4b23-dae0-42a5-ad47-3336073d0b6a"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.539303 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/b4db4b23-dae0-42a5-ad47-3336073d0b6a-pod-info" (OuterVolumeSpecName: "pod-info") pod "b4db4b23-dae0-42a5-ad47-3336073d0b6a" (UID: "b4db4b23-dae0-42a5-ad47-3336073d0b6a"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.540130 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "b4db4b23-dae0-42a5-ad47-3336073d0b6a" (UID: "b4db4b23-dae0-42a5-ad47-3336073d0b6a"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.540257 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4db4b23-dae0-42a5-ad47-3336073d0b6a-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "b4db4b23-dae0-42a5-ad47-3336073d0b6a" (UID: "b4db4b23-dae0-42a5-ad47-3336073d0b6a"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.573964 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4db4b23-dae0-42a5-ad47-3336073d0b6a-config-data" (OuterVolumeSpecName: "config-data") pod "b4db4b23-dae0-42a5-ad47-3336073d0b6a" (UID: "b4db4b23-dae0-42a5-ad47-3336073d0b6a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.605114 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4db4b23-dae0-42a5-ad47-3336073d0b6a-server-conf" (OuterVolumeSpecName: "server-conf") pod "b4db4b23-dae0-42a5-ad47-3336073d0b6a" (UID: "b4db4b23-dae0-42a5-ad47-3336073d0b6a"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.624217 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4db4b23-dae0-42a5-ad47-3336073d0b6a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "b4db4b23-dae0-42a5-ad47-3336073d0b6a" (UID: "b4db4b23-dae0-42a5-ad47-3336073d0b6a"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.634202 4789 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b4db4b23-dae0-42a5-ad47-3336073d0b6a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.634243 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcfw4\" (UniqueName: \"kubernetes.io/projected/b4db4b23-dae0-42a5-ad47-3336073d0b6a-kube-api-access-qcfw4\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.634283 4789 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.634293 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b4db4b23-dae0-42a5-ad47-3336073d0b6a-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.634301 4789 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b4db4b23-dae0-42a5-ad47-3336073d0b6a-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.634309 4789 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b4db4b23-dae0-42a5-ad47-3336073d0b6a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.634317 4789 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b4db4b23-dae0-42a5-ad47-3336073d0b6a-pod-info\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.635516 4789 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b4db4b23-dae0-42a5-ad47-3336073d0b6a-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.635527 4789 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b4db4b23-dae0-42a5-ad47-3336073d0b6a-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.635535 4789 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b4db4b23-dae0-42a5-ad47-3336073d0b6a-server-conf\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.651966 4789 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.738810 4789 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:13 crc kubenswrapper[4789]: E0202 21:43:13.751246 4789 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Feb 02 21:43:13 crc kubenswrapper[4789]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-02-02T21:43:06Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Feb 02 
21:43:13 crc kubenswrapper[4789]: /etc/init.d/functions: line 589: 386 Alarm clock "$@" Feb 02 21:43:13 crc kubenswrapper[4789]: > execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-gjls4" message=< Feb 02 21:43:13 crc kubenswrapper[4789]: Exiting ovn-controller (1) [FAILED] Feb 02 21:43:13 crc kubenswrapper[4789]: Killing ovn-controller (1) [ OK ] Feb 02 21:43:13 crc kubenswrapper[4789]: 2026-02-02T21:43:06Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Feb 02 21:43:13 crc kubenswrapper[4789]: /etc/init.d/functions: line 589: 386 Alarm clock "$@" Feb 02 21:43:13 crc kubenswrapper[4789]: > Feb 02 21:43:13 crc kubenswrapper[4789]: E0202 21:43:13.751283 4789 kuberuntime_container.go:691] "PreStop hook failed" err=< Feb 02 21:43:13 crc kubenswrapper[4789]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-02-02T21:43:06Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Feb 02 21:43:13 crc kubenswrapper[4789]: /etc/init.d/functions: line 589: 386 Alarm clock "$@" Feb 02 21:43:13 crc kubenswrapper[4789]: > pod="openstack/ovn-controller-gjls4" podUID="c571c3a8-8470-4076-adde-89416f071937" containerName="ovn-controller" containerID="cri-o://d17231a26fe8d830c193a7bd2b6abb5889e5164f3e5069db7e214257ec7141a8" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.751322 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-gjls4" podUID="c571c3a8-8470-4076-adde-89416f071937" containerName="ovn-controller" containerID="cri-o://d17231a26fe8d830c193a7bd2b6abb5889e5164f3e5069db7e214257ec7141a8" gracePeriod=22 Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.769745 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-gjls4" podUID="c571c3a8-8470-4076-adde-89416f071937" containerName="ovn-controller" probeResult="failure" output="" Feb 02 21:43:13 crc kubenswrapper[4789]: E0202 21:43:13.774765 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d17231a26fe8d830c193a7bd2b6abb5889e5164f3e5069db7e214257ec7141a8 is running failed: container process not found" containerID="d17231a26fe8d830c193a7bd2b6abb5889e5164f3e5069db7e214257ec7141a8" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Feb 02 21:43:13 crc kubenswrapper[4789]: E0202 21:43:13.780052 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d17231a26fe8d830c193a7bd2b6abb5889e5164f3e5069db7e214257ec7141a8 is running failed: container process not found" containerID="d17231a26fe8d830c193a7bd2b6abb5889e5164f3e5069db7e214257ec7141a8" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Feb 02 21:43:13 crc kubenswrapper[4789]: E0202 21:43:13.785085 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d17231a26fe8d830c193a7bd2b6abb5889e5164f3e5069db7e214257ec7141a8 is running failed: container process not found" containerID="d17231a26fe8d830c193a7bd2b6abb5889e5164f3e5069db7e214257ec7141a8" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Feb 02 21:43:13 crc kubenswrapper[4789]: E0202 21:43:13.785112 4789 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = 
container is not created or running: checking if PID of d17231a26fe8d830c193a7bd2b6abb5889e5164f3e5069db7e214257ec7141a8 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-gjls4" podUID="c571c3a8-8470-4076-adde-89416f071937" containerName="ovn-controller" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.785274 4789 generic.go:334] "Generic (PLEG): container finished" podID="b8917d54-451e-4a56-9e8a-142bb5db17e1" containerID="c1c71c5e760475551c02af4a87ac69c6090f0b50c9bec80607975d728d2b02e2" exitCode=0 Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.785344 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b8917d54-451e-4a56-9e8a-142bb5db17e1","Type":"ContainerDied","Data":"c1c71c5e760475551c02af4a87ac69c6090f0b50c9bec80607975d728d2b02e2"} Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.797323 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-595cf58668-hfkcq" event={"ID":"0f86f59c-9db0-4580-a8f3-2d3fe558c905","Type":"ContainerDied","Data":"7ab85332c67cad5b827eae1955322c4dc6506ff8b855cc9cd54b62c92d431d33"} Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.797361 4789 scope.go:117] "RemoveContainer" containerID="bbeb0176ca8c9142d15e473d306a3fc80a2f4568a8a86ce41b47afe19830a87f" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.797443 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-595cf58668-hfkcq" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.836061 4789 generic.go:334] "Generic (PLEG): container finished" podID="b4db4b23-dae0-42a5-ad47-3336073d0b6a" containerID="669108a572e6de86b6fe38547a253f5eabaaaa84647d8dcb02f45a63322c1bd9" exitCode=0 Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.836104 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b4db4b23-dae0-42a5-ad47-3336073d0b6a","Type":"ContainerDied","Data":"669108a572e6de86b6fe38547a253f5eabaaaa84647d8dcb02f45a63322c1bd9"} Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.836143 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b4db4b23-dae0-42a5-ad47-3336073d0b6a","Type":"ContainerDied","Data":"a287094b9dc75aa61117d069b721c13f01fa508d00808b0032962fcb227bddf5"} Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.836169 4789 scope.go:117] "RemoveContainer" containerID="669108a572e6de86b6fe38547a253f5eabaaaa84647d8dcb02f45a63322c1bd9" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.836313 4789 util.go:48] "No ready sandbox for pod can be found. 
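The ovn-controller teardown above decodes via the usual 128+signal convention: the PreStop hook's exit 137 is SIGKILL (9), the "Alarm clock" lines are SIGALRM (14), and the exitCode=139 reported further down is SIGSEGV (11). A short Unix-only demonstration of reading the signal behind an *exec.ExitError with the stdlib:

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
)

func main() {
	// The shell sends itself SIGALRM, whose default action terminates it.
	cmd := exec.Command("sh", "-c", "kill -ALRM $$")
	err := cmd.Run()
	if ee, ok := err.(*exec.ExitError); ok {
		// On Unix, Sys() is a syscall.WaitStatus exposing the signal.
		if ws, ok := ee.Sys().(syscall.WaitStatus); ok && ws.Signaled() {
			sig := ws.Signal()
			fmt.Printf("terminated by signal %d (%s); shell-style exit code %d\n",
				sig, sig, 128+int(sig))
		}
	}
}
```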
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.854930 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-595cf58668-hfkcq"] Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.861675 4789 scope.go:117] "RemoveContainer" containerID="b73f21ef1c3cee1aa5a9891e737707de6085ae08c57a737e8bf2cb9c0bd4154c" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.861792 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-595cf58668-hfkcq"] Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.976236 4789 scope.go:117] "RemoveContainer" containerID="669108a572e6de86b6fe38547a253f5eabaaaa84647d8dcb02f45a63322c1bd9" Feb 02 21:43:13 crc kubenswrapper[4789]: E0202 21:43:13.976885 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"669108a572e6de86b6fe38547a253f5eabaaaa84647d8dcb02f45a63322c1bd9\": container with ID starting with 669108a572e6de86b6fe38547a253f5eabaaaa84647d8dcb02f45a63322c1bd9 not found: ID does not exist" containerID="669108a572e6de86b6fe38547a253f5eabaaaa84647d8dcb02f45a63322c1bd9" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.976915 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"669108a572e6de86b6fe38547a253f5eabaaaa84647d8dcb02f45a63322c1bd9"} err="failed to get container status \"669108a572e6de86b6fe38547a253f5eabaaaa84647d8dcb02f45a63322c1bd9\": rpc error: code = NotFound desc = could not find container \"669108a572e6de86b6fe38547a253f5eabaaaa84647d8dcb02f45a63322c1bd9\": container with ID starting with 669108a572e6de86b6fe38547a253f5eabaaaa84647d8dcb02f45a63322c1bd9 not found: ID does not exist" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.976934 4789 scope.go:117] "RemoveContainer" containerID="b73f21ef1c3cee1aa5a9891e737707de6085ae08c57a737e8bf2cb9c0bd4154c" Feb 02 21:43:13 crc kubenswrapper[4789]: E0202 21:43:13.977226 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b73f21ef1c3cee1aa5a9891e737707de6085ae08c57a737e8bf2cb9c0bd4154c\": container with ID starting with b73f21ef1c3cee1aa5a9891e737707de6085ae08c57a737e8bf2cb9c0bd4154c not found: ID does not exist" containerID="b73f21ef1c3cee1aa5a9891e737707de6085ae08c57a737e8bf2cb9c0bd4154c" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.977245 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b73f21ef1c3cee1aa5a9891e737707de6085ae08c57a737e8bf2cb9c0bd4154c"} err="failed to get container status \"b73f21ef1c3cee1aa5a9891e737707de6085ae08c57a737e8bf2cb9c0bd4154c\": rpc error: code = NotFound desc = could not find container \"b73f21ef1c3cee1aa5a9891e737707de6085ae08c57a737e8bf2cb9c0bd4154c\": container with ID starting with b73f21ef1c3cee1aa5a9891e737707de6085ae08c57a737e8bf2cb9c0bd4154c not found: ID does not exist" Feb 02 21:43:13 crc kubenswrapper[4789]: E0202 21:43:13.978695 4789 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f86f59c_9db0_4580_a8f3_2d3fe558c905.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f86f59c_9db0_4580_a8f3_2d3fe558c905.slice/crio-7ab85332c67cad5b827eae1955322c4dc6506ff8b855cc9cd54b62c92d431d33\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4db4b23_dae0_42a5_ad47_3336073d0b6a.slice/crio-a287094b9dc75aa61117d069b721c13f01fa508d00808b0032962fcb227bddf5\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4db4b23_dae0_42a5_ad47_3336073d0b6a.slice\": RecentStats: unable to find data in memory cache]" Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.991997 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 21:43:13 crc kubenswrapper[4789]: I0202 21:43:13.998693 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.002319 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.042987 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b8917d54-451e-4a56-9e8a-142bb5db17e1-erlang-cookie-secret\") pod \"b8917d54-451e-4a56-9e8a-142bb5db17e1\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.043036 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wzbn\" (UniqueName: \"kubernetes.io/projected/b8917d54-451e-4a56-9e8a-142bb5db17e1-kube-api-access-7wzbn\") pod \"b8917d54-451e-4a56-9e8a-142bb5db17e1\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.043061 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b8917d54-451e-4a56-9e8a-142bb5db17e1-rabbitmq-plugins\") pod \"b8917d54-451e-4a56-9e8a-142bb5db17e1\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.043101 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"b8917d54-451e-4a56-9e8a-142bb5db17e1\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.043138 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b8917d54-451e-4a56-9e8a-142bb5db17e1-rabbitmq-tls\") pod \"b8917d54-451e-4a56-9e8a-142bb5db17e1\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.043184 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b8917d54-451e-4a56-9e8a-142bb5db17e1-rabbitmq-erlang-cookie\") pod \"b8917d54-451e-4a56-9e8a-142bb5db17e1\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.043231 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b8917d54-451e-4a56-9e8a-142bb5db17e1-pod-info\") pod \"b8917d54-451e-4a56-9e8a-142bb5db17e1\" (UID: 
\"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.043295 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b8917d54-451e-4a56-9e8a-142bb5db17e1-rabbitmq-confd\") pod \"b8917d54-451e-4a56-9e8a-142bb5db17e1\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.043324 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b8917d54-451e-4a56-9e8a-142bb5db17e1-plugins-conf\") pod \"b8917d54-451e-4a56-9e8a-142bb5db17e1\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.043343 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b8917d54-451e-4a56-9e8a-142bb5db17e1-server-conf\") pod \"b8917d54-451e-4a56-9e8a-142bb5db17e1\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.043388 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b8917d54-451e-4a56-9e8a-142bb5db17e1-config-data\") pod \"b8917d54-451e-4a56-9e8a-142bb5db17e1\" (UID: \"b8917d54-451e-4a56-9e8a-142bb5db17e1\") " Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.044601 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8917d54-451e-4a56-9e8a-142bb5db17e1-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "b8917d54-451e-4a56-9e8a-142bb5db17e1" (UID: "b8917d54-451e-4a56-9e8a-142bb5db17e1"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.045244 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8917d54-451e-4a56-9e8a-142bb5db17e1-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "b8917d54-451e-4a56-9e8a-142bb5db17e1" (UID: "b8917d54-451e-4a56-9e8a-142bb5db17e1"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.045463 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8917d54-451e-4a56-9e8a-142bb5db17e1-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "b8917d54-451e-4a56-9e8a-142bb5db17e1" (UID: "b8917d54-451e-4a56-9e8a-142bb5db17e1"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.047797 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/b8917d54-451e-4a56-9e8a-142bb5db17e1-pod-info" (OuterVolumeSpecName: "pod-info") pod "b8917d54-451e-4a56-9e8a-142bb5db17e1" (UID: "b8917d54-451e-4a56-9e8a-142bb5db17e1"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.047817 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "b8917d54-451e-4a56-9e8a-142bb5db17e1" (UID: "b8917d54-451e-4a56-9e8a-142bb5db17e1"). 
InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.048510 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8917d54-451e-4a56-9e8a-142bb5db17e1-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "b8917d54-451e-4a56-9e8a-142bb5db17e1" (UID: "b8917d54-451e-4a56-9e8a-142bb5db17e1"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.048614 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8917d54-451e-4a56-9e8a-142bb5db17e1-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "b8917d54-451e-4a56-9e8a-142bb5db17e1" (UID: "b8917d54-451e-4a56-9e8a-142bb5db17e1"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.049147 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8917d54-451e-4a56-9e8a-142bb5db17e1-kube-api-access-7wzbn" (OuterVolumeSpecName: "kube-api-access-7wzbn") pod "b8917d54-451e-4a56-9e8a-142bb5db17e1" (UID: "b8917d54-451e-4a56-9e8a-142bb5db17e1"). InnerVolumeSpecName "kube-api-access-7wzbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.067385 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8917d54-451e-4a56-9e8a-142bb5db17e1-config-data" (OuterVolumeSpecName: "config-data") pod "b8917d54-451e-4a56-9e8a-142bb5db17e1" (UID: "b8917d54-451e-4a56-9e8a-142bb5db17e1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.090447 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8917d54-451e-4a56-9e8a-142bb5db17e1-server-conf" (OuterVolumeSpecName: "server-conf") pod "b8917d54-451e-4a56-9e8a-142bb5db17e1" (UID: "b8917d54-451e-4a56-9e8a-142bb5db17e1"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:43:14 crc kubenswrapper[4789]: E0202 21:43:14.106259 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b302cbd832ca9db939c2b0bf4835c6ec6fb237f5c200d53981557f9c42498b12" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 21:43:14 crc kubenswrapper[4789]: E0202 21:43:14.107634 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b302cbd832ca9db939c2b0bf4835c6ec6fb237f5c200d53981557f9c42498b12" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 21:43:14 crc kubenswrapper[4789]: E0202 21:43:14.108617 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b302cbd832ca9db939c2b0bf4835c6ec6fb237f5c200d53981557f9c42498b12" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 21:43:14 crc kubenswrapper[4789]: E0202 21:43:14.108657 4789 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="a0ceeffe-1326-4d2d-ab85-dbc02869bee1" containerName="nova-scheduler-scheduler" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.120788 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8917d54-451e-4a56-9e8a-142bb5db17e1-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "b8917d54-451e-4a56-9e8a-142bb5db17e1" (UID: "b8917d54-451e-4a56-9e8a-142bb5db17e1"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.145488 4789 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b8917d54-451e-4a56-9e8a-142bb5db17e1-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.145729 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wzbn\" (UniqueName: \"kubernetes.io/projected/b8917d54-451e-4a56-9e8a-142bb5db17e1-kube-api-access-7wzbn\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.145790 4789 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b8917d54-451e-4a56-9e8a-142bb5db17e1-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.145871 4789 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.145930 4789 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b8917d54-451e-4a56-9e8a-142bb5db17e1-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.145985 4789 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b8917d54-451e-4a56-9e8a-142bb5db17e1-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.146039 4789 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b8917d54-451e-4a56-9e8a-142bb5db17e1-pod-info\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.146092 4789 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b8917d54-451e-4a56-9e8a-142bb5db17e1-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.146141 4789 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b8917d54-451e-4a56-9e8a-142bb5db17e1-server-conf\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.146193 4789 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b8917d54-451e-4a56-9e8a-142bb5db17e1-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.146247 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b8917d54-451e-4a56-9e8a-142bb5db17e1-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.162075 4789 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.248244 4789 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.430725 4789 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f86f59c-9db0-4580-a8f3-2d3fe558c905" path="/var/lib/kubelet/pods/0f86f59c-9db0-4580-a8f3-2d3fe558c905/volumes" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.432171 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64a9a0c6-c663-400b-8c60-43c582b7cac0" path="/var/lib/kubelet/pods/64a9a0c6-c663-400b-8c60-43c582b7cac0/volumes" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.432780 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a77ac0de-f396-45e6-a92c-07fbddc4ec60" path="/var/lib/kubelet/pods/a77ac0de-f396-45e6-a92c-07fbddc4ec60/volumes" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.433971 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab56a6da-6187-4fa6-bd4e-93046de2d432" path="/var/lib/kubelet/pods/ab56a6da-6187-4fa6-bd4e-93046de2d432/volumes" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.434715 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4db4b23-dae0-42a5-ad47-3336073d0b6a" path="/var/lib/kubelet/pods/b4db4b23-dae0-42a5-ad47-3336073d0b6a/volumes" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.550211 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-gjls4_c571c3a8-8470-4076-adde-89416f071937/ovn-controller/0.log" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.550265 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gjls4" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.654239 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c571c3a8-8470-4076-adde-89416f071937-scripts\") pod \"c571c3a8-8470-4076-adde-89416f071937\" (UID: \"c571c3a8-8470-4076-adde-89416f071937\") " Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.654317 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c571c3a8-8470-4076-adde-89416f071937-ovn-controller-tls-certs\") pod \"c571c3a8-8470-4076-adde-89416f071937\" (UID: \"c571c3a8-8470-4076-adde-89416f071937\") " Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.654357 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gjjv\" (UniqueName: \"kubernetes.io/projected/c571c3a8-8470-4076-adde-89416f071937-kube-api-access-7gjjv\") pod \"c571c3a8-8470-4076-adde-89416f071937\" (UID: \"c571c3a8-8470-4076-adde-89416f071937\") " Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.654385 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c571c3a8-8470-4076-adde-89416f071937-var-run\") pod \"c571c3a8-8470-4076-adde-89416f071937\" (UID: \"c571c3a8-8470-4076-adde-89416f071937\") " Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.654422 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c571c3a8-8470-4076-adde-89416f071937-var-log-ovn\") pod \"c571c3a8-8470-4076-adde-89416f071937\" (UID: \"c571c3a8-8470-4076-adde-89416f071937\") " Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.654524 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/c571c3a8-8470-4076-adde-89416f071937-var-run-ovn\") pod \"c571c3a8-8470-4076-adde-89416f071937\" (UID: \"c571c3a8-8470-4076-adde-89416f071937\") " Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.654603 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c571c3a8-8470-4076-adde-89416f071937-combined-ca-bundle\") pod \"c571c3a8-8470-4076-adde-89416f071937\" (UID: \"c571c3a8-8470-4076-adde-89416f071937\") " Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.654713 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c571c3a8-8470-4076-adde-89416f071937-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "c571c3a8-8470-4076-adde-89416f071937" (UID: "c571c3a8-8470-4076-adde-89416f071937"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.654721 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c571c3a8-8470-4076-adde-89416f071937-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "c571c3a8-8470-4076-adde-89416f071937" (UID: "c571c3a8-8470-4076-adde-89416f071937"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.654800 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c571c3a8-8470-4076-adde-89416f071937-var-run" (OuterVolumeSpecName: "var-run") pod "c571c3a8-8470-4076-adde-89416f071937" (UID: "c571c3a8-8470-4076-adde-89416f071937"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.655102 4789 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c571c3a8-8470-4076-adde-89416f071937-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.655116 4789 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c571c3a8-8470-4076-adde-89416f071937-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.655127 4789 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c571c3a8-8470-4076-adde-89416f071937-var-run\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.655741 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c571c3a8-8470-4076-adde-89416f071937-scripts" (OuterVolumeSpecName: "scripts") pod "c571c3a8-8470-4076-adde-89416f071937" (UID: "c571c3a8-8470-4076-adde-89416f071937"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.693923 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c571c3a8-8470-4076-adde-89416f071937-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c571c3a8-8470-4076-adde-89416f071937" (UID: "c571c3a8-8470-4076-adde-89416f071937"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.700687 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c571c3a8-8470-4076-adde-89416f071937-kube-api-access-7gjjv" (OuterVolumeSpecName: "kube-api-access-7gjjv") pod "c571c3a8-8470-4076-adde-89416f071937" (UID: "c571c3a8-8470-4076-adde-89416f071937"). InnerVolumeSpecName "kube-api-access-7gjjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.722639 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c571c3a8-8470-4076-adde-89416f071937-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "c571c3a8-8470-4076-adde-89416f071937" (UID: "c571c3a8-8470-4076-adde-89416f071937"). InnerVolumeSpecName "ovn-controller-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.756849 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c571c3a8-8470-4076-adde-89416f071937-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.756885 4789 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c571c3a8-8470-4076-adde-89416f071937-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.756895 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gjjv\" (UniqueName: \"kubernetes.io/projected/c571c3a8-8470-4076-adde-89416f071937-kube-api-access-7gjjv\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.756906 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c571c3a8-8470-4076-adde-89416f071937-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.800133 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.849697 4789 generic.go:334] "Generic (PLEG): container finished" podID="aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b" containerID="dfd72fb016b8250b7d52cd2384cba2cc136043ef8ea07229e3afb0b578d3fbf4" exitCode=0 Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.849775 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b","Type":"ContainerDied","Data":"dfd72fb016b8250b7d52cd2384cba2cc136043ef8ea07229e3afb0b578d3fbf4"} Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.849813 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b","Type":"ContainerDied","Data":"dc9c9a8e812b02299bac4776912a78d9626ea7964e754bafbb13f798edd82c59"} Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.849834 4789 scope.go:117] "RemoveContainer" containerID="dfd72fb016b8250b7d52cd2384cba2cc136043ef8ea07229e3afb0b578d3fbf4" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.849867 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.852472 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b8917d54-451e-4a56-9e8a-142bb5db17e1","Type":"ContainerDied","Data":"95c4b796e6336984d0dc820412ea4efe958320e94fbcf078a04bb239e144172f"} Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.852533 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.854175 4789 generic.go:334] "Generic (PLEG): container finished" podID="a0ceeffe-1326-4d2d-ab85-dbc02869bee1" containerID="b302cbd832ca9db939c2b0bf4835c6ec6fb237f5c200d53981557f9c42498b12" exitCode=0 Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.854225 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a0ceeffe-1326-4d2d-ab85-dbc02869bee1","Type":"ContainerDied","Data":"b302cbd832ca9db939c2b0bf4835c6ec6fb237f5c200d53981557f9c42498b12"} Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.861185 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-gjls4_c571c3a8-8470-4076-adde-89416f071937/ovn-controller/0.log" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.861225 4789 generic.go:334] "Generic (PLEG): container finished" podID="c571c3a8-8470-4076-adde-89416f071937" containerID="d17231a26fe8d830c193a7bd2b6abb5889e5164f3e5069db7e214257ec7141a8" exitCode=139 Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.861283 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gjls4" event={"ID":"c571c3a8-8470-4076-adde-89416f071937","Type":"ContainerDied","Data":"d17231a26fe8d830c193a7bd2b6abb5889e5164f3e5069db7e214257ec7141a8"} Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.861303 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gjls4" event={"ID":"c571c3a8-8470-4076-adde-89416f071937","Type":"ContainerDied","Data":"8f7daf988c11c33d2d922ade395ba0c5f550739635868994a6a4c7810c35fbcd"} Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.861345 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-gjls4" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.867298 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mf6bm\" (UniqueName: \"kubernetes.io/projected/aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b-kube-api-access-mf6bm\") pod \"aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b\" (UID: \"aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b\") " Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.867347 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b-combined-ca-bundle\") pod \"aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b\" (UID: \"aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b\") " Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.867393 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b-config-data\") pod \"aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b\" (UID: \"aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b\") " Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.872847 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b-kube-api-access-mf6bm" (OuterVolumeSpecName: "kube-api-access-mf6bm") pod "aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b" (UID: "aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b"). InnerVolumeSpecName "kube-api-access-mf6bm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.916374 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b" (UID: "aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.917434 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b-config-data" (OuterVolumeSpecName: "config-data") pod "aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b" (UID: "aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.918282 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.927040 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.928254 4789 scope.go:117] "RemoveContainer" containerID="dfd72fb016b8250b7d52cd2384cba2cc136043ef8ea07229e3afb0b578d3fbf4" Feb 02 21:43:14 crc kubenswrapper[4789]: E0202 21:43:14.930858 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfd72fb016b8250b7d52cd2384cba2cc136043ef8ea07229e3afb0b578d3fbf4\": container with ID starting with dfd72fb016b8250b7d52cd2384cba2cc136043ef8ea07229e3afb0b578d3fbf4 not found: ID does not exist" containerID="dfd72fb016b8250b7d52cd2384cba2cc136043ef8ea07229e3afb0b578d3fbf4" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.930896 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfd72fb016b8250b7d52cd2384cba2cc136043ef8ea07229e3afb0b578d3fbf4"} err="failed to get container status \"dfd72fb016b8250b7d52cd2384cba2cc136043ef8ea07229e3afb0b578d3fbf4\": rpc error: code = NotFound desc = could not find container \"dfd72fb016b8250b7d52cd2384cba2cc136043ef8ea07229e3afb0b578d3fbf4\": container with ID starting with dfd72fb016b8250b7d52cd2384cba2cc136043ef8ea07229e3afb0b578d3fbf4 not found: ID does not exist" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.930921 4789 scope.go:117] "RemoveContainer" containerID="c1c71c5e760475551c02af4a87ac69c6090f0b50c9bec80607975d728d2b02e2" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.932852 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-gjls4"] Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.938345 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-gjls4"] Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.941461 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.956121 4789 scope.go:117] "RemoveContainer" containerID="1e002b2adadc7aa45b24e8a9b6b844784752243592f8b12913aa7c87780c5192" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.969537 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mf6bm\" (UniqueName: \"kubernetes.io/projected/aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b-kube-api-access-mf6bm\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.969573 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.969599 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.977559 4789 scope.go:117] "RemoveContainer" containerID="d17231a26fe8d830c193a7bd2b6abb5889e5164f3e5069db7e214257ec7141a8" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.995555 4789 scope.go:117] "RemoveContainer" containerID="d17231a26fe8d830c193a7bd2b6abb5889e5164f3e5069db7e214257ec7141a8" Feb 02 21:43:14 crc kubenswrapper[4789]: E0202 21:43:14.995895 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d17231a26fe8d830c193a7bd2b6abb5889e5164f3e5069db7e214257ec7141a8\": container with ID starting with d17231a26fe8d830c193a7bd2b6abb5889e5164f3e5069db7e214257ec7141a8 not found: ID does not exist" containerID="d17231a26fe8d830c193a7bd2b6abb5889e5164f3e5069db7e214257ec7141a8" Feb 02 21:43:14 crc kubenswrapper[4789]: I0202 21:43:14.995920 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d17231a26fe8d830c193a7bd2b6abb5889e5164f3e5069db7e214257ec7141a8"} err="failed to get container status \"d17231a26fe8d830c193a7bd2b6abb5889e5164f3e5069db7e214257ec7141a8\": rpc error: code = NotFound desc = could not find container \"d17231a26fe8d830c193a7bd2b6abb5889e5164f3e5069db7e214257ec7141a8\": container with ID starting with d17231a26fe8d830c193a7bd2b6abb5889e5164f3e5069db7e214257ec7141a8 not found: ID does not exist" Feb 02 21:43:15 crc kubenswrapper[4789]: I0202 21:43:15.071045 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg4h4\" (UniqueName: \"kubernetes.io/projected/a0ceeffe-1326-4d2d-ab85-dbc02869bee1-kube-api-access-xg4h4\") pod \"a0ceeffe-1326-4d2d-ab85-dbc02869bee1\" (UID: \"a0ceeffe-1326-4d2d-ab85-dbc02869bee1\") " Feb 02 21:43:15 crc kubenswrapper[4789]: I0202 21:43:15.071139 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0ceeffe-1326-4d2d-ab85-dbc02869bee1-config-data\") pod \"a0ceeffe-1326-4d2d-ab85-dbc02869bee1\" (UID: \"a0ceeffe-1326-4d2d-ab85-dbc02869bee1\") " Feb 02 21:43:15 crc kubenswrapper[4789]: I0202 21:43:15.071237 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0ceeffe-1326-4d2d-ab85-dbc02869bee1-combined-ca-bundle\") pod \"a0ceeffe-1326-4d2d-ab85-dbc02869bee1\" (UID: \"a0ceeffe-1326-4d2d-ab85-dbc02869bee1\") " Feb 02 21:43:15 
crc kubenswrapper[4789]: I0202 21:43:15.076015 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0ceeffe-1326-4d2d-ab85-dbc02869bee1-kube-api-access-xg4h4" (OuterVolumeSpecName: "kube-api-access-xg4h4") pod "a0ceeffe-1326-4d2d-ab85-dbc02869bee1" (UID: "a0ceeffe-1326-4d2d-ab85-dbc02869bee1"). InnerVolumeSpecName "kube-api-access-xg4h4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:15 crc kubenswrapper[4789]: I0202 21:43:15.092135 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0ceeffe-1326-4d2d-ab85-dbc02869bee1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0ceeffe-1326-4d2d-ab85-dbc02869bee1" (UID: "a0ceeffe-1326-4d2d-ab85-dbc02869bee1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:15 crc kubenswrapper[4789]: I0202 21:43:15.096299 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0ceeffe-1326-4d2d-ab85-dbc02869bee1-config-data" (OuterVolumeSpecName: "config-data") pod "a0ceeffe-1326-4d2d-ab85-dbc02869bee1" (UID: "a0ceeffe-1326-4d2d-ab85-dbc02869bee1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:15 crc kubenswrapper[4789]: I0202 21:43:15.172493 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0ceeffe-1326-4d2d-ab85-dbc02869bee1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:15 crc kubenswrapper[4789]: I0202 21:43:15.172526 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg4h4\" (UniqueName: \"kubernetes.io/projected/a0ceeffe-1326-4d2d-ab85-dbc02869bee1-kube-api-access-xg4h4\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:15 crc kubenswrapper[4789]: I0202 21:43:15.172542 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0ceeffe-1326-4d2d-ab85-dbc02869bee1-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:15 crc kubenswrapper[4789]: I0202 21:43:15.193924 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 21:43:15 crc kubenswrapper[4789]: I0202 21:43:15.200835 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 21:43:15 crc kubenswrapper[4789]: I0202 21:43:15.426003 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="7acbb536-0a08-4132-a84a-848735b0e7f4" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.164:8776/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 02 21:43:15 crc kubenswrapper[4789]: I0202 21:43:15.745564 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-7c4994f5f-462kb" podUID="78b23a1f-cc85-4767-b19c-6069adfc735a" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.166:9696/\": dial tcp 10.217.0.166:9696: connect: connection refused" Feb 02 21:43:15 crc kubenswrapper[4789]: I0202 21:43:15.926447 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 21:43:15 crc kubenswrapper[4789]: I0202 21:43:15.926441 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a0ceeffe-1326-4d2d-ab85-dbc02869bee1","Type":"ContainerDied","Data":"af26a42fef9ae7a6ff5db400dfba0b2c06b5ebead6cd8fdc420ec28df0245212"} Feb 02 21:43:15 crc kubenswrapper[4789]: I0202 21:43:15.926638 4789 scope.go:117] "RemoveContainer" containerID="b302cbd832ca9db939c2b0bf4835c6ec6fb237f5c200d53981557f9c42498b12" Feb 02 21:43:15 crc kubenswrapper[4789]: I0202 21:43:15.934785 4789 generic.go:334] "Generic (PLEG): container finished" podID="b579f7f4-db1f-4d76-82fb-ef4cad438842" containerID="4442ad2bcd72e1f7d739ef50d0304ab053ba1e52fd3d4c19d121698c07aa9558" exitCode=0 Feb 02 21:43:15 crc kubenswrapper[4789]: I0202 21:43:15.934833 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b579f7f4-db1f-4d76-82fb-ef4cad438842","Type":"ContainerDied","Data":"4442ad2bcd72e1f7d739ef50d0304ab053ba1e52fd3d4c19d121698c07aa9558"} Feb 02 21:43:15 crc kubenswrapper[4789]: I0202 21:43:15.934859 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b579f7f4-db1f-4d76-82fb-ef4cad438842","Type":"ContainerDied","Data":"a859b36ae2f6d42dfb7cd8f54d60e3cb5732145a3e0c8c167eaeb102b2c4871f"} Feb 02 21:43:15 crc kubenswrapper[4789]: I0202 21:43:15.934873 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a859b36ae2f6d42dfb7cd8f54d60e3cb5732145a3e0c8c167eaeb102b2c4871f" Feb 02 21:43:15 crc kubenswrapper[4789]: I0202 21:43:15.955060 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 21:43:15 crc kubenswrapper[4789]: I0202 21:43:15.981149 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 21:43:15 crc kubenswrapper[4789]: I0202 21:43:15.988170 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 21:43:16 crc kubenswrapper[4789]: I0202 21:43:16.084229 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b579f7f4-db1f-4d76-82fb-ef4cad438842-config-data\") pod \"b579f7f4-db1f-4d76-82fb-ef4cad438842\" (UID: \"b579f7f4-db1f-4d76-82fb-ef4cad438842\") " Feb 02 21:43:16 crc kubenswrapper[4789]: I0202 21:43:16.084295 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b579f7f4-db1f-4d76-82fb-ef4cad438842-scripts\") pod \"b579f7f4-db1f-4d76-82fb-ef4cad438842\" (UID: \"b579f7f4-db1f-4d76-82fb-ef4cad438842\") " Feb 02 21:43:16 crc kubenswrapper[4789]: I0202 21:43:16.084322 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b579f7f4-db1f-4d76-82fb-ef4cad438842-combined-ca-bundle\") pod \"b579f7f4-db1f-4d76-82fb-ef4cad438842\" (UID: \"b579f7f4-db1f-4d76-82fb-ef4cad438842\") " Feb 02 21:43:16 crc kubenswrapper[4789]: I0202 21:43:16.084343 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b579f7f4-db1f-4d76-82fb-ef4cad438842-sg-core-conf-yaml\") pod \"b579f7f4-db1f-4d76-82fb-ef4cad438842\" (UID: \"b579f7f4-db1f-4d76-82fb-ef4cad438842\") " Feb 02 21:43:16 crc kubenswrapper[4789]: I0202 21:43:16.084362 4789 
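The two "Probe failed" records above show the two common transport-level failure modes of an HTTP readiness probe: a client-side timeout (cinder-api) and a connection refused (neutron-httpd, whose endpoint had already gone away). A minimal Go sketch of a probe with the same shape; the URL is the cinder-api endpoint from the log, the 200-399 success range and the skipped TLS verification are assumptions about typical probe semantics, not facts taken from this log:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeOnce performs one bounded HTTP check: any transport error or a
// status outside 200-399 counts as a probe failure.
func probeOnce(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: timeout,
		Transport: &http.Transport{
			// Assumed for pod-internal TLS endpoints with self-signed certs.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		// e.g. "connect: connection refused" or "Client.Timeout exceeded"
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probeOnce("https://10.217.0.164:8776/healthcheck", time.Second); err != nil {
		fmt.Println("probe failed:", err)
	}
}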
Feb 02 21:43:16 crc kubenswrapper[4789]: I0202 21:43:16.084389 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b579f7f4-db1f-4d76-82fb-ef4cad438842-run-httpd\") pod \"b579f7f4-db1f-4d76-82fb-ef4cad438842\" (UID: \"b579f7f4-db1f-4d76-82fb-ef4cad438842\") "
Feb 02 21:43:16 crc kubenswrapper[4789]: I0202 21:43:16.084444 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b579f7f4-db1f-4d76-82fb-ef4cad438842-log-httpd\") pod \"b579f7f4-db1f-4d76-82fb-ef4cad438842\" (UID: \"b579f7f4-db1f-4d76-82fb-ef4cad438842\") "
Feb 02 21:43:16 crc kubenswrapper[4789]: I0202 21:43:16.084528 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b579f7f4-db1f-4d76-82fb-ef4cad438842-ceilometer-tls-certs\") pod \"b579f7f4-db1f-4d76-82fb-ef4cad438842\" (UID: \"b579f7f4-db1f-4d76-82fb-ef4cad438842\") "
Feb 02 21:43:16 crc kubenswrapper[4789]: I0202 21:43:16.085941 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b579f7f4-db1f-4d76-82fb-ef4cad438842-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b579f7f4-db1f-4d76-82fb-ef4cad438842" (UID: "b579f7f4-db1f-4d76-82fb-ef4cad438842"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 21:43:16 crc kubenswrapper[4789]: I0202 21:43:16.085952 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b579f7f4-db1f-4d76-82fb-ef4cad438842-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b579f7f4-db1f-4d76-82fb-ef4cad438842" (UID: "b579f7f4-db1f-4d76-82fb-ef4cad438842"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 21:43:16 crc kubenswrapper[4789]: I0202 21:43:16.089550 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b579f7f4-db1f-4d76-82fb-ef4cad438842-scripts" (OuterVolumeSpecName: "scripts") pod "b579f7f4-db1f-4d76-82fb-ef4cad438842" (UID: "b579f7f4-db1f-4d76-82fb-ef4cad438842"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 21:43:16 crc kubenswrapper[4789]: I0202 21:43:16.094271 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b579f7f4-db1f-4d76-82fb-ef4cad438842-kube-api-access-745wq" (OuterVolumeSpecName: "kube-api-access-745wq") pod "b579f7f4-db1f-4d76-82fb-ef4cad438842" (UID: "b579f7f4-db1f-4d76-82fb-ef4cad438842"). InnerVolumeSpecName "kube-api-access-745wq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 21:43:16 crc kubenswrapper[4789]: I0202 21:43:16.111983 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b579f7f4-db1f-4d76-82fb-ef4cad438842-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b579f7f4-db1f-4d76-82fb-ef4cad438842" (UID: "b579f7f4-db1f-4d76-82fb-ef4cad438842"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 21:43:16 crc kubenswrapper[4789]: I0202 21:43:16.156739 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b579f7f4-db1f-4d76-82fb-ef4cad438842-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "b579f7f4-db1f-4d76-82fb-ef4cad438842" (UID: "b579f7f4-db1f-4d76-82fb-ef4cad438842"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 21:43:16 crc kubenswrapper[4789]: I0202 21:43:16.164592 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b579f7f4-db1f-4d76-82fb-ef4cad438842-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b579f7f4-db1f-4d76-82fb-ef4cad438842" (UID: "b579f7f4-db1f-4d76-82fb-ef4cad438842"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 21:43:16 crc kubenswrapper[4789]: I0202 21:43:16.186498 4789 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b579f7f4-db1f-4d76-82fb-ef4cad438842-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 21:43:16 crc kubenswrapper[4789]: I0202 21:43:16.186548 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b579f7f4-db1f-4d76-82fb-ef4cad438842-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 21:43:16 crc kubenswrapper[4789]: I0202 21:43:16.186566 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b579f7f4-db1f-4d76-82fb-ef4cad438842-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 21:43:16 crc kubenswrapper[4789]: I0202 21:43:16.186608 4789 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b579f7f4-db1f-4d76-82fb-ef4cad438842-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 02 21:43:16 crc kubenswrapper[4789]: I0202 21:43:16.186628 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-745wq\" (UniqueName: \"kubernetes.io/projected/b579f7f4-db1f-4d76-82fb-ef4cad438842-kube-api-access-745wq\") on node \"crc\" DevicePath \"\""
Feb 02 21:43:16 crc kubenswrapper[4789]: I0202 21:43:16.186646 4789 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b579f7f4-db1f-4d76-82fb-ef4cad438842-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 02 21:43:16 crc kubenswrapper[4789]: I0202 21:43:16.186661 4789 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b579f7f4-db1f-4d76-82fb-ef4cad438842-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 02 21:43:16 crc kubenswrapper[4789]: I0202 21:43:16.190793 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b579f7f4-db1f-4d76-82fb-ef4cad438842-config-data" (OuterVolumeSpecName: "config-data") pod "b579f7f4-db1f-4d76-82fb-ef4cad438842" (UID: "b579f7f4-db1f-4d76-82fb-ef4cad438842"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 21:43:16 crc kubenswrapper[4789]: I0202 21:43:16.289173 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b579f7f4-db1f-4d76-82fb-ef4cad438842-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 21:43:16 crc kubenswrapper[4789]: I0202 21:43:16.453248 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0ceeffe-1326-4d2d-ab85-dbc02869bee1" path="/var/lib/kubelet/pods/a0ceeffe-1326-4d2d-ab85-dbc02869bee1/volumes"
Feb 02 21:43:16 crc kubenswrapper[4789]: I0202 21:43:16.454304 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b" path="/var/lib/kubelet/pods/aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b/volumes"
Feb 02 21:43:16 crc kubenswrapper[4789]: I0202 21:43:16.455871 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8917d54-451e-4a56-9e8a-142bb5db17e1" path="/var/lib/kubelet/pods/b8917d54-451e-4a56-9e8a-142bb5db17e1/volumes"
Feb 02 21:43:16 crc kubenswrapper[4789]: I0202 21:43:16.457922 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c571c3a8-8470-4076-adde-89416f071937" path="/var/lib/kubelet/pods/c571c3a8-8470-4076-adde-89416f071937/volumes"
Feb 02 21:43:16 crc kubenswrapper[4789]: I0202 21:43:16.948281 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 21:43:16 crc kubenswrapper[4789]: I0202 21:43:16.979375 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 21:43:16 crc kubenswrapper[4789]: I0202 21:43:16.986551 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 21:43:17 crc kubenswrapper[4789]: I0202 21:43:17.849275 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7c4994f5f-462kb"
Feb 02 21:43:17 crc kubenswrapper[4789]: I0202 21:43:17.926224 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/78b23a1f-cc85-4767-b19c-6069adfc735a-httpd-config\") pod \"78b23a1f-cc85-4767-b19c-6069adfc735a\" (UID: \"78b23a1f-cc85-4767-b19c-6069adfc735a\") "
Feb 02 21:43:17 crc kubenswrapper[4789]: I0202 21:43:17.927700 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2bgm\" (UniqueName: \"kubernetes.io/projected/78b23a1f-cc85-4767-b19c-6069adfc735a-kube-api-access-s2bgm\") pod \"78b23a1f-cc85-4767-b19c-6069adfc735a\" (UID: \"78b23a1f-cc85-4767-b19c-6069adfc735a\") "
Feb 02 21:43:17 crc kubenswrapper[4789]: I0202 21:43:17.927743 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78b23a1f-cc85-4767-b19c-6069adfc735a-public-tls-certs\") pod \"78b23a1f-cc85-4767-b19c-6069adfc735a\" (UID: \"78b23a1f-cc85-4767-b19c-6069adfc735a\") "
Feb 02 21:43:17 crc kubenswrapper[4789]: I0202 21:43:17.927767 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/78b23a1f-cc85-4767-b19c-6069adfc735a-config\") pod \"78b23a1f-cc85-4767-b19c-6069adfc735a\" (UID: \"78b23a1f-cc85-4767-b19c-6069adfc735a\") "
Feb 02 21:43:17 crc kubenswrapper[4789]: I0202 21:43:17.927790 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/78b23a1f-cc85-4767-b19c-6069adfc735a-ovndb-tls-certs\") pod \"78b23a1f-cc85-4767-b19c-6069adfc735a\" (UID: \"78b23a1f-cc85-4767-b19c-6069adfc735a\") "
Feb 02 21:43:17 crc kubenswrapper[4789]: I0202 21:43:17.927901 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78b23a1f-cc85-4767-b19c-6069adfc735a-combined-ca-bundle\") pod \"78b23a1f-cc85-4767-b19c-6069adfc735a\" (UID: \"78b23a1f-cc85-4767-b19c-6069adfc735a\") "
Feb 02 21:43:17 crc kubenswrapper[4789]: I0202 21:43:17.927980 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78b23a1f-cc85-4767-b19c-6069adfc735a-internal-tls-certs\") pod \"78b23a1f-cc85-4767-b19c-6069adfc735a\" (UID: \"78b23a1f-cc85-4767-b19c-6069adfc735a\") "
Feb 02 21:43:17 crc kubenswrapper[4789]: I0202 21:43:17.942827 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78b23a1f-cc85-4767-b19c-6069adfc735a-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "78b23a1f-cc85-4767-b19c-6069adfc735a" (UID: "78b23a1f-cc85-4767-b19c-6069adfc735a"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 21:43:17 crc kubenswrapper[4789]: I0202 21:43:17.943149 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78b23a1f-cc85-4767-b19c-6069adfc735a-kube-api-access-s2bgm" (OuterVolumeSpecName: "kube-api-access-s2bgm") pod "78b23a1f-cc85-4767-b19c-6069adfc735a" (UID: "78b23a1f-cc85-4767-b19c-6069adfc735a"). InnerVolumeSpecName "kube-api-access-s2bgm". PluginName "kubernetes.io/projected", VolumeGidValue ""
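The four "Cleaned up orphaned pod volumes dir" records above are the tail end of pod teardown: once a pod UID no longer maps to an active pod and all of its volumes are unmounted, the kubelet removes /var/lib/kubelet/pods/<uid>/volumes. A minimal Go sketch of the directory scan behind that check; listOrphanCandidates and the active set are hypothetical stand-ins for illustration, not kubelet APIs:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// listOrphanCandidates returns the volumes/ subtree of every pod
// directory whose UID is not in the active set. In the kubelet the
// active set would come from the pod manager; here it is a stand-in.
func listOrphanCandidates(podsDir string, active map[string]bool) ([]string, error) {
	entries, err := os.ReadDir(podsDir)
	if err != nil {
		return nil, err
	}
	var orphans []string
	for _, e := range entries {
		if e.IsDir() && !active[e.Name()] {
			orphans = append(orphans, filepath.Join(podsDir, e.Name(), "volumes"))
		}
	}
	return orphans, nil
}

func main() {
	orphans, err := listOrphanCandidates("/var/lib/kubelet/pods", map[string]bool{})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	for _, dir := range orphans {
		fmt.Println("orphan candidate:", dir)
	}
}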
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:17 crc kubenswrapper[4789]: I0202 21:43:17.964686 4789 generic.go:334] "Generic (PLEG): container finished" podID="78b23a1f-cc85-4767-b19c-6069adfc735a" containerID="553d373b31d254acbe2370697ade07f36e41177b6244fed11902fec65d96f129" exitCode=0 Feb 02 21:43:17 crc kubenswrapper[4789]: I0202 21:43:17.964772 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c4994f5f-462kb" event={"ID":"78b23a1f-cc85-4767-b19c-6069adfc735a","Type":"ContainerDied","Data":"553d373b31d254acbe2370697ade07f36e41177b6244fed11902fec65d96f129"} Feb 02 21:43:17 crc kubenswrapper[4789]: I0202 21:43:17.964830 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c4994f5f-462kb" event={"ID":"78b23a1f-cc85-4767-b19c-6069adfc735a","Type":"ContainerDied","Data":"737db95ddfbcb8d98a2987ef6db8aae192a0aad144cbba6515c03c59773b5e1c"} Feb 02 21:43:17 crc kubenswrapper[4789]: I0202 21:43:17.964869 4789 scope.go:117] "RemoveContainer" containerID="299b4734565096b1be6400a79e47dcc680e20c6351889626bc796a381f662a16" Feb 02 21:43:17 crc kubenswrapper[4789]: I0202 21:43:17.965111 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7c4994f5f-462kb" Feb 02 21:43:17 crc kubenswrapper[4789]: I0202 21:43:17.990233 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78b23a1f-cc85-4767-b19c-6069adfc735a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "78b23a1f-cc85-4767-b19c-6069adfc735a" (UID: "78b23a1f-cc85-4767-b19c-6069adfc735a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:17 crc kubenswrapper[4789]: I0202 21:43:17.994633 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78b23a1f-cc85-4767-b19c-6069adfc735a-config" (OuterVolumeSpecName: "config") pod "78b23a1f-cc85-4767-b19c-6069adfc735a" (UID: "78b23a1f-cc85-4767-b19c-6069adfc735a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:17 crc kubenswrapper[4789]: I0202 21:43:17.997567 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78b23a1f-cc85-4767-b19c-6069adfc735a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "78b23a1f-cc85-4767-b19c-6069adfc735a" (UID: "78b23a1f-cc85-4767-b19c-6069adfc735a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:17 crc kubenswrapper[4789]: I0202 21:43:17.999113 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78b23a1f-cc85-4767-b19c-6069adfc735a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78b23a1f-cc85-4767-b19c-6069adfc735a" (UID: "78b23a1f-cc85-4767-b19c-6069adfc735a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:18 crc kubenswrapper[4789]: I0202 21:43:18.029540 4789 scope.go:117] "RemoveContainer" containerID="553d373b31d254acbe2370697ade07f36e41177b6244fed11902fec65d96f129" Feb 02 21:43:18 crc kubenswrapper[4789]: I0202 21:43:18.029678 4789 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78b23a1f-cc85-4767-b19c-6069adfc735a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:18 crc kubenswrapper[4789]: I0202 21:43:18.029681 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78b23a1f-cc85-4767-b19c-6069adfc735a-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "78b23a1f-cc85-4767-b19c-6069adfc735a" (UID: "78b23a1f-cc85-4767-b19c-6069adfc735a"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:18 crc kubenswrapper[4789]: I0202 21:43:18.029711 4789 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/78b23a1f-cc85-4767-b19c-6069adfc735a-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:18 crc kubenswrapper[4789]: I0202 21:43:18.029770 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2bgm\" (UniqueName: \"kubernetes.io/projected/78b23a1f-cc85-4767-b19c-6069adfc735a-kube-api-access-s2bgm\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:18 crc kubenswrapper[4789]: I0202 21:43:18.029797 4789 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78b23a1f-cc85-4767-b19c-6069adfc735a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:18 crc kubenswrapper[4789]: I0202 21:43:18.029816 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/78b23a1f-cc85-4767-b19c-6069adfc735a-config\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:18 crc kubenswrapper[4789]: I0202 21:43:18.029834 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78b23a1f-cc85-4767-b19c-6069adfc735a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:18 crc kubenswrapper[4789]: I0202 21:43:18.053274 4789 scope.go:117] "RemoveContainer" containerID="299b4734565096b1be6400a79e47dcc680e20c6351889626bc796a381f662a16" Feb 02 21:43:18 crc kubenswrapper[4789]: E0202 21:43:18.053808 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"299b4734565096b1be6400a79e47dcc680e20c6351889626bc796a381f662a16\": container with ID starting with 299b4734565096b1be6400a79e47dcc680e20c6351889626bc796a381f662a16 not found: ID does not exist" containerID="299b4734565096b1be6400a79e47dcc680e20c6351889626bc796a381f662a16" Feb 02 21:43:18 crc kubenswrapper[4789]: I0202 21:43:18.053908 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"299b4734565096b1be6400a79e47dcc680e20c6351889626bc796a381f662a16"} err="failed to get container status \"299b4734565096b1be6400a79e47dcc680e20c6351889626bc796a381f662a16\": rpc error: code = NotFound desc = could not find container \"299b4734565096b1be6400a79e47dcc680e20c6351889626bc796a381f662a16\": container with ID starting with 299b4734565096b1be6400a79e47dcc680e20c6351889626bc796a381f662a16 not found: ID does not exist" Feb 02 21:43:18 crc 
kubenswrapper[4789]: I0202 21:43:18.053943 4789 scope.go:117] "RemoveContainer" containerID="553d373b31d254acbe2370697ade07f36e41177b6244fed11902fec65d96f129" Feb 02 21:43:18 crc kubenswrapper[4789]: E0202 21:43:18.054258 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"553d373b31d254acbe2370697ade07f36e41177b6244fed11902fec65d96f129\": container with ID starting with 553d373b31d254acbe2370697ade07f36e41177b6244fed11902fec65d96f129 not found: ID does not exist" containerID="553d373b31d254acbe2370697ade07f36e41177b6244fed11902fec65d96f129" Feb 02 21:43:18 crc kubenswrapper[4789]: I0202 21:43:18.054365 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"553d373b31d254acbe2370697ade07f36e41177b6244fed11902fec65d96f129"} err="failed to get container status \"553d373b31d254acbe2370697ade07f36e41177b6244fed11902fec65d96f129\": rpc error: code = NotFound desc = could not find container \"553d373b31d254acbe2370697ade07f36e41177b6244fed11902fec65d96f129\": container with ID starting with 553d373b31d254acbe2370697ade07f36e41177b6244fed11902fec65d96f129 not found: ID does not exist" Feb 02 21:43:18 crc kubenswrapper[4789]: I0202 21:43:18.130891 4789 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/78b23a1f-cc85-4767-b19c-6069adfc735a-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:18 crc kubenswrapper[4789]: E0202 21:43:18.210118 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10 is running failed: container process not found" containerID="17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 02 21:43:18 crc kubenswrapper[4789]: E0202 21:43:18.210523 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10 is running failed: container process not found" containerID="17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 02 21:43:18 crc kubenswrapper[4789]: E0202 21:43:18.211012 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10 is running failed: container process not found" containerID="17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 02 21:43:18 crc kubenswrapper[4789]: E0202 21:43:18.211077 4789 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-tjn59" podUID="0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1" containerName="ovsdb-server" Feb 02 21:43:18 crc kubenswrapper[4789]: E0202 21:43:18.212418 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an 
exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c5300b0e812632a91b74ba2ab909199288368ea0a90d2cfd2fd7fbf3551f6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 02 21:43:18 crc kubenswrapper[4789]: E0202 21:43:18.214570 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c5300b0e812632a91b74ba2ab909199288368ea0a90d2cfd2fd7fbf3551f6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 02 21:43:18 crc kubenswrapper[4789]: E0202 21:43:18.216519 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c5300b0e812632a91b74ba2ab909199288368ea0a90d2cfd2fd7fbf3551f6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 02 21:43:18 crc kubenswrapper[4789]: E0202 21:43:18.216598 4789 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-tjn59" podUID="0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1" containerName="ovs-vswitchd" Feb 02 21:43:18 crc kubenswrapper[4789]: I0202 21:43:18.322241 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7c4994f5f-462kb"] Feb 02 21:43:18 crc kubenswrapper[4789]: I0202 21:43:18.327910 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7c4994f5f-462kb"] Feb 02 21:43:18 crc kubenswrapper[4789]: I0202 21:43:18.429768 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78b23a1f-cc85-4767-b19c-6069adfc735a" path="/var/lib/kubelet/pods/78b23a1f-cc85-4767-b19c-6069adfc735a/volumes" Feb 02 21:43:18 crc kubenswrapper[4789]: I0202 21:43:18.430924 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b579f7f4-db1f-4d76-82fb-ef4cad438842" path="/var/lib/kubelet/pods/b579f7f4-db1f-4d76-82fb-ef4cad438842/volumes" Feb 02 21:43:21 crc kubenswrapper[4789]: I0202 21:43:21.216814 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nn5kg" Feb 02 21:43:21 crc kubenswrapper[4789]: I0202 21:43:21.305817 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nn5kg" Feb 02 21:43:21 crc kubenswrapper[4789]: I0202 21:43:21.464915 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nn5kg"] Feb 02 21:43:22 crc kubenswrapper[4789]: I0202 21:43:22.842188 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 21:43:22 crc kubenswrapper[4789]: I0202 21:43:22.842260 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 21:43:23 crc kubenswrapper[4789]: I0202 
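The exec readiness probes for ovsdb-server and ovs-vswitchd above fail with a consistent rhythm: three "ExecSync cmd from runtime service failed" records, then a single "Probe errored" record, repeating each probe period while the containers shut down. A minimal Go sketch of that retry-then-report shape; the execer type is a hypothetical stand-in for the runtime's ExecSync call, and the retry count of three is inferred from the log itself:

package main

import (
	"context"
	"fmt"
	"time"
)

// execer stands in for the runtime's ExecSync call (hypothetical signature).
type execer func(ctx context.Context, containerID string, cmd []string) error

// runExecProbe retries the exec a fixed number of times and only then
// reports a probe error, matching the 3-failures-then-1-error pattern
// visible in the records above.
func runExecProbe(ctx context.Context, ex execer, id string, cmd []string) error {
	var err error
	for attempt := 0; attempt < 3; attempt++ {
		if err = ex(ctx, id, cmd); err == nil {
			return nil
		}
	}
	return fmt.Errorf("probe errored: %w", err)
}

func main() {
	failing := func(ctx context.Context, id string, cmd []string) error {
		return fmt.Errorf("container is not created or running")
	}
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	fmt.Println(runExecProbe(ctx, failing, "17dd53bbaea1", []string{"/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"}))
}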
Feb 02 21:43:23 crc kubenswrapper[4789]: E0202 21:43:23.209783 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10 is running failed: container process not found" containerID="17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Feb 02 21:43:23 crc kubenswrapper[4789]: E0202 21:43:23.211788 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10 is running failed: container process not found" containerID="17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Feb 02 21:43:23 crc kubenswrapper[4789]: E0202 21:43:23.212571 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10 is running failed: container process not found" containerID="17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Feb 02 21:43:23 crc kubenswrapper[4789]: E0202 21:43:23.212684 4789 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-tjn59" podUID="0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1" containerName="ovsdb-server"
Feb 02 21:43:23 crc kubenswrapper[4789]: E0202 21:43:23.215130 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c5300b0e812632a91b74ba2ab909199288368ea0a90d2cfd2fd7fbf3551f6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Feb 02 21:43:23 crc kubenswrapper[4789]: E0202 21:43:23.220674 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c5300b0e812632a91b74ba2ab909199288368ea0a90d2cfd2fd7fbf3551f6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Feb 02 21:43:23 crc kubenswrapper[4789]: E0202 21:43:23.223847 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c5300b0e812632a91b74ba2ab909199288368ea0a90d2cfd2fd7fbf3551f6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Feb 02 21:43:23 crc kubenswrapper[4789]: E0202 21:43:23.223914 4789 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-tjn59" podUID="0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1" containerName="ovs-vswitchd"
Feb 02 21:43:23 crc kubenswrapper[4789]: I0202 21:43:23.536442 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nn5kg"
Feb 02 21:43:23 crc kubenswrapper[4789]: I0202 21:43:23.627285 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bf87933-483d-4608-9fab-9f0cfa9fb326-utilities\") pod \"0bf87933-483d-4608-9fab-9f0cfa9fb326\" (UID: \"0bf87933-483d-4608-9fab-9f0cfa9fb326\") "
Feb 02 21:43:23 crc kubenswrapper[4789]: I0202 21:43:23.627377 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9p9xb\" (UniqueName: \"kubernetes.io/projected/0bf87933-483d-4608-9fab-9f0cfa9fb326-kube-api-access-9p9xb\") pod \"0bf87933-483d-4608-9fab-9f0cfa9fb326\" (UID: \"0bf87933-483d-4608-9fab-9f0cfa9fb326\") "
Feb 02 21:43:23 crc kubenswrapper[4789]: I0202 21:43:23.627429 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bf87933-483d-4608-9fab-9f0cfa9fb326-catalog-content\") pod \"0bf87933-483d-4608-9fab-9f0cfa9fb326\" (UID: \"0bf87933-483d-4608-9fab-9f0cfa9fb326\") "
Feb 02 21:43:23 crc kubenswrapper[4789]: I0202 21:43:23.628346 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bf87933-483d-4608-9fab-9f0cfa9fb326-utilities" (OuterVolumeSpecName: "utilities") pod "0bf87933-483d-4608-9fab-9f0cfa9fb326" (UID: "0bf87933-483d-4608-9fab-9f0cfa9fb326"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 21:43:23 crc kubenswrapper[4789]: I0202 21:43:23.633466 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bf87933-483d-4608-9fab-9f0cfa9fb326-kube-api-access-9p9xb" (OuterVolumeSpecName: "kube-api-access-9p9xb") pod "0bf87933-483d-4608-9fab-9f0cfa9fb326" (UID: "0bf87933-483d-4608-9fab-9f0cfa9fb326"). InnerVolumeSpecName "kube-api-access-9p9xb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 21:43:23 crc kubenswrapper[4789]: I0202 21:43:23.729915 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bf87933-483d-4608-9fab-9f0cfa9fb326-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 21:43:23 crc kubenswrapper[4789]: I0202 21:43:23.729958 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9p9xb\" (UniqueName: \"kubernetes.io/projected/0bf87933-483d-4608-9fab-9f0cfa9fb326-kube-api-access-9p9xb\") on node \"crc\" DevicePath \"\""
Feb 02 21:43:23 crc kubenswrapper[4789]: I0202 21:43:23.799172 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bf87933-483d-4608-9fab-9f0cfa9fb326-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0bf87933-483d-4608-9fab-9f0cfa9fb326" (UID: "0bf87933-483d-4608-9fab-9f0cfa9fb326"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 21:43:23 crc kubenswrapper[4789]: I0202 21:43:23.831699 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bf87933-483d-4608-9fab-9f0cfa9fb326-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 21:43:24 crc kubenswrapper[4789]: I0202 21:43:24.046777 4789 generic.go:334] "Generic (PLEG): container finished" podID="0bf87933-483d-4608-9fab-9f0cfa9fb326" containerID="663731cad30dfe107241ae97eb0f5c1d3dd773984c0b7800fda4945d05ae1793" exitCode=0
Feb 02 21:43:24 crc kubenswrapper[4789]: I0202 21:43:24.046852 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nn5kg" event={"ID":"0bf87933-483d-4608-9fab-9f0cfa9fb326","Type":"ContainerDied","Data":"663731cad30dfe107241ae97eb0f5c1d3dd773984c0b7800fda4945d05ae1793"}
Feb 02 21:43:24 crc kubenswrapper[4789]: I0202 21:43:24.046871 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nn5kg"
Feb 02 21:43:24 crc kubenswrapper[4789]: I0202 21:43:24.046911 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nn5kg" event={"ID":"0bf87933-483d-4608-9fab-9f0cfa9fb326","Type":"ContainerDied","Data":"e7c3a1595ce13bbd3297b04c5762f5b858b010a5368c28ff7e0b48b29e55dbbb"}
Feb 02 21:43:24 crc kubenswrapper[4789]: I0202 21:43:24.046962 4789 scope.go:117] "RemoveContainer" containerID="663731cad30dfe107241ae97eb0f5c1d3dd773984c0b7800fda4945d05ae1793"
Feb 02 21:43:24 crc kubenswrapper[4789]: I0202 21:43:24.075739 4789 scope.go:117] "RemoveContainer" containerID="2be82df2b0e9a95d9535c329c4aac2b5684a8a513a868a9aef3785a4d0ed3c98"
Feb 02 21:43:24 crc kubenswrapper[4789]: I0202 21:43:24.112798 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nn5kg"]
Feb 02 21:43:24 crc kubenswrapper[4789]: I0202 21:43:24.120336 4789 scope.go:117] "RemoveContainer" containerID="0ecc6d0a811029b0d3a588b5bb03e52c54a14bb62d8653b1cae84cfe94675a80"
Feb 02 21:43:24 crc kubenswrapper[4789]: I0202 21:43:24.124335 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nn5kg"]
Feb 02 21:43:24 crc kubenswrapper[4789]: I0202 21:43:24.169541 4789 scope.go:117] "RemoveContainer" containerID="663731cad30dfe107241ae97eb0f5c1d3dd773984c0b7800fda4945d05ae1793"
Feb 02 21:43:24 crc kubenswrapper[4789]: E0202 21:43:24.170479 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"663731cad30dfe107241ae97eb0f5c1d3dd773984c0b7800fda4945d05ae1793\": container with ID starting with 663731cad30dfe107241ae97eb0f5c1d3dd773984c0b7800fda4945d05ae1793 not found: ID does not exist" containerID="663731cad30dfe107241ae97eb0f5c1d3dd773984c0b7800fda4945d05ae1793"
Feb 02 21:43:24 crc kubenswrapper[4789]: I0202 21:43:24.170555 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"663731cad30dfe107241ae97eb0f5c1d3dd773984c0b7800fda4945d05ae1793"} err="failed to get container status \"663731cad30dfe107241ae97eb0f5c1d3dd773984c0b7800fda4945d05ae1793\": rpc error: code = NotFound desc = could not find container \"663731cad30dfe107241ae97eb0f5c1d3dd773984c0b7800fda4945d05ae1793\": container with ID starting with 663731cad30dfe107241ae97eb0f5c1d3dd773984c0b7800fda4945d05ae1793 not found: ID does not exist"
Feb 02 21:43:24 crc kubenswrapper[4789]: I0202 21:43:24.170636 4789 scope.go:117] "RemoveContainer" containerID="2be82df2b0e9a95d9535c329c4aac2b5684a8a513a868a9aef3785a4d0ed3c98"
Feb 02 21:43:24 crc kubenswrapper[4789]: E0202 21:43:24.171185 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2be82df2b0e9a95d9535c329c4aac2b5684a8a513a868a9aef3785a4d0ed3c98\": container with ID starting with 2be82df2b0e9a95d9535c329c4aac2b5684a8a513a868a9aef3785a4d0ed3c98 not found: ID does not exist" containerID="2be82df2b0e9a95d9535c329c4aac2b5684a8a513a868a9aef3785a4d0ed3c98"
Feb 02 21:43:24 crc kubenswrapper[4789]: I0202 21:43:24.171229 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2be82df2b0e9a95d9535c329c4aac2b5684a8a513a868a9aef3785a4d0ed3c98"} err="failed to get container status \"2be82df2b0e9a95d9535c329c4aac2b5684a8a513a868a9aef3785a4d0ed3c98\": rpc error: code = NotFound desc = could not find container \"2be82df2b0e9a95d9535c329c4aac2b5684a8a513a868a9aef3785a4d0ed3c98\": container with ID starting with 2be82df2b0e9a95d9535c329c4aac2b5684a8a513a868a9aef3785a4d0ed3c98 not found: ID does not exist"
Feb 02 21:43:24 crc kubenswrapper[4789]: I0202 21:43:24.171256 4789 scope.go:117] "RemoveContainer" containerID="0ecc6d0a811029b0d3a588b5bb03e52c54a14bb62d8653b1cae84cfe94675a80"
Feb 02 21:43:24 crc kubenswrapper[4789]: E0202 21:43:24.174708 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ecc6d0a811029b0d3a588b5bb03e52c54a14bb62d8653b1cae84cfe94675a80\": container with ID starting with 0ecc6d0a811029b0d3a588b5bb03e52c54a14bb62d8653b1cae84cfe94675a80 not found: ID does not exist" containerID="0ecc6d0a811029b0d3a588b5bb03e52c54a14bb62d8653b1cae84cfe94675a80"
Feb 02 21:43:24 crc kubenswrapper[4789]: I0202 21:43:24.174912 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ecc6d0a811029b0d3a588b5bb03e52c54a14bb62d8653b1cae84cfe94675a80"} err="failed to get container status \"0ecc6d0a811029b0d3a588b5bb03e52c54a14bb62d8653b1cae84cfe94675a80\": rpc error: code = NotFound desc = could not find container \"0ecc6d0a811029b0d3a588b5bb03e52c54a14bb62d8653b1cae84cfe94675a80\": container with ID starting with 0ecc6d0a811029b0d3a588b5bb03e52c54a14bb62d8653b1cae84cfe94675a80 not found: ID does not exist"
Feb 02 21:43:24 crc kubenswrapper[4789]: E0202 21:43:24.181745 4789 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bf87933_483d_4608_9fab_9f0cfa9fb326.slice/crio-e7c3a1595ce13bbd3297b04c5762f5b858b010a5368c28ff7e0b48b29e55dbbb\": RecentStats: unable to find data in memory cache]"
Feb 02 21:43:24 crc kubenswrapper[4789]: I0202 21:43:24.427786 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bf87933-483d-4608-9fab-9f0cfa9fb326" path="/var/lib/kubelet/pods/0bf87933-483d-4608-9fab-9f0cfa9fb326/volumes"
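Every "RemoveContainer" in this section is followed by a NotFound error from the runtime, which the kubelet logs and then moves past: by the time the cleanup path runs, the container is already gone, so NotFound is effectively success. A minimal Go sketch of that idempotent delete, assuming a CRI-style client with a RemoveContainer method (the interface here is a hypothetical stand-in, not the real CRI binding):

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// runtimeClient stands in for a CRI runtime client; only the method
// used here is declared.
type runtimeClient interface {
	RemoveContainer(ctx context.Context, id string) error
}

// removeIfPresent treats NotFound as success, mirroring how the log
// above tolerates "container ... not found" during cleanup races:
// another path may have already deleted the container.
func removeIfPresent(ctx context.Context, rc runtimeClient, id string) error {
	err := rc.RemoveContainer(ctx, id)
	if err == nil || status.Code(err) == codes.NotFound {
		return nil
	}
	return fmt.Errorf("remove container %s: %w", id, err)
}

// fake always reports NotFound, like the runtime responses above.
type fake struct{}

func (fake) RemoveContainer(ctx context.Context, id string) error {
	return status.Error(codes.NotFound, "could not find container "+id)
}

func main() {
	// Prints <nil>: the NotFound is swallowed as an already-done delete.
	fmt.Println(removeIfPresent(context.Background(), fake{}, "dfd72fb016b8"))
}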
cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 02 21:43:28 crc kubenswrapper[4789]: E0202 21:43:28.211069 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10 is running failed: container process not found" containerID="17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 02 21:43:28 crc kubenswrapper[4789]: E0202 21:43:28.211714 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10 is running failed: container process not found" containerID="17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 02 21:43:28 crc kubenswrapper[4789]: E0202 21:43:28.211788 4789 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-tjn59" podUID="0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1" containerName="ovsdb-server" Feb 02 21:43:28 crc kubenswrapper[4789]: E0202 21:43:28.211913 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c5300b0e812632a91b74ba2ab909199288368ea0a90d2cfd2fd7fbf3551f6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 02 21:43:28 crc kubenswrapper[4789]: E0202 21:43:28.213821 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c5300b0e812632a91b74ba2ab909199288368ea0a90d2cfd2fd7fbf3551f6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 02 21:43:28 crc kubenswrapper[4789]: E0202 21:43:28.215987 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c5300b0e812632a91b74ba2ab909199288368ea0a90d2cfd2fd7fbf3551f6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 02 21:43:28 crc kubenswrapper[4789]: E0202 21:43:28.216030 4789 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-tjn59" podUID="0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1" containerName="ovs-vswitchd" Feb 02 21:43:33 crc kubenswrapper[4789]: E0202 21:43:33.210123 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10 is running failed: container process not found" containerID="17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 02 21:43:33 crc 
kubenswrapper[4789]: E0202 21:43:33.212195 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10 is running failed: container process not found" containerID="17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 02 21:43:33 crc kubenswrapper[4789]: E0202 21:43:33.212569 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c5300b0e812632a91b74ba2ab909199288368ea0a90d2cfd2fd7fbf3551f6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 02 21:43:33 crc kubenswrapper[4789]: E0202 21:43:33.212798 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10 is running failed: container process not found" containerID="17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 02 21:43:33 crc kubenswrapper[4789]: E0202 21:43:33.212847 4789 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-tjn59" podUID="0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1" containerName="ovsdb-server" Feb 02 21:43:33 crc kubenswrapper[4789]: E0202 21:43:33.215139 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c5300b0e812632a91b74ba2ab909199288368ea0a90d2cfd2fd7fbf3551f6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 02 21:43:33 crc kubenswrapper[4789]: E0202 21:43:33.217251 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c5300b0e812632a91b74ba2ab909199288368ea0a90d2cfd2fd7fbf3551f6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 02 21:43:33 crc kubenswrapper[4789]: E0202 21:43:33.217321 4789 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-tjn59" podUID="0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1" containerName="ovs-vswitchd" Feb 02 21:43:36 crc kubenswrapper[4789]: I0202 21:43:36.732609 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 02 21:43:36 crc kubenswrapper[4789]: I0202 21:43:36.869173 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87f6bccb-d5fc-4868-aca2-734d16898805-combined-ca-bundle\") pod \"87f6bccb-d5fc-4868-aca2-734d16898805\" (UID: \"87f6bccb-d5fc-4868-aca2-734d16898805\") " Feb 02 21:43:36 crc kubenswrapper[4789]: I0202 21:43:36.869358 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/87f6bccb-d5fc-4868-aca2-734d16898805-cache\") pod \"87f6bccb-d5fc-4868-aca2-734d16898805\" (UID: \"87f6bccb-d5fc-4868-aca2-734d16898805\") " Feb 02 21:43:36 crc kubenswrapper[4789]: I0202 21:43:36.869397 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"87f6bccb-d5fc-4868-aca2-734d16898805\" (UID: \"87f6bccb-d5fc-4868-aca2-734d16898805\") " Feb 02 21:43:36 crc kubenswrapper[4789]: I0202 21:43:36.869428 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/87f6bccb-d5fc-4868-aca2-734d16898805-etc-swift\") pod \"87f6bccb-d5fc-4868-aca2-734d16898805\" (UID: \"87f6bccb-d5fc-4868-aca2-734d16898805\") " Feb 02 21:43:36 crc kubenswrapper[4789]: I0202 21:43:36.869455 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gm8x\" (UniqueName: \"kubernetes.io/projected/87f6bccb-d5fc-4868-aca2-734d16898805-kube-api-access-2gm8x\") pod \"87f6bccb-d5fc-4868-aca2-734d16898805\" (UID: \"87f6bccb-d5fc-4868-aca2-734d16898805\") " Feb 02 21:43:36 crc kubenswrapper[4789]: I0202 21:43:36.869517 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/87f6bccb-d5fc-4868-aca2-734d16898805-lock\") pod \"87f6bccb-d5fc-4868-aca2-734d16898805\" (UID: \"87f6bccb-d5fc-4868-aca2-734d16898805\") " Feb 02 21:43:36 crc kubenswrapper[4789]: I0202 21:43:36.870175 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87f6bccb-d5fc-4868-aca2-734d16898805-cache" (OuterVolumeSpecName: "cache") pod "87f6bccb-d5fc-4868-aca2-734d16898805" (UID: "87f6bccb-d5fc-4868-aca2-734d16898805"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:43:36 crc kubenswrapper[4789]: I0202 21:43:36.870491 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87f6bccb-d5fc-4868-aca2-734d16898805-lock" (OuterVolumeSpecName: "lock") pod "87f6bccb-d5fc-4868-aca2-734d16898805" (UID: "87f6bccb-d5fc-4868-aca2-734d16898805"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:43:36 crc kubenswrapper[4789]: I0202 21:43:36.878124 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87f6bccb-d5fc-4868-aca2-734d16898805-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "87f6bccb-d5fc-4868-aca2-734d16898805" (UID: "87f6bccb-d5fc-4868-aca2-734d16898805"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:36 crc kubenswrapper[4789]: I0202 21:43:36.879858 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87f6bccb-d5fc-4868-aca2-734d16898805-kube-api-access-2gm8x" (OuterVolumeSpecName: "kube-api-access-2gm8x") pod "87f6bccb-d5fc-4868-aca2-734d16898805" (UID: "87f6bccb-d5fc-4868-aca2-734d16898805"). InnerVolumeSpecName "kube-api-access-2gm8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:36 crc kubenswrapper[4789]: I0202 21:43:36.889843 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "swift") pod "87f6bccb-d5fc-4868-aca2-734d16898805" (UID: "87f6bccb-d5fc-4868-aca2-734d16898805"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 21:43:36 crc kubenswrapper[4789]: I0202 21:43:36.964311 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tjn59_0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1/ovs-vswitchd/0.log" Feb 02 21:43:36 crc kubenswrapper[4789]: I0202 21:43:36.965818 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-tjn59" Feb 02 21:43:36 crc kubenswrapper[4789]: I0202 21:43:36.971335 4789 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/87f6bccb-d5fc-4868-aca2-734d16898805-cache\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:36 crc kubenswrapper[4789]: I0202 21:43:36.971383 4789 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Feb 02 21:43:36 crc kubenswrapper[4789]: I0202 21:43:36.971393 4789 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/87f6bccb-d5fc-4868-aca2-734d16898805-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:36 crc kubenswrapper[4789]: I0202 21:43:36.971403 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gm8x\" (UniqueName: \"kubernetes.io/projected/87f6bccb-d5fc-4868-aca2-734d16898805-kube-api-access-2gm8x\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:36 crc kubenswrapper[4789]: I0202 21:43:36.971412 4789 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/87f6bccb-d5fc-4868-aca2-734d16898805-lock\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:36 crc kubenswrapper[4789]: I0202 21:43:36.992229 4789 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.072899 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1-var-lib\") pod \"0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1\" (UID: \"0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1\") " Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.072960 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1-var-log\") pod \"0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1\" (UID: \"0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1\") " Feb 02 21:43:37 
crc kubenswrapper[4789]: I0202 21:43:37.073015 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1-var-lib" (OuterVolumeSpecName: "var-lib") pod "0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1" (UID: "0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.073109 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1-scripts\") pod \"0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1\" (UID: \"0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1\") " Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.073139 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1-var-log" (OuterVolumeSpecName: "var-log") pod "0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1" (UID: "0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.073168 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1-etc-ovs\") pod \"0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1\" (UID: \"0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1\") " Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.073201 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1-var-run\") pod \"0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1\" (UID: \"0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1\") " Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.073238 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1" (UID: "0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.073239 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bfsx\" (UniqueName: \"kubernetes.io/projected/0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1-kube-api-access-6bfsx\") pod \"0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1\" (UID: \"0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1\") " Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.073293 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1-var-run" (OuterVolumeSpecName: "var-run") pod "0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1" (UID: "0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.073879 4789 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.073898 4789 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1-var-lib\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.073910 4789 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1-var-log\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.073921 4789 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1-etc-ovs\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.073931 4789 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1-var-run\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.074188 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1-scripts" (OuterVolumeSpecName: "scripts") pod "0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1" (UID: "0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.076553 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1-kube-api-access-6bfsx" (OuterVolumeSpecName: "kube-api-access-6bfsx") pod "0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1" (UID: "0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1"). InnerVolumeSpecName "kube-api-access-6bfsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.164006 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87f6bccb-d5fc-4868-aca2-734d16898805-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87f6bccb-d5fc-4868-aca2-734d16898805" (UID: "87f6bccb-d5fc-4868-aca2-734d16898805"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.171522 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.176268 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.176329 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bfsx\" (UniqueName: \"kubernetes.io/projected/0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1-kube-api-access-6bfsx\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.176352 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87f6bccb-d5fc-4868-aca2-734d16898805-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.204696 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tjn59_0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1/ovs-vswitchd/0.log" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.205935 4789 generic.go:334] "Generic (PLEG): container finished" podID="0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1" containerID="6c5300b0e812632a91b74ba2ab909199288368ea0a90d2cfd2fd7fbf3551f6b6" exitCode=137 Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.206007 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-tjn59" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.206046 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tjn59" event={"ID":"0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1","Type":"ContainerDied","Data":"6c5300b0e812632a91b74ba2ab909199288368ea0a90d2cfd2fd7fbf3551f6b6"} Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.206124 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tjn59" event={"ID":"0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1","Type":"ContainerDied","Data":"3d505979889fe8acec9d3d5c9c46cb8caf806d34a49a9d462b4a21176917521b"} Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.206155 4789 scope.go:117] "RemoveContainer" containerID="6c5300b0e812632a91b74ba2ab909199288368ea0a90d2cfd2fd7fbf3551f6b6" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.211307 4789 generic.go:334] "Generic (PLEG): container finished" podID="57c9f301-615a-4182-b17e-3ae250e8335c" containerID="fbe1157b2a6d65b0c7188f948585dfc0be3a3d76f5c3b57620ea3d6091a4927c" exitCode=137 Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.211351 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"57c9f301-615a-4182-b17e-3ae250e8335c","Type":"ContainerDied","Data":"fbe1157b2a6d65b0c7188f948585dfc0be3a3d76f5c3b57620ea3d6091a4927c"} Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.211397 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"57c9f301-615a-4182-b17e-3ae250e8335c","Type":"ContainerDied","Data":"157fe788c3f15e076361528ba462a9f7d3a289ad3d8310817e32a82af5c86217"} Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.211405 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.226240 4789 generic.go:334] "Generic (PLEG): container finished" podID="87f6bccb-d5fc-4868-aca2-734d16898805" containerID="19152882f397a8eaf801b2e8d8fd5858677ede37b6cfd35d02fe8847efc8de27" exitCode=137 Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.226284 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"87f6bccb-d5fc-4868-aca2-734d16898805","Type":"ContainerDied","Data":"19152882f397a8eaf801b2e8d8fd5858677ede37b6cfd35d02fe8847efc8de27"} Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.226311 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"87f6bccb-d5fc-4868-aca2-734d16898805","Type":"ContainerDied","Data":"fa3ee7e4fc1174542a4372cd96b69826c253a37869acc4458144491c01712e4a"} Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.226653 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.238847 4789 scope.go:117] "RemoveContainer" containerID="17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.277243 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57c9f301-615a-4182-b17e-3ae250e8335c-combined-ca-bundle\") pod \"57c9f301-615a-4182-b17e-3ae250e8335c\" (UID: \"57c9f301-615a-4182-b17e-3ae250e8335c\") " Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.277304 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/57c9f301-615a-4182-b17e-3ae250e8335c-etc-machine-id\") pod \"57c9f301-615a-4182-b17e-3ae250e8335c\" (UID: \"57c9f301-615a-4182-b17e-3ae250e8335c\") " Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.277372 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cqb5\" (UniqueName: \"kubernetes.io/projected/57c9f301-615a-4182-b17e-3ae250e8335c-kube-api-access-7cqb5\") pod \"57c9f301-615a-4182-b17e-3ae250e8335c\" (UID: \"57c9f301-615a-4182-b17e-3ae250e8335c\") " Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.277443 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57c9f301-615a-4182-b17e-3ae250e8335c-config-data\") pod \"57c9f301-615a-4182-b17e-3ae250e8335c\" (UID: \"57c9f301-615a-4182-b17e-3ae250e8335c\") " Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.277518 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57c9f301-615a-4182-b17e-3ae250e8335c-config-data-custom\") pod \"57c9f301-615a-4182-b17e-3ae250e8335c\" (UID: \"57c9f301-615a-4182-b17e-3ae250e8335c\") " Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.277549 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57c9f301-615a-4182-b17e-3ae250e8335c-scripts\") pod \"57c9f301-615a-4182-b17e-3ae250e8335c\" (UID: \"57c9f301-615a-4182-b17e-3ae250e8335c\") " Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.277739 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/57c9f301-615a-4182-b17e-3ae250e8335c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "57c9f301-615a-4182-b17e-3ae250e8335c" (UID: "57c9f301-615a-4182-b17e-3ae250e8335c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.277977 4789 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/57c9f301-615a-4182-b17e-3ae250e8335c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.281330 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57c9f301-615a-4182-b17e-3ae250e8335c-kube-api-access-7cqb5" (OuterVolumeSpecName: "kube-api-access-7cqb5") pod "57c9f301-615a-4182-b17e-3ae250e8335c" (UID: "57c9f301-615a-4182-b17e-3ae250e8335c"). InnerVolumeSpecName "kube-api-access-7cqb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.283442 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57c9f301-615a-4182-b17e-3ae250e8335c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "57c9f301-615a-4182-b17e-3ae250e8335c" (UID: "57c9f301-615a-4182-b17e-3ae250e8335c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.285058 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57c9f301-615a-4182-b17e-3ae250e8335c-scripts" (OuterVolumeSpecName: "scripts") pod "57c9f301-615a-4182-b17e-3ae250e8335c" (UID: "57c9f301-615a-4182-b17e-3ae250e8335c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.316764 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-tjn59"] Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.320275 4789 scope.go:117] "RemoveContainer" containerID="18af5228ac54d94f667c107fc2d86d65f1501a6cb0e8343a4e14781b1065e8a2" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.328636 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-tjn59"] Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.333741 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57c9f301-615a-4182-b17e-3ae250e8335c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57c9f301-615a-4182-b17e-3ae250e8335c" (UID: "57c9f301-615a-4182-b17e-3ae250e8335c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.336641 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.341555 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.372289 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57c9f301-615a-4182-b17e-3ae250e8335c-config-data" (OuterVolumeSpecName: "config-data") pod "57c9f301-615a-4182-b17e-3ae250e8335c" (UID: "57c9f301-615a-4182-b17e-3ae250e8335c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.379185 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57c9f301-615a-4182-b17e-3ae250e8335c-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.379217 4789 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57c9f301-615a-4182-b17e-3ae250e8335c-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.379231 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57c9f301-615a-4182-b17e-3ae250e8335c-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.379243 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57c9f301-615a-4182-b17e-3ae250e8335c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.379255 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cqb5\" (UniqueName: \"kubernetes.io/projected/57c9f301-615a-4182-b17e-3ae250e8335c-kube-api-access-7cqb5\") on node \"crc\" DevicePath \"\"" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.405496 4789 scope.go:117] "RemoveContainer" containerID="6c5300b0e812632a91b74ba2ab909199288368ea0a90d2cfd2fd7fbf3551f6b6" Feb 02 21:43:37 crc kubenswrapper[4789]: E0202 21:43:37.405921 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c5300b0e812632a91b74ba2ab909199288368ea0a90d2cfd2fd7fbf3551f6b6\": container with ID starting with 6c5300b0e812632a91b74ba2ab909199288368ea0a90d2cfd2fd7fbf3551f6b6 not found: ID does not exist" containerID="6c5300b0e812632a91b74ba2ab909199288368ea0a90d2cfd2fd7fbf3551f6b6" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.405964 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c5300b0e812632a91b74ba2ab909199288368ea0a90d2cfd2fd7fbf3551f6b6"} err="failed to get container status \"6c5300b0e812632a91b74ba2ab909199288368ea0a90d2cfd2fd7fbf3551f6b6\": rpc error: code = NotFound desc = could not find container \"6c5300b0e812632a91b74ba2ab909199288368ea0a90d2cfd2fd7fbf3551f6b6\": container with ID starting with 6c5300b0e812632a91b74ba2ab909199288368ea0a90d2cfd2fd7fbf3551f6b6 not found: ID does not exist" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.405989 4789 scope.go:117] "RemoveContainer" containerID="17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10" Feb 02 21:43:37 crc kubenswrapper[4789]: E0202 21:43:37.406254 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10\": container with ID starting with 17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10 not found: ID does not exist" containerID="17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.406278 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10"} err="failed to get container status 
\"17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10\": rpc error: code = NotFound desc = could not find container \"17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10\": container with ID starting with 17dd53bbaea1cc18fc470d6b2376b40412de4caece9ebad3ce307cda50a79e10 not found: ID does not exist" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.406297 4789 scope.go:117] "RemoveContainer" containerID="18af5228ac54d94f667c107fc2d86d65f1501a6cb0e8343a4e14781b1065e8a2" Feb 02 21:43:37 crc kubenswrapper[4789]: E0202 21:43:37.406484 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18af5228ac54d94f667c107fc2d86d65f1501a6cb0e8343a4e14781b1065e8a2\": container with ID starting with 18af5228ac54d94f667c107fc2d86d65f1501a6cb0e8343a4e14781b1065e8a2 not found: ID does not exist" containerID="18af5228ac54d94f667c107fc2d86d65f1501a6cb0e8343a4e14781b1065e8a2" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.406507 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18af5228ac54d94f667c107fc2d86d65f1501a6cb0e8343a4e14781b1065e8a2"} err="failed to get container status \"18af5228ac54d94f667c107fc2d86d65f1501a6cb0e8343a4e14781b1065e8a2\": rpc error: code = NotFound desc = could not find container \"18af5228ac54d94f667c107fc2d86d65f1501a6cb0e8343a4e14781b1065e8a2\": container with ID starting with 18af5228ac54d94f667c107fc2d86d65f1501a6cb0e8343a4e14781b1065e8a2 not found: ID does not exist" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.406520 4789 scope.go:117] "RemoveContainer" containerID="ecc06c5902aa50d55c9a1d5a9d91397ab8aa463f6ac87ac09a03b387026f2890" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.426630 4789 scope.go:117] "RemoveContainer" containerID="fbe1157b2a6d65b0c7188f948585dfc0be3a3d76f5c3b57620ea3d6091a4927c" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.447684 4789 scope.go:117] "RemoveContainer" containerID="ecc06c5902aa50d55c9a1d5a9d91397ab8aa463f6ac87ac09a03b387026f2890" Feb 02 21:43:37 crc kubenswrapper[4789]: E0202 21:43:37.448018 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecc06c5902aa50d55c9a1d5a9d91397ab8aa463f6ac87ac09a03b387026f2890\": container with ID starting with ecc06c5902aa50d55c9a1d5a9d91397ab8aa463f6ac87ac09a03b387026f2890 not found: ID does not exist" containerID="ecc06c5902aa50d55c9a1d5a9d91397ab8aa463f6ac87ac09a03b387026f2890" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.448045 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecc06c5902aa50d55c9a1d5a9d91397ab8aa463f6ac87ac09a03b387026f2890"} err="failed to get container status \"ecc06c5902aa50d55c9a1d5a9d91397ab8aa463f6ac87ac09a03b387026f2890\": rpc error: code = NotFound desc = could not find container \"ecc06c5902aa50d55c9a1d5a9d91397ab8aa463f6ac87ac09a03b387026f2890\": container with ID starting with ecc06c5902aa50d55c9a1d5a9d91397ab8aa463f6ac87ac09a03b387026f2890 not found: ID does not exist" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.448065 4789 scope.go:117] "RemoveContainer" containerID="fbe1157b2a6d65b0c7188f948585dfc0be3a3d76f5c3b57620ea3d6091a4927c" Feb 02 21:43:37 crc kubenswrapper[4789]: E0202 21:43:37.448534 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"fbe1157b2a6d65b0c7188f948585dfc0be3a3d76f5c3b57620ea3d6091a4927c\": container with ID starting with fbe1157b2a6d65b0c7188f948585dfc0be3a3d76f5c3b57620ea3d6091a4927c not found: ID does not exist" containerID="fbe1157b2a6d65b0c7188f948585dfc0be3a3d76f5c3b57620ea3d6091a4927c" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.448557 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbe1157b2a6d65b0c7188f948585dfc0be3a3d76f5c3b57620ea3d6091a4927c"} err="failed to get container status \"fbe1157b2a6d65b0c7188f948585dfc0be3a3d76f5c3b57620ea3d6091a4927c\": rpc error: code = NotFound desc = could not find container \"fbe1157b2a6d65b0c7188f948585dfc0be3a3d76f5c3b57620ea3d6091a4927c\": container with ID starting with fbe1157b2a6d65b0c7188f948585dfc0be3a3d76f5c3b57620ea3d6091a4927c not found: ID does not exist" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.448570 4789 scope.go:117] "RemoveContainer" containerID="19152882f397a8eaf801b2e8d8fd5858677ede37b6cfd35d02fe8847efc8de27" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.471071 4789 scope.go:117] "RemoveContainer" containerID="758668fe2c5ee9470a7c3aa0b9a80c8ff6b3ee015da4b7aab90845bdc8131fbe" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.494829 4789 scope.go:117] "RemoveContainer" containerID="772b32b4a568764e9d52dc458b0ac79908b73b42aa7c0ab429a6e69ef36ff4ee" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.515917 4789 scope.go:117] "RemoveContainer" containerID="81a1db9e6f95967f7398c2d9e33aef20a4ebd27dac4bde8ca54c1d2cb9e32588" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.536276 4789 scope.go:117] "RemoveContainer" containerID="292bcc186a04274a666bd4bca60221734a4bf42019919ba532cfde2503636ddb" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.550088 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.554535 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.567821 4789 scope.go:117] "RemoveContainer" containerID="1e6fc4897376cc9d269976f61acf3f0cc76fb66b261f7e18fb05f5f9f439d27d" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.587299 4789 scope.go:117] "RemoveContainer" containerID="41f66ea30afde5a33d387e2cc7b5c5ed11aef0e66a8afd458c8af299945c2460" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.613410 4789 scope.go:117] "RemoveContainer" containerID="7a20dacf9652208f7b99bf2a1079fa1a4eb150591b3740a517f85585c21a53d1" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.643167 4789 scope.go:117] "RemoveContainer" containerID="aab045fa01e8633951d3b23cb6099a13479fc7bde9e851b10aeb53ad724f1a5a" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.667386 4789 scope.go:117] "RemoveContainer" containerID="d8b8973838965c20503722920a92fa3f55adad61b2b29d0ad5b46e04847ba642" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.694806 4789 scope.go:117] "RemoveContainer" containerID="f8710e800cb558add663bfff070701d51801997c411687aea039144baf3f407d" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.727744 4789 scope.go:117] "RemoveContainer" containerID="b07c3c791de729e8c85f1895c49db2a43d74603b713f577900b8371d9d871050" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.752278 4789 scope.go:117] "RemoveContainer" containerID="b2a613095dfded30ccf9e469a7904687f82e0e1076df8bb3c12d61ae91f09cbb" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 
21:43:37.774516 4789 scope.go:117] "RemoveContainer" containerID="dc1d8d39fd0b72fbfd8a3196945369271e6997b06ed178e120be5a8c661363c0" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.800159 4789 scope.go:117] "RemoveContainer" containerID="db66ce76b54133027343e52fa4a37bee9603c2a78eccea429cb9107f7f66533b" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.832607 4789 scope.go:117] "RemoveContainer" containerID="19152882f397a8eaf801b2e8d8fd5858677ede37b6cfd35d02fe8847efc8de27" Feb 02 21:43:37 crc kubenswrapper[4789]: E0202 21:43:37.833228 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19152882f397a8eaf801b2e8d8fd5858677ede37b6cfd35d02fe8847efc8de27\": container with ID starting with 19152882f397a8eaf801b2e8d8fd5858677ede37b6cfd35d02fe8847efc8de27 not found: ID does not exist" containerID="19152882f397a8eaf801b2e8d8fd5858677ede37b6cfd35d02fe8847efc8de27" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.833277 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19152882f397a8eaf801b2e8d8fd5858677ede37b6cfd35d02fe8847efc8de27"} err="failed to get container status \"19152882f397a8eaf801b2e8d8fd5858677ede37b6cfd35d02fe8847efc8de27\": rpc error: code = NotFound desc = could not find container \"19152882f397a8eaf801b2e8d8fd5858677ede37b6cfd35d02fe8847efc8de27\": container with ID starting with 19152882f397a8eaf801b2e8d8fd5858677ede37b6cfd35d02fe8847efc8de27 not found: ID does not exist" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.833311 4789 scope.go:117] "RemoveContainer" containerID="758668fe2c5ee9470a7c3aa0b9a80c8ff6b3ee015da4b7aab90845bdc8131fbe" Feb 02 21:43:37 crc kubenswrapper[4789]: E0202 21:43:37.833941 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"758668fe2c5ee9470a7c3aa0b9a80c8ff6b3ee015da4b7aab90845bdc8131fbe\": container with ID starting with 758668fe2c5ee9470a7c3aa0b9a80c8ff6b3ee015da4b7aab90845bdc8131fbe not found: ID does not exist" containerID="758668fe2c5ee9470a7c3aa0b9a80c8ff6b3ee015da4b7aab90845bdc8131fbe" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.834024 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"758668fe2c5ee9470a7c3aa0b9a80c8ff6b3ee015da4b7aab90845bdc8131fbe"} err="failed to get container status \"758668fe2c5ee9470a7c3aa0b9a80c8ff6b3ee015da4b7aab90845bdc8131fbe\": rpc error: code = NotFound desc = could not find container \"758668fe2c5ee9470a7c3aa0b9a80c8ff6b3ee015da4b7aab90845bdc8131fbe\": container with ID starting with 758668fe2c5ee9470a7c3aa0b9a80c8ff6b3ee015da4b7aab90845bdc8131fbe not found: ID does not exist" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.834076 4789 scope.go:117] "RemoveContainer" containerID="772b32b4a568764e9d52dc458b0ac79908b73b42aa7c0ab429a6e69ef36ff4ee" Feb 02 21:43:37 crc kubenswrapper[4789]: E0202 21:43:37.834698 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"772b32b4a568764e9d52dc458b0ac79908b73b42aa7c0ab429a6e69ef36ff4ee\": container with ID starting with 772b32b4a568764e9d52dc458b0ac79908b73b42aa7c0ab429a6e69ef36ff4ee not found: ID does not exist" containerID="772b32b4a568764e9d52dc458b0ac79908b73b42aa7c0ab429a6e69ef36ff4ee" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.834734 4789 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"772b32b4a568764e9d52dc458b0ac79908b73b42aa7c0ab429a6e69ef36ff4ee"} err="failed to get container status \"772b32b4a568764e9d52dc458b0ac79908b73b42aa7c0ab429a6e69ef36ff4ee\": rpc error: code = NotFound desc = could not find container \"772b32b4a568764e9d52dc458b0ac79908b73b42aa7c0ab429a6e69ef36ff4ee\": container with ID starting with 772b32b4a568764e9d52dc458b0ac79908b73b42aa7c0ab429a6e69ef36ff4ee not found: ID does not exist" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.834760 4789 scope.go:117] "RemoveContainer" containerID="81a1db9e6f95967f7398c2d9e33aef20a4ebd27dac4bde8ca54c1d2cb9e32588" Feb 02 21:43:37 crc kubenswrapper[4789]: E0202 21:43:37.836100 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81a1db9e6f95967f7398c2d9e33aef20a4ebd27dac4bde8ca54c1d2cb9e32588\": container with ID starting with 81a1db9e6f95967f7398c2d9e33aef20a4ebd27dac4bde8ca54c1d2cb9e32588 not found: ID does not exist" containerID="81a1db9e6f95967f7398c2d9e33aef20a4ebd27dac4bde8ca54c1d2cb9e32588" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.836195 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81a1db9e6f95967f7398c2d9e33aef20a4ebd27dac4bde8ca54c1d2cb9e32588"} err="failed to get container status \"81a1db9e6f95967f7398c2d9e33aef20a4ebd27dac4bde8ca54c1d2cb9e32588\": rpc error: code = NotFound desc = could not find container \"81a1db9e6f95967f7398c2d9e33aef20a4ebd27dac4bde8ca54c1d2cb9e32588\": container with ID starting with 81a1db9e6f95967f7398c2d9e33aef20a4ebd27dac4bde8ca54c1d2cb9e32588 not found: ID does not exist" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.836248 4789 scope.go:117] "RemoveContainer" containerID="292bcc186a04274a666bd4bca60221734a4bf42019919ba532cfde2503636ddb" Feb 02 21:43:37 crc kubenswrapper[4789]: E0202 21:43:37.836741 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"292bcc186a04274a666bd4bca60221734a4bf42019919ba532cfde2503636ddb\": container with ID starting with 292bcc186a04274a666bd4bca60221734a4bf42019919ba532cfde2503636ddb not found: ID does not exist" containerID="292bcc186a04274a666bd4bca60221734a4bf42019919ba532cfde2503636ddb" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.836802 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"292bcc186a04274a666bd4bca60221734a4bf42019919ba532cfde2503636ddb"} err="failed to get container status \"292bcc186a04274a666bd4bca60221734a4bf42019919ba532cfde2503636ddb\": rpc error: code = NotFound desc = could not find container \"292bcc186a04274a666bd4bca60221734a4bf42019919ba532cfde2503636ddb\": container with ID starting with 292bcc186a04274a666bd4bca60221734a4bf42019919ba532cfde2503636ddb not found: ID does not exist" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.836842 4789 scope.go:117] "RemoveContainer" containerID="1e6fc4897376cc9d269976f61acf3f0cc76fb66b261f7e18fb05f5f9f439d27d" Feb 02 21:43:37 crc kubenswrapper[4789]: E0202 21:43:37.837234 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e6fc4897376cc9d269976f61acf3f0cc76fb66b261f7e18fb05f5f9f439d27d\": container with ID starting with 1e6fc4897376cc9d269976f61acf3f0cc76fb66b261f7e18fb05f5f9f439d27d not found: ID does not exist" 
containerID="1e6fc4897376cc9d269976f61acf3f0cc76fb66b261f7e18fb05f5f9f439d27d" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.837277 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e6fc4897376cc9d269976f61acf3f0cc76fb66b261f7e18fb05f5f9f439d27d"} err="failed to get container status \"1e6fc4897376cc9d269976f61acf3f0cc76fb66b261f7e18fb05f5f9f439d27d\": rpc error: code = NotFound desc = could not find container \"1e6fc4897376cc9d269976f61acf3f0cc76fb66b261f7e18fb05f5f9f439d27d\": container with ID starting with 1e6fc4897376cc9d269976f61acf3f0cc76fb66b261f7e18fb05f5f9f439d27d not found: ID does not exist" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.837302 4789 scope.go:117] "RemoveContainer" containerID="41f66ea30afde5a33d387e2cc7b5c5ed11aef0e66a8afd458c8af299945c2460" Feb 02 21:43:37 crc kubenswrapper[4789]: E0202 21:43:37.837833 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41f66ea30afde5a33d387e2cc7b5c5ed11aef0e66a8afd458c8af299945c2460\": container with ID starting with 41f66ea30afde5a33d387e2cc7b5c5ed11aef0e66a8afd458c8af299945c2460 not found: ID does not exist" containerID="41f66ea30afde5a33d387e2cc7b5c5ed11aef0e66a8afd458c8af299945c2460" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.837886 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41f66ea30afde5a33d387e2cc7b5c5ed11aef0e66a8afd458c8af299945c2460"} err="failed to get container status \"41f66ea30afde5a33d387e2cc7b5c5ed11aef0e66a8afd458c8af299945c2460\": rpc error: code = NotFound desc = could not find container \"41f66ea30afde5a33d387e2cc7b5c5ed11aef0e66a8afd458c8af299945c2460\": container with ID starting with 41f66ea30afde5a33d387e2cc7b5c5ed11aef0e66a8afd458c8af299945c2460 not found: ID does not exist" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.837923 4789 scope.go:117] "RemoveContainer" containerID="7a20dacf9652208f7b99bf2a1079fa1a4eb150591b3740a517f85585c21a53d1" Feb 02 21:43:37 crc kubenswrapper[4789]: E0202 21:43:37.838327 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a20dacf9652208f7b99bf2a1079fa1a4eb150591b3740a517f85585c21a53d1\": container with ID starting with 7a20dacf9652208f7b99bf2a1079fa1a4eb150591b3740a517f85585c21a53d1 not found: ID does not exist" containerID="7a20dacf9652208f7b99bf2a1079fa1a4eb150591b3740a517f85585c21a53d1" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.838365 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a20dacf9652208f7b99bf2a1079fa1a4eb150591b3740a517f85585c21a53d1"} err="failed to get container status \"7a20dacf9652208f7b99bf2a1079fa1a4eb150591b3740a517f85585c21a53d1\": rpc error: code = NotFound desc = could not find container \"7a20dacf9652208f7b99bf2a1079fa1a4eb150591b3740a517f85585c21a53d1\": container with ID starting with 7a20dacf9652208f7b99bf2a1079fa1a4eb150591b3740a517f85585c21a53d1 not found: ID does not exist" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.838408 4789 scope.go:117] "RemoveContainer" containerID="aab045fa01e8633951d3b23cb6099a13479fc7bde9e851b10aeb53ad724f1a5a" Feb 02 21:43:37 crc kubenswrapper[4789]: E0202 21:43:37.838815 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"aab045fa01e8633951d3b23cb6099a13479fc7bde9e851b10aeb53ad724f1a5a\": container with ID starting with aab045fa01e8633951d3b23cb6099a13479fc7bde9e851b10aeb53ad724f1a5a not found: ID does not exist" containerID="aab045fa01e8633951d3b23cb6099a13479fc7bde9e851b10aeb53ad724f1a5a" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.838889 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aab045fa01e8633951d3b23cb6099a13479fc7bde9e851b10aeb53ad724f1a5a"} err="failed to get container status \"aab045fa01e8633951d3b23cb6099a13479fc7bde9e851b10aeb53ad724f1a5a\": rpc error: code = NotFound desc = could not find container \"aab045fa01e8633951d3b23cb6099a13479fc7bde9e851b10aeb53ad724f1a5a\": container with ID starting with aab045fa01e8633951d3b23cb6099a13479fc7bde9e851b10aeb53ad724f1a5a not found: ID does not exist" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.838934 4789 scope.go:117] "RemoveContainer" containerID="d8b8973838965c20503722920a92fa3f55adad61b2b29d0ad5b46e04847ba642" Feb 02 21:43:37 crc kubenswrapper[4789]: E0202 21:43:37.839281 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8b8973838965c20503722920a92fa3f55adad61b2b29d0ad5b46e04847ba642\": container with ID starting with d8b8973838965c20503722920a92fa3f55adad61b2b29d0ad5b46e04847ba642 not found: ID does not exist" containerID="d8b8973838965c20503722920a92fa3f55adad61b2b29d0ad5b46e04847ba642" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.839331 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8b8973838965c20503722920a92fa3f55adad61b2b29d0ad5b46e04847ba642"} err="failed to get container status \"d8b8973838965c20503722920a92fa3f55adad61b2b29d0ad5b46e04847ba642\": rpc error: code = NotFound desc = could not find container \"d8b8973838965c20503722920a92fa3f55adad61b2b29d0ad5b46e04847ba642\": container with ID starting with d8b8973838965c20503722920a92fa3f55adad61b2b29d0ad5b46e04847ba642 not found: ID does not exist" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.839364 4789 scope.go:117] "RemoveContainer" containerID="f8710e800cb558add663bfff070701d51801997c411687aea039144baf3f407d" Feb 02 21:43:37 crc kubenswrapper[4789]: E0202 21:43:37.839881 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8710e800cb558add663bfff070701d51801997c411687aea039144baf3f407d\": container with ID starting with f8710e800cb558add663bfff070701d51801997c411687aea039144baf3f407d not found: ID does not exist" containerID="f8710e800cb558add663bfff070701d51801997c411687aea039144baf3f407d" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.839919 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8710e800cb558add663bfff070701d51801997c411687aea039144baf3f407d"} err="failed to get container status \"f8710e800cb558add663bfff070701d51801997c411687aea039144baf3f407d\": rpc error: code = NotFound desc = could not find container \"f8710e800cb558add663bfff070701d51801997c411687aea039144baf3f407d\": container with ID starting with f8710e800cb558add663bfff070701d51801997c411687aea039144baf3f407d not found: ID does not exist" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.839949 4789 scope.go:117] "RemoveContainer" containerID="b07c3c791de729e8c85f1895c49db2a43d74603b713f577900b8371d9d871050" Feb 02 21:43:37 crc 
kubenswrapper[4789]: E0202 21:43:37.840281 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b07c3c791de729e8c85f1895c49db2a43d74603b713f577900b8371d9d871050\": container with ID starting with b07c3c791de729e8c85f1895c49db2a43d74603b713f577900b8371d9d871050 not found: ID does not exist" containerID="b07c3c791de729e8c85f1895c49db2a43d74603b713f577900b8371d9d871050" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.840318 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b07c3c791de729e8c85f1895c49db2a43d74603b713f577900b8371d9d871050"} err="failed to get container status \"b07c3c791de729e8c85f1895c49db2a43d74603b713f577900b8371d9d871050\": rpc error: code = NotFound desc = could not find container \"b07c3c791de729e8c85f1895c49db2a43d74603b713f577900b8371d9d871050\": container with ID starting with b07c3c791de729e8c85f1895c49db2a43d74603b713f577900b8371d9d871050 not found: ID does not exist" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.840346 4789 scope.go:117] "RemoveContainer" containerID="b2a613095dfded30ccf9e469a7904687f82e0e1076df8bb3c12d61ae91f09cbb" Feb 02 21:43:37 crc kubenswrapper[4789]: E0202 21:43:37.840789 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2a613095dfded30ccf9e469a7904687f82e0e1076df8bb3c12d61ae91f09cbb\": container with ID starting with b2a613095dfded30ccf9e469a7904687f82e0e1076df8bb3c12d61ae91f09cbb not found: ID does not exist" containerID="b2a613095dfded30ccf9e469a7904687f82e0e1076df8bb3c12d61ae91f09cbb" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.840836 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2a613095dfded30ccf9e469a7904687f82e0e1076df8bb3c12d61ae91f09cbb"} err="failed to get container status \"b2a613095dfded30ccf9e469a7904687f82e0e1076df8bb3c12d61ae91f09cbb\": rpc error: code = NotFound desc = could not find container \"b2a613095dfded30ccf9e469a7904687f82e0e1076df8bb3c12d61ae91f09cbb\": container with ID starting with b2a613095dfded30ccf9e469a7904687f82e0e1076df8bb3c12d61ae91f09cbb not found: ID does not exist" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.840864 4789 scope.go:117] "RemoveContainer" containerID="dc1d8d39fd0b72fbfd8a3196945369271e6997b06ed178e120be5a8c661363c0" Feb 02 21:43:37 crc kubenswrapper[4789]: E0202 21:43:37.841436 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc1d8d39fd0b72fbfd8a3196945369271e6997b06ed178e120be5a8c661363c0\": container with ID starting with dc1d8d39fd0b72fbfd8a3196945369271e6997b06ed178e120be5a8c661363c0 not found: ID does not exist" containerID="dc1d8d39fd0b72fbfd8a3196945369271e6997b06ed178e120be5a8c661363c0" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.841492 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc1d8d39fd0b72fbfd8a3196945369271e6997b06ed178e120be5a8c661363c0"} err="failed to get container status \"dc1d8d39fd0b72fbfd8a3196945369271e6997b06ed178e120be5a8c661363c0\": rpc error: code = NotFound desc = could not find container \"dc1d8d39fd0b72fbfd8a3196945369271e6997b06ed178e120be5a8c661363c0\": container with ID starting with dc1d8d39fd0b72fbfd8a3196945369271e6997b06ed178e120be5a8c661363c0 not found: ID does not exist" Feb 02 21:43:37 crc kubenswrapper[4789]: 
I0202 21:43:37.841539 4789 scope.go:117] "RemoveContainer" containerID="db66ce76b54133027343e52fa4a37bee9603c2a78eccea429cb9107f7f66533b" Feb 02 21:43:37 crc kubenswrapper[4789]: E0202 21:43:37.841983 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db66ce76b54133027343e52fa4a37bee9603c2a78eccea429cb9107f7f66533b\": container with ID starting with db66ce76b54133027343e52fa4a37bee9603c2a78eccea429cb9107f7f66533b not found: ID does not exist" containerID="db66ce76b54133027343e52fa4a37bee9603c2a78eccea429cb9107f7f66533b" Feb 02 21:43:37 crc kubenswrapper[4789]: I0202 21:43:37.842027 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db66ce76b54133027343e52fa4a37bee9603c2a78eccea429cb9107f7f66533b"} err="failed to get container status \"db66ce76b54133027343e52fa4a37bee9603c2a78eccea429cb9107f7f66533b\": rpc error: code = NotFound desc = could not find container \"db66ce76b54133027343e52fa4a37bee9603c2a78eccea429cb9107f7f66533b\": container with ID starting with db66ce76b54133027343e52fa4a37bee9603c2a78eccea429cb9107f7f66533b not found: ID does not exist" Feb 02 21:43:38 crc kubenswrapper[4789]: I0202 21:43:38.436878 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1" path="/var/lib/kubelet/pods/0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1/volumes" Feb 02 21:43:38 crc kubenswrapper[4789]: I0202 21:43:38.444517 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57c9f301-615a-4182-b17e-3ae250e8335c" path="/var/lib/kubelet/pods/57c9f301-615a-4182-b17e-3ae250e8335c/volumes" Feb 02 21:43:38 crc kubenswrapper[4789]: I0202 21:43:38.446245 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" path="/var/lib/kubelet/pods/87f6bccb-d5fc-4868-aca2-734d16898805/volumes" Feb 02 21:43:52 crc kubenswrapper[4789]: I0202 21:43:52.842435 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 21:43:52 crc kubenswrapper[4789]: I0202 21:43:52.843158 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.741470 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8mb6n"] Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.742708 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bf87933-483d-4608-9fab-9f0cfa9fb326" containerName="extract-utilities" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.742742 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bf87933-483d-4608-9fab-9f0cfa9fb326" containerName="extract-utilities" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.742781 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8917d54-451e-4a56-9e8a-142bb5db17e1" containerName="setup-container" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.742978 4789 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="b8917d54-451e-4a56-9e8a-142bb5db17e1" containerName="setup-container" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.743007 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="container-auditor" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.743021 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="container-auditor" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.743040 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="swift-recon-cron" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.743053 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="swift-recon-cron" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.743078 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1" containerName="ovsdb-server-init" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.743090 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1" containerName="ovsdb-server-init" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.743106 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8917d54-451e-4a56-9e8a-142bb5db17e1" containerName="rabbitmq" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.743118 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8917d54-451e-4a56-9e8a-142bb5db17e1" containerName="rabbitmq" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.743131 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d53e4c0-add2-4cfd-bbea-e0a1d3196091" containerName="barbican-api-log" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.743145 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d53e4c0-add2-4cfd-bbea-e0a1d3196091" containerName="barbican-api-log" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.743194 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a77ac0de-f396-45e6-a92c-07fbddc4ec60" containerName="galera" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.743207 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="a77ac0de-f396-45e6-a92c-07fbddc4ec60" containerName="galera" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.743221 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="rsync" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.743232 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="rsync" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.743252 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="account-server" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.743265 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="account-server" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.743285 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bf87933-483d-4608-9fab-9f0cfa9fb326" containerName="registry-server" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.743298 4789 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="0bf87933-483d-4608-9fab-9f0cfa9fb326" containerName="registry-server" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.743318 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57c9f301-615a-4182-b17e-3ae250e8335c" containerName="probe" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.743330 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="57c9f301-615a-4182-b17e-3ae250e8335c" containerName="probe" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.743357 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="account-auditor" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.743373 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="account-auditor" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.743397 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="container-server" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.743414 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="container-server" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.743431 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="container-replicator" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.743447 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="container-replicator" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.743466 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab56a6da-6187-4fa6-bd4e-93046de2d432" containerName="ovn-northd" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.743480 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab56a6da-6187-4fa6-bd4e-93046de2d432" containerName="ovn-northd" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.743502 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f86f59c-9db0-4580-a8f3-2d3fe558c905" containerName="keystone-api" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.743514 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f86f59c-9db0-4580-a8f3-2d3fe558c905" containerName="keystone-api" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.743552 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c571c3a8-8470-4076-adde-89416f071937" containerName="ovn-controller" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.743564 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="c571c3a8-8470-4076-adde-89416f071937" containerName="ovn-controller" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.743606 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a77ac0de-f396-45e6-a92c-07fbddc4ec60" containerName="mysql-bootstrap" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.743618 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="a77ac0de-f396-45e6-a92c-07fbddc4ec60" containerName="mysql-bootstrap" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.743639 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="802bda4f-2363-4ca6-a126-2ccf1448ed71" containerName="barbican-keystone-listener-log" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.743651 4789 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="802bda4f-2363-4ca6-a126-2ccf1448ed71" containerName="barbican-keystone-listener-log" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.743673 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24fb18f4-7a0f-4ae5-9104-e7dc45a479ff" containerName="glance-log" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.743685 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="24fb18f4-7a0f-4ae5-9104-e7dc45a479ff" containerName="glance-log" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.743700 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24fb18f4-7a0f-4ae5-9104-e7dc45a479ff" containerName="glance-httpd" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.743712 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="24fb18f4-7a0f-4ae5-9104-e7dc45a479ff" containerName="glance-httpd" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.743730 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b" containerName="nova-cell0-conductor-conductor" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.743743 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b" containerName="nova-cell0-conductor-conductor" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.743774 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bf87933-483d-4608-9fab-9f0cfa9fb326" containerName="extract-content" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.743787 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bf87933-483d-4608-9fab-9f0cfa9fb326" containerName="extract-content" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.743804 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bb81567-8536-4275-ab0e-a003ef904230" containerName="glance-httpd" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.743816 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bb81567-8536-4275-ab0e-a003ef904230" containerName="glance-httpd" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.743829 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0ceeffe-1326-4d2d-ab85-dbc02869bee1" containerName="nova-scheduler-scheduler" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.743841 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0ceeffe-1326-4d2d-ab85-dbc02869bee1" containerName="nova-scheduler-scheduler" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.743861 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7acbb536-0a08-4132-a84a-848735b0e7f4" containerName="cinder-api" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.743873 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="7acbb536-0a08-4132-a84a-848735b0e7f4" containerName="cinder-api" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.743889 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57c9f301-615a-4182-b17e-3ae250e8335c" containerName="cinder-scheduler" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.743902 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="57c9f301-615a-4182-b17e-3ae250e8335c" containerName="cinder-scheduler" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.743919 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b579f7f4-db1f-4d76-82fb-ef4cad438842" containerName="ceilometer-central-agent" Feb 02 21:44:09 crc 
kubenswrapper[4789]: I0202 21:44:09.743931 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="b579f7f4-db1f-4d76-82fb-ef4cad438842" containerName="ceilometer-central-agent" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.743944 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78b23a1f-cc85-4767-b19c-6069adfc735a" containerName="neutron-api" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.743956 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="78b23a1f-cc85-4767-b19c-6069adfc735a" containerName="neutron-api" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.743980 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ae097e7-380b-4044-8598-abc3e1059356" containerName="nova-api-log" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.743992 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ae097e7-380b-4044-8598-abc3e1059356" containerName="nova-api-log" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.744014 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ba13473-b423-43a0-ab15-9d6be616cc7b" containerName="nova-metadata-metadata" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.744025 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ba13473-b423-43a0-ab15-9d6be616cc7b" containerName="nova-metadata-metadata" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.744046 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="object-auditor" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.744058 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="object-auditor" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.744081 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ae097e7-380b-4044-8598-abc3e1059356" containerName="nova-api-api" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.744092 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ae097e7-380b-4044-8598-abc3e1059356" containerName="nova-api-api" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.744107 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="212c4e72-7988-4770-ba07-ae0362baac7e" containerName="kube-state-metrics" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.744121 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="212c4e72-7988-4770-ba07-ae0362baac7e" containerName="kube-state-metrics" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.744156 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="078a8abb-3926-40cd-9340-0bef088c130f" containerName="memcached" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.744168 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="078a8abb-3926-40cd-9340-0bef088c130f" containerName="memcached" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.744191 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ba13473-b423-43a0-ab15-9d6be616cc7b" containerName="nova-metadata-log" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.744203 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ba13473-b423-43a0-ab15-9d6be616cc7b" containerName="nova-metadata-log" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.744226 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="object-expirer" Feb 
02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.744237 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="object-expirer" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.744252 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="399d9417-2065-4e92-89c5-a04dbeaf2cca" containerName="nova-cell1-conductor-conductor" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.744265 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="399d9417-2065-4e92-89c5-a04dbeaf2cca" containerName="nova-cell1-conductor-conductor" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.744284 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4db4b23-dae0-42a5-ad47-3336073d0b6a" containerName="setup-container" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.744296 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4db4b23-dae0-42a5-ad47-3336073d0b6a" containerName="setup-container" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.744316 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b579f7f4-db1f-4d76-82fb-ef4cad438842" containerName="proxy-httpd" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.744328 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="b579f7f4-db1f-4d76-82fb-ef4cad438842" containerName="proxy-httpd" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.744347 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1" containerName="ovs-vswitchd" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.744358 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1" containerName="ovs-vswitchd" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.744382 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1" containerName="barbican-worker" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.744394 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1" containerName="barbican-worker" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.744414 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78b23a1f-cc85-4767-b19c-6069adfc735a" containerName="neutron-httpd" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.744426 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="78b23a1f-cc85-4767-b19c-6069adfc735a" containerName="neutron-httpd" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.744442 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="account-reaper" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.744453 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="account-reaper" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.744472 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="account-replicator" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.744486 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="account-replicator" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.744507 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="802bda4f-2363-4ca6-a126-2ccf1448ed71" 
containerName="barbican-keystone-listener" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.744519 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="802bda4f-2363-4ca6-a126-2ccf1448ed71" containerName="barbican-keystone-listener" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.744541 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1" containerName="ovsdb-server" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.744552 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1" containerName="ovsdb-server" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.744568 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4db4b23-dae0-42a5-ad47-3336073d0b6a" containerName="rabbitmq" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.744606 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4db4b23-dae0-42a5-ad47-3336073d0b6a" containerName="rabbitmq" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.744623 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d53e4c0-add2-4cfd-bbea-e0a1d3196091" containerName="barbican-api" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.744634 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d53e4c0-add2-4cfd-bbea-e0a1d3196091" containerName="barbican-api" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.744647 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="container-updater" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.744659 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="container-updater" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.744679 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="349cede5-331c-4454-8c9c-fda8fe886f07" containerName="placement-api" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.744691 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="349cede5-331c-4454-8c9c-fda8fe886f07" containerName="placement-api" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.744710 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b579f7f4-db1f-4d76-82fb-ef4cad438842" containerName="ceilometer-notification-agent" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.744724 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="b579f7f4-db1f-4d76-82fb-ef4cad438842" containerName="ceilometer-notification-agent" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.744739 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b579f7f4-db1f-4d76-82fb-ef4cad438842" containerName="sg-core" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.744750 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="b579f7f4-db1f-4d76-82fb-ef4cad438842" containerName="sg-core" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.744764 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1" containerName="barbican-worker-log" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.744777 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1" containerName="barbican-worker-log" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.744795 4789 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="349cede5-331c-4454-8c9c-fda8fe886f07" containerName="placement-log" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.744807 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="349cede5-331c-4454-8c9c-fda8fe886f07" containerName="placement-log" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.744821 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="object-server" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.744833 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="object-server" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.744852 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7acbb536-0a08-4132-a84a-848735b0e7f4" containerName="cinder-api-log" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.744866 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="7acbb536-0a08-4132-a84a-848735b0e7f4" containerName="cinder-api-log" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.744881 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="object-updater" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.744894 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="object-updater" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.744917 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab56a6da-6187-4fa6-bd4e-93046de2d432" containerName="openstack-network-exporter" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.744929 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab56a6da-6187-4fa6-bd4e-93046de2d432" containerName="openstack-network-exporter" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.744950 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bb81567-8536-4275-ab0e-a003ef904230" containerName="glance-log" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.744962 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bb81567-8536-4275-ab0e-a003ef904230" containerName="glance-log" Feb 02 21:44:09 crc kubenswrapper[4789]: E0202 21:44:09.744974 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="object-replicator" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.744986 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="object-replicator" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.745227 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab56a6da-6187-4fa6-bd4e-93046de2d432" containerName="openstack-network-exporter" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.745243 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8917d54-451e-4a56-9e8a-142bb5db17e1" containerName="rabbitmq" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.745267 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="object-updater" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.745281 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="b579f7f4-db1f-4d76-82fb-ef4cad438842" containerName="ceilometer-notification-agent" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.745295 4789 
memory_manager.go:354] "RemoveStaleState removing state" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="container-server" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.745313 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="b579f7f4-db1f-4d76-82fb-ef4cad438842" containerName="proxy-httpd" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.745330 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f86f59c-9db0-4580-a8f3-2d3fe558c905" containerName="keystone-api" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.745346 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="object-replicator" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.745369 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="object-auditor" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.745538 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="24fb18f4-7a0f-4ae5-9104-e7dc45a479ff" containerName="glance-httpd" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.745564 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d53e4c0-add2-4cfd-bbea-e0a1d3196091" containerName="barbican-api" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.748369 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="account-auditor" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.748423 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1" containerName="ovsdb-server" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.748453 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="container-replicator" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.748473 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="object-expirer" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.748493 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4db4b23-dae0-42a5-ad47-3336073d0b6a" containerName="rabbitmq" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.748509 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="78b23a1f-cc85-4767-b19c-6069adfc735a" containerName="neutron-api" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.748531 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="account-server" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.748551 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="a77ac0de-f396-45e6-a92c-07fbddc4ec60" containerName="galera" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.748570 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="container-auditor" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.748626 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="swift-recon-cron" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.748641 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa36ad4d-6c5e-4dd5-a7a3-34e5dddfba8b" 
containerName="nova-cell0-conductor-conductor" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.748654 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="078a8abb-3926-40cd-9340-0bef088c130f" containerName="memcached" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.748672 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="57c9f301-615a-4182-b17e-3ae250e8335c" containerName="probe" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.748688 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="349cede5-331c-4454-8c9c-fda8fe886f07" containerName="placement-log" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.748708 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="57c9f301-615a-4182-b17e-3ae250e8335c" containerName="cinder-scheduler" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.748722 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bb81567-8536-4275-ab0e-a003ef904230" containerName="glance-log" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.748743 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="78b23a1f-cc85-4767-b19c-6069adfc735a" containerName="neutron-httpd" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.748760 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="212c4e72-7988-4770-ba07-ae0362baac7e" containerName="kube-state-metrics" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.748779 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="account-replicator" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.748802 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d53e4c0-add2-4cfd-bbea-e0a1d3196091" containerName="barbican-api-log" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.748822 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bb81567-8536-4275-ab0e-a003ef904230" containerName="glance-httpd" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.748842 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ba13473-b423-43a0-ab15-9d6be616cc7b" containerName="nova-metadata-log" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.748857 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="c571c3a8-8470-4076-adde-89416f071937" containerName="ovn-controller" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.748872 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="802bda4f-2363-4ca6-a126-2ccf1448ed71" containerName="barbican-keystone-listener" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.748894 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="24fb18f4-7a0f-4ae5-9104-e7dc45a479ff" containerName="glance-log" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.748906 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="399d9417-2065-4e92-89c5-a04dbeaf2cca" containerName="nova-cell1-conductor-conductor" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.748923 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="container-updater" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.748946 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bf87933-483d-4608-9fab-9f0cfa9fb326" containerName="registry-server" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 
21:44:09.748973 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="349cede5-331c-4454-8c9c-fda8fe886f07" containerName="placement-api" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.748991 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1" containerName="barbican-worker-log" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.749018 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0ceeffe-1326-4d2d-ab85-dbc02869bee1" containerName="nova-scheduler-scheduler" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.749034 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="802bda4f-2363-4ca6-a126-2ccf1448ed71" containerName="barbican-keystone-listener-log" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.749052 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c3c85c8-d5b9-48f3-9408-0d9693d0cbf1" containerName="barbican-worker" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.749065 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="account-reaper" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.749083 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ae097e7-380b-4044-8598-abc3e1059356" containerName="nova-api-api" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.749099 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="rsync" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.749110 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="b579f7f4-db1f-4d76-82fb-ef4cad438842" containerName="sg-core" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.749130 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ad9b78e-5f3a-42f0-a462-c6b0bf0c4ec1" containerName="ovs-vswitchd" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.749149 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="7acbb536-0a08-4132-a84a-848735b0e7f4" containerName="cinder-api" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.749167 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="7acbb536-0a08-4132-a84a-848735b0e7f4" containerName="cinder-api-log" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.749181 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ba13473-b423-43a0-ab15-9d6be616cc7b" containerName="nova-metadata-metadata" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.749201 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f6bccb-d5fc-4868-aca2-734d16898805" containerName="object-server" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.749218 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="b579f7f4-db1f-4d76-82fb-ef4cad438842" containerName="ceilometer-central-agent" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.749238 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab56a6da-6187-4fa6-bd4e-93046de2d432" containerName="ovn-northd" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.749253 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ae097e7-380b-4044-8598-abc3e1059356" containerName="nova-api-log" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.751215 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8mb6n" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.760033 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8mb6n"] Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.833909 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/944faa05-ae97-458f-be48-99e643a7fd9f-catalog-content\") pod \"redhat-marketplace-8mb6n\" (UID: \"944faa05-ae97-458f-be48-99e643a7fd9f\") " pod="openshift-marketplace/redhat-marketplace-8mb6n" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.834000 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbrmj\" (UniqueName: \"kubernetes.io/projected/944faa05-ae97-458f-be48-99e643a7fd9f-kube-api-access-jbrmj\") pod \"redhat-marketplace-8mb6n\" (UID: \"944faa05-ae97-458f-be48-99e643a7fd9f\") " pod="openshift-marketplace/redhat-marketplace-8mb6n" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.834266 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/944faa05-ae97-458f-be48-99e643a7fd9f-utilities\") pod \"redhat-marketplace-8mb6n\" (UID: \"944faa05-ae97-458f-be48-99e643a7fd9f\") " pod="openshift-marketplace/redhat-marketplace-8mb6n" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.935610 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbrmj\" (UniqueName: \"kubernetes.io/projected/944faa05-ae97-458f-be48-99e643a7fd9f-kube-api-access-jbrmj\") pod \"redhat-marketplace-8mb6n\" (UID: \"944faa05-ae97-458f-be48-99e643a7fd9f\") " pod="openshift-marketplace/redhat-marketplace-8mb6n" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.935758 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/944faa05-ae97-458f-be48-99e643a7fd9f-utilities\") pod \"redhat-marketplace-8mb6n\" (UID: \"944faa05-ae97-458f-be48-99e643a7fd9f\") " pod="openshift-marketplace/redhat-marketplace-8mb6n" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.935880 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/944faa05-ae97-458f-be48-99e643a7fd9f-catalog-content\") pod \"redhat-marketplace-8mb6n\" (UID: \"944faa05-ae97-458f-be48-99e643a7fd9f\") " pod="openshift-marketplace/redhat-marketplace-8mb6n" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.936519 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/944faa05-ae97-458f-be48-99e643a7fd9f-utilities\") pod \"redhat-marketplace-8mb6n\" (UID: \"944faa05-ae97-458f-be48-99e643a7fd9f\") " pod="openshift-marketplace/redhat-marketplace-8mb6n" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.936546 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/944faa05-ae97-458f-be48-99e643a7fd9f-catalog-content\") pod \"redhat-marketplace-8mb6n\" (UID: \"944faa05-ae97-458f-be48-99e643a7fd9f\") " pod="openshift-marketplace/redhat-marketplace-8mb6n" Feb 02 21:44:09 crc kubenswrapper[4789]: I0202 21:44:09.956453 4789 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-jbrmj\" (UniqueName: \"kubernetes.io/projected/944faa05-ae97-458f-be48-99e643a7fd9f-kube-api-access-jbrmj\") pod \"redhat-marketplace-8mb6n\" (UID: \"944faa05-ae97-458f-be48-99e643a7fd9f\") " pod="openshift-marketplace/redhat-marketplace-8mb6n" Feb 02 21:44:10 crc kubenswrapper[4789]: I0202 21:44:10.086703 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8mb6n" Feb 02 21:44:10 crc kubenswrapper[4789]: I0202 21:44:10.529936 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8mb6n"] Feb 02 21:44:10 crc kubenswrapper[4789]: I0202 21:44:10.603661 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8mb6n" event={"ID":"944faa05-ae97-458f-be48-99e643a7fd9f","Type":"ContainerStarted","Data":"676fc601ddb7628a1e19918f1944791524ff3154af1d818480fbb10fe8f10fef"} Feb 02 21:44:11 crc kubenswrapper[4789]: I0202 21:44:11.613975 4789 generic.go:334] "Generic (PLEG): container finished" podID="944faa05-ae97-458f-be48-99e643a7fd9f" containerID="df044a07ee1f2a43bba97aa3c842c2e40a2f69e85a03c27abd136a32e376fb4f" exitCode=0 Feb 02 21:44:11 crc kubenswrapper[4789]: I0202 21:44:11.614040 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8mb6n" event={"ID":"944faa05-ae97-458f-be48-99e643a7fd9f","Type":"ContainerDied","Data":"df044a07ee1f2a43bba97aa3c842c2e40a2f69e85a03c27abd136a32e376fb4f"} Feb 02 21:44:13 crc kubenswrapper[4789]: I0202 21:44:13.639812 4789 generic.go:334] "Generic (PLEG): container finished" podID="944faa05-ae97-458f-be48-99e643a7fd9f" containerID="83061f72f23f448105c6b4e22d1bdd5aab8799b5023dbace9d015e06f5f0dc69" exitCode=0 Feb 02 21:44:13 crc kubenswrapper[4789]: I0202 21:44:13.640018 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8mb6n" event={"ID":"944faa05-ae97-458f-be48-99e643a7fd9f","Type":"ContainerDied","Data":"83061f72f23f448105c6b4e22d1bdd5aab8799b5023dbace9d015e06f5f0dc69"} Feb 02 21:44:14 crc kubenswrapper[4789]: I0202 21:44:14.653960 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8mb6n" event={"ID":"944faa05-ae97-458f-be48-99e643a7fd9f","Type":"ContainerStarted","Data":"3faf4153acbbea3432699f23621bc0e50087f93b2feadd35060090d9ef3f7366"} Feb 02 21:44:20 crc kubenswrapper[4789]: I0202 21:44:20.087608 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8mb6n" Feb 02 21:44:20 crc kubenswrapper[4789]: I0202 21:44:20.088807 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8mb6n" Feb 02 21:44:20 crc kubenswrapper[4789]: I0202 21:44:20.171448 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8mb6n" Feb 02 21:44:20 crc kubenswrapper[4789]: I0202 21:44:20.206903 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8mb6n" podStartSLOduration=8.784460579 podStartE2EDuration="11.206872896s" podCreationTimestamp="2026-02-02 21:44:09 +0000 UTC" firstStartedPulling="2026-02-02 21:44:11.61595399 +0000 UTC m=+1471.910979009" lastFinishedPulling="2026-02-02 21:44:14.038366267 +0000 UTC m=+1474.333391326" observedRunningTime="2026-02-02 21:44:14.676694023 +0000 UTC 
m=+1474.971719072" watchObservedRunningTime="2026-02-02 21:44:20.206872896 +0000 UTC m=+1480.501897965" Feb 02 21:44:20 crc kubenswrapper[4789]: I0202 21:44:20.791773 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8mb6n" Feb 02 21:44:20 crc kubenswrapper[4789]: I0202 21:44:20.976279 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8mb6n"] Feb 02 21:44:22 crc kubenswrapper[4789]: I0202 21:44:22.731473 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8mb6n" podUID="944faa05-ae97-458f-be48-99e643a7fd9f" containerName="registry-server" containerID="cri-o://3faf4153acbbea3432699f23621bc0e50087f93b2feadd35060090d9ef3f7366" gracePeriod=2 Feb 02 21:44:22 crc kubenswrapper[4789]: I0202 21:44:22.841764 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 21:44:22 crc kubenswrapper[4789]: I0202 21:44:22.841847 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 21:44:22 crc kubenswrapper[4789]: I0202 21:44:22.841911 4789 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" Feb 02 21:44:22 crc kubenswrapper[4789]: I0202 21:44:22.842722 4789 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6f53ea5f1a80f886dfd6c88f09837b2b4d54c1a0219e9a215978594e6e78e40f"} pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 21:44:22 crc kubenswrapper[4789]: I0202 21:44:22.842848 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" containerID="cri-o://6f53ea5f1a80f886dfd6c88f09837b2b4d54c1a0219e9a215978594e6e78e40f" gracePeriod=600 Feb 02 21:44:23 crc kubenswrapper[4789]: I0202 21:44:23.221720 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8mb6n" Feb 02 21:44:23 crc kubenswrapper[4789]: I0202 21:44:23.241818 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/944faa05-ae97-458f-be48-99e643a7fd9f-catalog-content\") pod \"944faa05-ae97-458f-be48-99e643a7fd9f\" (UID: \"944faa05-ae97-458f-be48-99e643a7fd9f\") " Feb 02 21:44:23 crc kubenswrapper[4789]: I0202 21:44:23.241995 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbrmj\" (UniqueName: \"kubernetes.io/projected/944faa05-ae97-458f-be48-99e643a7fd9f-kube-api-access-jbrmj\") pod \"944faa05-ae97-458f-be48-99e643a7fd9f\" (UID: \"944faa05-ae97-458f-be48-99e643a7fd9f\") " Feb 02 21:44:23 crc kubenswrapper[4789]: I0202 21:44:23.242107 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/944faa05-ae97-458f-be48-99e643a7fd9f-utilities\") pod \"944faa05-ae97-458f-be48-99e643a7fd9f\" (UID: \"944faa05-ae97-458f-be48-99e643a7fd9f\") " Feb 02 21:44:23 crc kubenswrapper[4789]: I0202 21:44:23.243634 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/944faa05-ae97-458f-be48-99e643a7fd9f-utilities" (OuterVolumeSpecName: "utilities") pod "944faa05-ae97-458f-be48-99e643a7fd9f" (UID: "944faa05-ae97-458f-be48-99e643a7fd9f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:44:23 crc kubenswrapper[4789]: I0202 21:44:23.251450 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/944faa05-ae97-458f-be48-99e643a7fd9f-kube-api-access-jbrmj" (OuterVolumeSpecName: "kube-api-access-jbrmj") pod "944faa05-ae97-458f-be48-99e643a7fd9f" (UID: "944faa05-ae97-458f-be48-99e643a7fd9f"). InnerVolumeSpecName "kube-api-access-jbrmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:44:23 crc kubenswrapper[4789]: I0202 21:44:23.316738 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/944faa05-ae97-458f-be48-99e643a7fd9f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "944faa05-ae97-458f-be48-99e643a7fd9f" (UID: "944faa05-ae97-458f-be48-99e643a7fd9f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:44:23 crc kubenswrapper[4789]: I0202 21:44:23.343347 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/944faa05-ae97-458f-be48-99e643a7fd9f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 21:44:23 crc kubenswrapper[4789]: I0202 21:44:23.343386 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbrmj\" (UniqueName: \"kubernetes.io/projected/944faa05-ae97-458f-be48-99e643a7fd9f-kube-api-access-jbrmj\") on node \"crc\" DevicePath \"\"" Feb 02 21:44:23 crc kubenswrapper[4789]: I0202 21:44:23.343403 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/944faa05-ae97-458f-be48-99e643a7fd9f-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 21:44:23 crc kubenswrapper[4789]: I0202 21:44:23.742384 4789 generic.go:334] "Generic (PLEG): container finished" podID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerID="6f53ea5f1a80f886dfd6c88f09837b2b4d54c1a0219e9a215978594e6e78e40f" exitCode=0 Feb 02 21:44:23 crc kubenswrapper[4789]: I0202 21:44:23.742446 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" event={"ID":"bdf018b4-1451-4d37-be6e-05802b67c73e","Type":"ContainerDied","Data":"6f53ea5f1a80f886dfd6c88f09837b2b4d54c1a0219e9a215978594e6e78e40f"} Feb 02 21:44:23 crc kubenswrapper[4789]: I0202 21:44:23.742625 4789 scope.go:117] "RemoveContainer" containerID="58201de0dc796bafdb3ebb503e9bfcd61c6265506eb41819ac59515674816d43" Feb 02 21:44:23 crc kubenswrapper[4789]: I0202 21:44:23.743020 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" event={"ID":"bdf018b4-1451-4d37-be6e-05802b67c73e","Type":"ContainerStarted","Data":"de6aa73a1267c130655031ef3c4dd7e6bb34d4ce8d75bf21c205ad83e1223e18"} Feb 02 21:44:23 crc kubenswrapper[4789]: I0202 21:44:23.748487 4789 generic.go:334] "Generic (PLEG): container finished" podID="944faa05-ae97-458f-be48-99e643a7fd9f" containerID="3faf4153acbbea3432699f23621bc0e50087f93b2feadd35060090d9ef3f7366" exitCode=0 Feb 02 21:44:23 crc kubenswrapper[4789]: I0202 21:44:23.748522 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8mb6n" Feb 02 21:44:23 crc kubenswrapper[4789]: I0202 21:44:23.748539 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8mb6n" event={"ID":"944faa05-ae97-458f-be48-99e643a7fd9f","Type":"ContainerDied","Data":"3faf4153acbbea3432699f23621bc0e50087f93b2feadd35060090d9ef3f7366"} Feb 02 21:44:23 crc kubenswrapper[4789]: I0202 21:44:23.748664 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8mb6n" event={"ID":"944faa05-ae97-458f-be48-99e643a7fd9f","Type":"ContainerDied","Data":"676fc601ddb7628a1e19918f1944791524ff3154af1d818480fbb10fe8f10fef"} Feb 02 21:44:23 crc kubenswrapper[4789]: I0202 21:44:23.787417 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8mb6n"] Feb 02 21:44:23 crc kubenswrapper[4789]: I0202 21:44:23.795341 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8mb6n"] Feb 02 21:44:23 crc kubenswrapper[4789]: I0202 21:44:23.829111 4789 scope.go:117] "RemoveContainer" containerID="3faf4153acbbea3432699f23621bc0e50087f93b2feadd35060090d9ef3f7366" Feb 02 21:44:23 crc kubenswrapper[4789]: I0202 21:44:23.849106 4789 scope.go:117] "RemoveContainer" containerID="83061f72f23f448105c6b4e22d1bdd5aab8799b5023dbace9d015e06f5f0dc69" Feb 02 21:44:23 crc kubenswrapper[4789]: I0202 21:44:23.865538 4789 scope.go:117] "RemoveContainer" containerID="df044a07ee1f2a43bba97aa3c842c2e40a2f69e85a03c27abd136a32e376fb4f" Feb 02 21:44:23 crc kubenswrapper[4789]: I0202 21:44:23.904163 4789 scope.go:117] "RemoveContainer" containerID="3faf4153acbbea3432699f23621bc0e50087f93b2feadd35060090d9ef3f7366" Feb 02 21:44:23 crc kubenswrapper[4789]: E0202 21:44:23.904644 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3faf4153acbbea3432699f23621bc0e50087f93b2feadd35060090d9ef3f7366\": container with ID starting with 3faf4153acbbea3432699f23621bc0e50087f93b2feadd35060090d9ef3f7366 not found: ID does not exist" containerID="3faf4153acbbea3432699f23621bc0e50087f93b2feadd35060090d9ef3f7366" Feb 02 21:44:23 crc kubenswrapper[4789]: I0202 21:44:23.904692 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3faf4153acbbea3432699f23621bc0e50087f93b2feadd35060090d9ef3f7366"} err="failed to get container status \"3faf4153acbbea3432699f23621bc0e50087f93b2feadd35060090d9ef3f7366\": rpc error: code = NotFound desc = could not find container \"3faf4153acbbea3432699f23621bc0e50087f93b2feadd35060090d9ef3f7366\": container with ID starting with 3faf4153acbbea3432699f23621bc0e50087f93b2feadd35060090d9ef3f7366 not found: ID does not exist" Feb 02 21:44:23 crc kubenswrapper[4789]: I0202 21:44:23.904722 4789 scope.go:117] "RemoveContainer" containerID="83061f72f23f448105c6b4e22d1bdd5aab8799b5023dbace9d015e06f5f0dc69" Feb 02 21:44:23 crc kubenswrapper[4789]: E0202 21:44:23.905168 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83061f72f23f448105c6b4e22d1bdd5aab8799b5023dbace9d015e06f5f0dc69\": container with ID starting with 83061f72f23f448105c6b4e22d1bdd5aab8799b5023dbace9d015e06f5f0dc69 not found: ID does not exist" containerID="83061f72f23f448105c6b4e22d1bdd5aab8799b5023dbace9d015e06f5f0dc69" Feb 02 21:44:23 crc kubenswrapper[4789]: I0202 21:44:23.905205 4789 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83061f72f23f448105c6b4e22d1bdd5aab8799b5023dbace9d015e06f5f0dc69"} err="failed to get container status \"83061f72f23f448105c6b4e22d1bdd5aab8799b5023dbace9d015e06f5f0dc69\": rpc error: code = NotFound desc = could not find container \"83061f72f23f448105c6b4e22d1bdd5aab8799b5023dbace9d015e06f5f0dc69\": container with ID starting with 83061f72f23f448105c6b4e22d1bdd5aab8799b5023dbace9d015e06f5f0dc69 not found: ID does not exist" Feb 02 21:44:23 crc kubenswrapper[4789]: I0202 21:44:23.905237 4789 scope.go:117] "RemoveContainer" containerID="df044a07ee1f2a43bba97aa3c842c2e40a2f69e85a03c27abd136a32e376fb4f" Feb 02 21:44:23 crc kubenswrapper[4789]: E0202 21:44:23.905512 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df044a07ee1f2a43bba97aa3c842c2e40a2f69e85a03c27abd136a32e376fb4f\": container with ID starting with df044a07ee1f2a43bba97aa3c842c2e40a2f69e85a03c27abd136a32e376fb4f not found: ID does not exist" containerID="df044a07ee1f2a43bba97aa3c842c2e40a2f69e85a03c27abd136a32e376fb4f" Feb 02 21:44:23 crc kubenswrapper[4789]: I0202 21:44:23.905559 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df044a07ee1f2a43bba97aa3c842c2e40a2f69e85a03c27abd136a32e376fb4f"} err="failed to get container status \"df044a07ee1f2a43bba97aa3c842c2e40a2f69e85a03c27abd136a32e376fb4f\": rpc error: code = NotFound desc = could not find container \"df044a07ee1f2a43bba97aa3c842c2e40a2f69e85a03c27abd136a32e376fb4f\": container with ID starting with df044a07ee1f2a43bba97aa3c842c2e40a2f69e85a03c27abd136a32e376fb4f not found: ID does not exist" Feb 02 21:44:24 crc kubenswrapper[4789]: I0202 21:44:24.437866 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="944faa05-ae97-458f-be48-99e643a7fd9f" path="/var/lib/kubelet/pods/944faa05-ae97-458f-be48-99e643a7fd9f/volumes" Feb 02 21:44:43 crc kubenswrapper[4789]: I0202 21:44:43.266711 4789 scope.go:117] "RemoveContainer" containerID="c302d40717f0c425b6e65f87b401026a5061ab6e38b1f75577a83208d8771c00" Feb 02 21:44:43 crc kubenswrapper[4789]: I0202 21:44:43.297982 4789 scope.go:117] "RemoveContainer" containerID="aa30436da3f9aef978f1a1d46087f72ad354d7c56731802c59d834777a580e9a" Feb 02 21:44:43 crc kubenswrapper[4789]: I0202 21:44:43.327327 4789 scope.go:117] "RemoveContainer" containerID="ecfa06e359801169bdd06bd88548fc6c7999a73aea8eb2d73c459b8201ac6223" Feb 02 21:44:43 crc kubenswrapper[4789]: I0202 21:44:43.362626 4789 scope.go:117] "RemoveContainer" containerID="47bf2be39b5dad7a475d59f7c4179d5a96aa340d7f1de5e0bc249d3d76a4661b" Feb 02 21:44:43 crc kubenswrapper[4789]: I0202 21:44:43.386885 4789 scope.go:117] "RemoveContainer" containerID="6726f0ab9af33468e45fe77530ced8a0b271c97eb7762c9a2fb78e8d8c2d78d1" Feb 02 21:44:43 crc kubenswrapper[4789]: I0202 21:44:43.421261 4789 scope.go:117] "RemoveContainer" containerID="4117429bd46f62e85af19b47cebd37c852b76015feb8cc2c979245ca7a597def" Feb 02 21:44:43 crc kubenswrapper[4789]: I0202 21:44:43.453403 4789 scope.go:117] "RemoveContainer" containerID="99815c9ab4c4a392d428b682bb8183bd80d7ee78da0ca0fe649f88834caeb4a5" Feb 02 21:44:43 crc kubenswrapper[4789]: I0202 21:44:43.485284 4789 scope.go:117] "RemoveContainer" containerID="ac99a70faf619168f2f6dbb6cfc2aa89482ec9b6ff1eab45b5685ea95ef9ca8e" Feb 02 21:44:43 crc kubenswrapper[4789]: I0202 21:44:43.505853 4789 scope.go:117] "RemoveContainer" 
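Editorial note: the "RemoveContainer" / "DeleteContainer returned error ... NotFound" pairs above are a benign race, not a real failure: by the time the kubelet retries the delete, CRI-O has already removed the container, so the status lookup fails with NotFound. A minimal Go sketch of how one might scan this log (assuming one entry per line, as reflowed above) and confirm each NotFound follows an earlier RemoveContainer for the same ID; the file name and regexes are illustrative assumptions, not kubelet code:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var (
	// matches: "RemoveContainer" containerID="<64 hex chars>"
	removeRe = regexp.MustCompile(`"RemoveContainer" containerID="([0-9a-f]{64})"`)
	// matches: "DeleteContainer returned error" ... "ID":"<64 hex chars>" ... not found
	notFoundRe = regexp.MustCompile(`"DeleteContainer returned error".*"ID":"([0-9a-f]{64})".*not found`)
)

func main() {
	f, err := os.Open("kubelet.log") // assumed path to this log
	if err != nil {
		panic(err)
	}
	defer f.Close()

	requested := map[string]bool{} // container IDs with a RemoveContainer request
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // kubelet entries can be very long
	for sc.Scan() {
		line := sc.Text()
		if m := removeRe.FindStringSubmatch(line); m != nil {
			requested[m[1]] = true
		} else if m := notFoundRe.FindStringSubmatch(line); m != nil && requested[m[1]] {
			fmt.Printf("benign NotFound on delete retry for %s...\n", m[1][:12])
		}
	}
	if err := sc.Err(); err != nil {
		panic(err)
	}
}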
containerID="2d756f003e9b75af5444a466f5fd1cebbbb53bbbc68e11562d8bf580bc3c9dae" Feb 02 21:44:43 crc kubenswrapper[4789]: I0202 21:44:43.541794 4789 scope.go:117] "RemoveContainer" containerID="f560b261973fe579074cd34bbc1721935671051919e539fb9ad4d3adc8d8a597" Feb 02 21:45:00 crc kubenswrapper[4789]: I0202 21:45:00.186228 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501145-wht77"] Feb 02 21:45:00 crc kubenswrapper[4789]: E0202 21:45:00.187473 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="944faa05-ae97-458f-be48-99e643a7fd9f" containerName="extract-utilities" Feb 02 21:45:00 crc kubenswrapper[4789]: I0202 21:45:00.187493 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="944faa05-ae97-458f-be48-99e643a7fd9f" containerName="extract-utilities" Feb 02 21:45:00 crc kubenswrapper[4789]: E0202 21:45:00.187514 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="944faa05-ae97-458f-be48-99e643a7fd9f" containerName="extract-content" Feb 02 21:45:00 crc kubenswrapper[4789]: I0202 21:45:00.187523 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="944faa05-ae97-458f-be48-99e643a7fd9f" containerName="extract-content" Feb 02 21:45:00 crc kubenswrapper[4789]: E0202 21:45:00.187549 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="944faa05-ae97-458f-be48-99e643a7fd9f" containerName="registry-server" Feb 02 21:45:00 crc kubenswrapper[4789]: I0202 21:45:00.187556 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="944faa05-ae97-458f-be48-99e643a7fd9f" containerName="registry-server" Feb 02 21:45:00 crc kubenswrapper[4789]: I0202 21:45:00.187741 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="944faa05-ae97-458f-be48-99e643a7fd9f" containerName="registry-server" Feb 02 21:45:00 crc kubenswrapper[4789]: I0202 21:45:00.188355 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501145-wht77" Feb 02 21:45:00 crc kubenswrapper[4789]: I0202 21:45:00.191461 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 21:45:00 crc kubenswrapper[4789]: I0202 21:45:00.191904 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 21:45:00 crc kubenswrapper[4789]: I0202 21:45:00.214676 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501145-wht77"] Feb 02 21:45:00 crc kubenswrapper[4789]: I0202 21:45:00.327897 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/603e47a2-9b11-440a-92cb-4e869da257cf-secret-volume\") pod \"collect-profiles-29501145-wht77\" (UID: \"603e47a2-9b11-440a-92cb-4e869da257cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501145-wht77" Feb 02 21:45:00 crc kubenswrapper[4789]: I0202 21:45:00.327966 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/603e47a2-9b11-440a-92cb-4e869da257cf-config-volume\") pod \"collect-profiles-29501145-wht77\" (UID: \"603e47a2-9b11-440a-92cb-4e869da257cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501145-wht77" Feb 02 21:45:00 crc kubenswrapper[4789]: I0202 21:45:00.328005 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6b76\" (UniqueName: \"kubernetes.io/projected/603e47a2-9b11-440a-92cb-4e869da257cf-kube-api-access-t6b76\") pod \"collect-profiles-29501145-wht77\" (UID: \"603e47a2-9b11-440a-92cb-4e869da257cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501145-wht77" Feb 02 21:45:00 crc kubenswrapper[4789]: I0202 21:45:00.430663 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/603e47a2-9b11-440a-92cb-4e869da257cf-secret-volume\") pod \"collect-profiles-29501145-wht77\" (UID: \"603e47a2-9b11-440a-92cb-4e869da257cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501145-wht77" Feb 02 21:45:00 crc kubenswrapper[4789]: I0202 21:45:00.430743 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/603e47a2-9b11-440a-92cb-4e869da257cf-config-volume\") pod \"collect-profiles-29501145-wht77\" (UID: \"603e47a2-9b11-440a-92cb-4e869da257cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501145-wht77" Feb 02 21:45:00 crc kubenswrapper[4789]: I0202 21:45:00.430797 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6b76\" (UniqueName: \"kubernetes.io/projected/603e47a2-9b11-440a-92cb-4e869da257cf-kube-api-access-t6b76\") pod \"collect-profiles-29501145-wht77\" (UID: \"603e47a2-9b11-440a-92cb-4e869da257cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501145-wht77" Feb 02 21:45:00 crc kubenswrapper[4789]: I0202 21:45:00.432404 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/603e47a2-9b11-440a-92cb-4e869da257cf-config-volume\") pod 
\"collect-profiles-29501145-wht77\" (UID: \"603e47a2-9b11-440a-92cb-4e869da257cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501145-wht77" Feb 02 21:45:00 crc kubenswrapper[4789]: I0202 21:45:00.442390 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/603e47a2-9b11-440a-92cb-4e869da257cf-secret-volume\") pod \"collect-profiles-29501145-wht77\" (UID: \"603e47a2-9b11-440a-92cb-4e869da257cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501145-wht77" Feb 02 21:45:00 crc kubenswrapper[4789]: I0202 21:45:00.455732 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6b76\" (UniqueName: \"kubernetes.io/projected/603e47a2-9b11-440a-92cb-4e869da257cf-kube-api-access-t6b76\") pod \"collect-profiles-29501145-wht77\" (UID: \"603e47a2-9b11-440a-92cb-4e869da257cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501145-wht77" Feb 02 21:45:00 crc kubenswrapper[4789]: I0202 21:45:00.512540 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501145-wht77" Feb 02 21:45:00 crc kubenswrapper[4789]: I0202 21:45:00.978180 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501145-wht77"] Feb 02 21:45:01 crc kubenswrapper[4789]: I0202 21:45:01.189740 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501145-wht77" event={"ID":"603e47a2-9b11-440a-92cb-4e869da257cf","Type":"ContainerStarted","Data":"f170f83f360a287e662f5da2e77de61b5529a442188a7800b6a935241f283768"} Feb 02 21:45:01 crc kubenswrapper[4789]: I0202 21:45:01.190136 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501145-wht77" event={"ID":"603e47a2-9b11-440a-92cb-4e869da257cf","Type":"ContainerStarted","Data":"ed0a197d82aed01bfe097f7a667644b81bf3c12cffafb69a0732f67237553144"} Feb 02 21:45:01 crc kubenswrapper[4789]: I0202 21:45:01.218868 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29501145-wht77" podStartSLOduration=1.218846613 podStartE2EDuration="1.218846613s" podCreationTimestamp="2026-02-02 21:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 21:45:01.210819325 +0000 UTC m=+1521.505844354" watchObservedRunningTime="2026-02-02 21:45:01.218846613 +0000 UTC m=+1521.513871632" Feb 02 21:45:02 crc kubenswrapper[4789]: I0202 21:45:02.200448 4789 generic.go:334] "Generic (PLEG): container finished" podID="603e47a2-9b11-440a-92cb-4e869da257cf" containerID="f170f83f360a287e662f5da2e77de61b5529a442188a7800b6a935241f283768" exitCode=0 Feb 02 21:45:02 crc kubenswrapper[4789]: I0202 21:45:02.200491 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501145-wht77" event={"ID":"603e47a2-9b11-440a-92cb-4e869da257cf","Type":"ContainerDied","Data":"f170f83f360a287e662f5da2e77de61b5529a442188a7800b6a935241f283768"} Feb 02 21:45:03 crc kubenswrapper[4789]: I0202 21:45:03.561887 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501145-wht77" Feb 02 21:45:03 crc kubenswrapper[4789]: I0202 21:45:03.690415 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6b76\" (UniqueName: \"kubernetes.io/projected/603e47a2-9b11-440a-92cb-4e869da257cf-kube-api-access-t6b76\") pod \"603e47a2-9b11-440a-92cb-4e869da257cf\" (UID: \"603e47a2-9b11-440a-92cb-4e869da257cf\") " Feb 02 21:45:03 crc kubenswrapper[4789]: I0202 21:45:03.690487 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/603e47a2-9b11-440a-92cb-4e869da257cf-secret-volume\") pod \"603e47a2-9b11-440a-92cb-4e869da257cf\" (UID: \"603e47a2-9b11-440a-92cb-4e869da257cf\") " Feb 02 21:45:03 crc kubenswrapper[4789]: I0202 21:45:03.690681 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/603e47a2-9b11-440a-92cb-4e869da257cf-config-volume\") pod \"603e47a2-9b11-440a-92cb-4e869da257cf\" (UID: \"603e47a2-9b11-440a-92cb-4e869da257cf\") " Feb 02 21:45:03 crc kubenswrapper[4789]: I0202 21:45:03.691457 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/603e47a2-9b11-440a-92cb-4e869da257cf-config-volume" (OuterVolumeSpecName: "config-volume") pod "603e47a2-9b11-440a-92cb-4e869da257cf" (UID: "603e47a2-9b11-440a-92cb-4e869da257cf"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 21:45:03 crc kubenswrapper[4789]: I0202 21:45:03.695759 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/603e47a2-9b11-440a-92cb-4e869da257cf-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "603e47a2-9b11-440a-92cb-4e869da257cf" (UID: "603e47a2-9b11-440a-92cb-4e869da257cf"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 21:45:03 crc kubenswrapper[4789]: I0202 21:45:03.695872 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/603e47a2-9b11-440a-92cb-4e869da257cf-kube-api-access-t6b76" (OuterVolumeSpecName: "kube-api-access-t6b76") pod "603e47a2-9b11-440a-92cb-4e869da257cf" (UID: "603e47a2-9b11-440a-92cb-4e869da257cf"). InnerVolumeSpecName "kube-api-access-t6b76". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:45:03 crc kubenswrapper[4789]: I0202 21:45:03.792023 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6b76\" (UniqueName: \"kubernetes.io/projected/603e47a2-9b11-440a-92cb-4e869da257cf-kube-api-access-t6b76\") on node \"crc\" DevicePath \"\"" Feb 02 21:45:03 crc kubenswrapper[4789]: I0202 21:45:03.792063 4789 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/603e47a2-9b11-440a-92cb-4e869da257cf-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 21:45:03 crc kubenswrapper[4789]: I0202 21:45:03.792077 4789 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/603e47a2-9b11-440a-92cb-4e869da257cf-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 21:45:04 crc kubenswrapper[4789]: I0202 21:45:04.221877 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501145-wht77" event={"ID":"603e47a2-9b11-440a-92cb-4e869da257cf","Type":"ContainerDied","Data":"ed0a197d82aed01bfe097f7a667644b81bf3c12cffafb69a0732f67237553144"} Feb 02 21:45:04 crc kubenswrapper[4789]: I0202 21:45:04.222278 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed0a197d82aed01bfe097f7a667644b81bf3c12cffafb69a0732f67237553144" Feb 02 21:45:04 crc kubenswrapper[4789]: I0202 21:45:04.221938 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501145-wht77" Feb 02 21:45:41 crc kubenswrapper[4789]: I0202 21:45:41.241478 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mw5n7"] Feb 02 21:45:41 crc kubenswrapper[4789]: E0202 21:45:41.242497 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="603e47a2-9b11-440a-92cb-4e869da257cf" containerName="collect-profiles" Feb 02 21:45:41 crc kubenswrapper[4789]: I0202 21:45:41.242512 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="603e47a2-9b11-440a-92cb-4e869da257cf" containerName="collect-profiles" Feb 02 21:45:41 crc kubenswrapper[4789]: I0202 21:45:41.242816 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="603e47a2-9b11-440a-92cb-4e869da257cf" containerName="collect-profiles" Feb 02 21:45:41 crc kubenswrapper[4789]: I0202 21:45:41.243997 4789 util.go:30] "No sandbox for pod can be found. 
Feb 02 21:45:41 crc kubenswrapper[4789]: I0202 21:45:41.267531 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mw5n7"]
Feb 02 21:45:41 crc kubenswrapper[4789]: I0202 21:45:41.367518 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/864ec1f2-4bb0-49f2-af83-95bf497a039c-catalog-content\") pod \"certified-operators-mw5n7\" (UID: \"864ec1f2-4bb0-49f2-af83-95bf497a039c\") " pod="openshift-marketplace/certified-operators-mw5n7"
Feb 02 21:45:41 crc kubenswrapper[4789]: I0202 21:45:41.367566 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5rst\" (UniqueName: \"kubernetes.io/projected/864ec1f2-4bb0-49f2-af83-95bf497a039c-kube-api-access-w5rst\") pod \"certified-operators-mw5n7\" (UID: \"864ec1f2-4bb0-49f2-af83-95bf497a039c\") " pod="openshift-marketplace/certified-operators-mw5n7"
Feb 02 21:45:41 crc kubenswrapper[4789]: I0202 21:45:41.367616 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/864ec1f2-4bb0-49f2-af83-95bf497a039c-utilities\") pod \"certified-operators-mw5n7\" (UID: \"864ec1f2-4bb0-49f2-af83-95bf497a039c\") " pod="openshift-marketplace/certified-operators-mw5n7"
Feb 02 21:45:41 crc kubenswrapper[4789]: I0202 21:45:41.468873 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/864ec1f2-4bb0-49f2-af83-95bf497a039c-utilities\") pod \"certified-operators-mw5n7\" (UID: \"864ec1f2-4bb0-49f2-af83-95bf497a039c\") " pod="openshift-marketplace/certified-operators-mw5n7"
Feb 02 21:45:41 crc kubenswrapper[4789]: I0202 21:45:41.469073 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/864ec1f2-4bb0-49f2-af83-95bf497a039c-catalog-content\") pod \"certified-operators-mw5n7\" (UID: \"864ec1f2-4bb0-49f2-af83-95bf497a039c\") " pod="openshift-marketplace/certified-operators-mw5n7"
Feb 02 21:45:41 crc kubenswrapper[4789]: I0202 21:45:41.469108 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5rst\" (UniqueName: \"kubernetes.io/projected/864ec1f2-4bb0-49f2-af83-95bf497a039c-kube-api-access-w5rst\") pod \"certified-operators-mw5n7\" (UID: \"864ec1f2-4bb0-49f2-af83-95bf497a039c\") " pod="openshift-marketplace/certified-operators-mw5n7"
Feb 02 21:45:41 crc kubenswrapper[4789]: I0202 21:45:41.469548 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/864ec1f2-4bb0-49f2-af83-95bf497a039c-utilities\") pod \"certified-operators-mw5n7\" (UID: \"864ec1f2-4bb0-49f2-af83-95bf497a039c\") " pod="openshift-marketplace/certified-operators-mw5n7"
Feb 02 21:45:41 crc kubenswrapper[4789]: I0202 21:45:41.470029 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/864ec1f2-4bb0-49f2-af83-95bf497a039c-catalog-content\") pod \"certified-operators-mw5n7\" (UID: \"864ec1f2-4bb0-49f2-af83-95bf497a039c\") " pod="openshift-marketplace/certified-operators-mw5n7"
Feb 02 21:45:41 crc kubenswrapper[4789]: I0202 21:45:41.487851 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5rst\" (UniqueName: \"kubernetes.io/projected/864ec1f2-4bb0-49f2-af83-95bf497a039c-kube-api-access-w5rst\") pod \"certified-operators-mw5n7\" (UID: \"864ec1f2-4bb0-49f2-af83-95bf497a039c\") " pod="openshift-marketplace/certified-operators-mw5n7"
Feb 02 21:45:41 crc kubenswrapper[4789]: I0202 21:45:41.579129 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mw5n7"
Feb 02 21:45:42 crc kubenswrapper[4789]: I0202 21:45:42.077225 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mw5n7"]
Feb 02 21:45:42 crc kubenswrapper[4789]: W0202 21:45:42.089008 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod864ec1f2_4bb0_49f2_af83_95bf497a039c.slice/crio-04102efc6318bda8f46855d1abc853c963d22085cf56385bfeb986f3d9782aee WatchSource:0}: Error finding container 04102efc6318bda8f46855d1abc853c963d22085cf56385bfeb986f3d9782aee: Status 404 returned error can't find the container with id 04102efc6318bda8f46855d1abc853c963d22085cf56385bfeb986f3d9782aee
Feb 02 21:45:42 crc kubenswrapper[4789]: I0202 21:45:42.624022 4789 generic.go:334] "Generic (PLEG): container finished" podID="864ec1f2-4bb0-49f2-af83-95bf497a039c" containerID="cd44e5becb3e6bca72fa733e3e7cab4c8d53dce2a1c3e0a9dcbd5c14b85fe061" exitCode=0
Feb 02 21:45:42 crc kubenswrapper[4789]: I0202 21:45:42.624452 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mw5n7" event={"ID":"864ec1f2-4bb0-49f2-af83-95bf497a039c","Type":"ContainerDied","Data":"cd44e5becb3e6bca72fa733e3e7cab4c8d53dce2a1c3e0a9dcbd5c14b85fe061"}
Feb 02 21:45:42 crc kubenswrapper[4789]: I0202 21:45:42.624499 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mw5n7" event={"ID":"864ec1f2-4bb0-49f2-af83-95bf497a039c","Type":"ContainerStarted","Data":"04102efc6318bda8f46855d1abc853c963d22085cf56385bfeb986f3d9782aee"}
Feb 02 21:45:43 crc kubenswrapper[4789]: I0202 21:45:43.777289 4789 scope.go:117] "RemoveContainer" containerID="a02144326266b78df86578a9882f49b5733c0fda172e3dbbc76c0dc7873e9df6"
Feb 02 21:45:43 crc kubenswrapper[4789]: I0202 21:45:43.818686 4789 scope.go:117] "RemoveContainer" containerID="53541170e44fa63e8ef02609a4f413138b680f2efd72af45a332e10060e0d01f"
Feb 02 21:45:43 crc kubenswrapper[4789]: I0202 21:45:43.843823 4789 scope.go:117] "RemoveContainer" containerID="9a5ca93c4582c6514e21c95b5e463e017afe67a0e00064023f85657aa0366a24"
Feb 02 21:45:43 crc kubenswrapper[4789]: I0202 21:45:43.884702 4789 scope.go:117] "RemoveContainer" containerID="922c7100bf1da4cbcedd71c1e8161322c8c3c06483b622837923aef6ad441e2c"
Feb 02 21:45:43 crc kubenswrapper[4789]: I0202 21:45:43.927903 4789 scope.go:117] "RemoveContainer" containerID="96a08e8fb516847374b2ab373ad97fdcb73d2efa2aed29cc4309574c6e8ffd3b"
Feb 02 21:45:43 crc kubenswrapper[4789]: I0202 21:45:43.978337 4789 scope.go:117] "RemoveContainer" containerID="52c73b93b06db75c8b0ee660bb62854833f9f64f18b216c0fc8214cf979cf14a"
Feb 02 21:45:43 crc kubenswrapper[4789]: I0202 21:45:43.997791 4789 scope.go:117] "RemoveContainer" containerID="6a85531001078e7b0623762a1ac1ae36a06d2b03afa2a75de0c40ceaef0cf5ef"
Feb 02 21:45:44 crc kubenswrapper[4789]: I0202 21:45:44.034627 4789 scope.go:117] "RemoveContainer" containerID="f47ff43ff041b8e71635de9af25216b371b7e740200d70f77ff890795c3a4085"
Feb 02 21:45:44 crc kubenswrapper[4789]: I0202 21:45:44.064334 4789 scope.go:117] "RemoveContainer" containerID="2f184873817571d3c96a8961e93b31393468ea2e04736452d72e1bfb0963324c"
Feb 02 21:45:44 crc kubenswrapper[4789]: I0202 21:45:44.111627 4789 scope.go:117] "RemoveContainer" containerID="d79719fe99392b86ef4b5ecada444e79deb1ac6615f195af2e8788f2390a175f"
Feb 02 21:45:44 crc kubenswrapper[4789]: I0202 21:45:44.161034 4789 scope.go:117] "RemoveContainer" containerID="9875869b174e349daf124d55e4a4702547bf8f56e47f23da592c46f1d55a3cc2"
Feb 02 21:45:44 crc kubenswrapper[4789]: I0202 21:45:44.216509 4789 scope.go:117] "RemoveContainer" containerID="28ae4ce8143169b3cdc70332b382b5abe55ce1bfb5cfb0ab898beeaadfc9f864"
Feb 02 21:45:44 crc kubenswrapper[4789]: I0202 21:45:44.234471 4789 scope.go:117] "RemoveContainer" containerID="411e1d9c2338519d38825314451a3aee0f0e1c6639158094ec4eb5bdee347fa2"
Feb 02 21:45:44 crc kubenswrapper[4789]: I0202 21:45:44.259960 4789 scope.go:117] "RemoveContainer" containerID="c258b245ff5663d640558115f7c3d15a377cce4b941559f3e648a918e8a7b996"
Feb 02 21:45:44 crc kubenswrapper[4789]: I0202 21:45:44.275846 4789 scope.go:117] "RemoveContainer" containerID="c592964e1b36e9ddb3659f00af59f06d0458c5e38247a3a727d5a8797b62b739"
Feb 02 21:45:44 crc kubenswrapper[4789]: I0202 21:45:44.310400 4789 scope.go:117] "RemoveContainer" containerID="24049c2be1d4b8b3da9929ca79be7a24c066039c1f9c521ceac6f6fdb404fc0d"
Feb 02 21:45:44 crc kubenswrapper[4789]: I0202 21:45:44.336668 4789 scope.go:117] "RemoveContainer" containerID="bca07d9fc6526f1efc5fac2d1c5f5e2995acb5069d6fe6aaa00814d05a0d8296"
Feb 02 21:45:44 crc kubenswrapper[4789]: I0202 21:45:44.797049 4789 generic.go:334] "Generic (PLEG): container finished" podID="864ec1f2-4bb0-49f2-af83-95bf497a039c" containerID="4e706c90c8fe913f37ca5cbcf22e591ecac3c929cfaf9a6a7afea4058ce3760a" exitCode=0
Feb 02 21:45:44 crc kubenswrapper[4789]: I0202 21:45:44.797099 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mw5n7" event={"ID":"864ec1f2-4bb0-49f2-af83-95bf497a039c","Type":"ContainerDied","Data":"4e706c90c8fe913f37ca5cbcf22e591ecac3c929cfaf9a6a7afea4058ce3760a"}
Feb 02 21:45:45 crc kubenswrapper[4789]: I0202 21:45:45.807784 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mw5n7" event={"ID":"864ec1f2-4bb0-49f2-af83-95bf497a039c","Type":"ContainerStarted","Data":"c6ac4e832ac6fca027b21a48a06f6e587070bac767697ba7b282bd277f355db7"}
Feb 02 21:45:45 crc kubenswrapper[4789]: I0202 21:45:45.836224 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mw5n7" podStartSLOduration=2.099946433 podStartE2EDuration="4.836193464s" podCreationTimestamp="2026-02-02 21:45:41 +0000 UTC" firstStartedPulling="2026-02-02 21:45:42.626774145 +0000 UTC m=+1562.921799194" lastFinishedPulling="2026-02-02 21:45:45.363021166 +0000 UTC m=+1565.658046225" observedRunningTime="2026-02-02 21:45:45.83076229 +0000 UTC m=+1566.125787319" watchObservedRunningTime="2026-02-02 21:45:45.836193464 +0000 UTC m=+1566.131218533"
Feb 02 21:45:51 crc kubenswrapper[4789]: I0202 21:45:51.580135 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mw5n7"
Feb 02 21:45:51 crc kubenswrapper[4789]: I0202 21:45:51.580876 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mw5n7"
Feb 02 21:45:51 crc kubenswrapper[4789]: I0202 21:45:51.665425 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mw5n7"
Feb 02 21:45:51 crc kubenswrapper[4789]: I0202 21:45:51.955520 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mw5n7"
Feb 02 21:45:52 crc kubenswrapper[4789]: I0202 21:45:52.019554 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mw5n7"]
Feb 02 21:45:53 crc kubenswrapper[4789]: I0202 21:45:53.899711 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mw5n7" podUID="864ec1f2-4bb0-49f2-af83-95bf497a039c" containerName="registry-server" containerID="cri-o://c6ac4e832ac6fca027b21a48a06f6e587070bac767697ba7b282bd277f355db7" gracePeriod=2
Feb 02 21:45:54 crc kubenswrapper[4789]: I0202 21:45:54.424060 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mw5n7"
Feb 02 21:45:54 crc kubenswrapper[4789]: I0202 21:45:54.547783 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5rst\" (UniqueName: \"kubernetes.io/projected/864ec1f2-4bb0-49f2-af83-95bf497a039c-kube-api-access-w5rst\") pod \"864ec1f2-4bb0-49f2-af83-95bf497a039c\" (UID: \"864ec1f2-4bb0-49f2-af83-95bf497a039c\") "
Feb 02 21:45:54 crc kubenswrapper[4789]: I0202 21:45:54.547899 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/864ec1f2-4bb0-49f2-af83-95bf497a039c-utilities\") pod \"864ec1f2-4bb0-49f2-af83-95bf497a039c\" (UID: \"864ec1f2-4bb0-49f2-af83-95bf497a039c\") "
Feb 02 21:45:54 crc kubenswrapper[4789]: I0202 21:45:54.548026 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/864ec1f2-4bb0-49f2-af83-95bf497a039c-catalog-content\") pod \"864ec1f2-4bb0-49f2-af83-95bf497a039c\" (UID: \"864ec1f2-4bb0-49f2-af83-95bf497a039c\") "
Feb 02 21:45:54 crc kubenswrapper[4789]: I0202 21:45:54.550152 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/864ec1f2-4bb0-49f2-af83-95bf497a039c-utilities" (OuterVolumeSpecName: "utilities") pod "864ec1f2-4bb0-49f2-af83-95bf497a039c" (UID: "864ec1f2-4bb0-49f2-af83-95bf497a039c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 21:45:54 crc kubenswrapper[4789]: I0202 21:45:54.562871 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/864ec1f2-4bb0-49f2-af83-95bf497a039c-kube-api-access-w5rst" (OuterVolumeSpecName: "kube-api-access-w5rst") pod "864ec1f2-4bb0-49f2-af83-95bf497a039c" (UID: "864ec1f2-4bb0-49f2-af83-95bf497a039c"). InnerVolumeSpecName "kube-api-access-w5rst". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 21:45:54 crc kubenswrapper[4789]: I0202 21:45:54.644491 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/864ec1f2-4bb0-49f2-af83-95bf497a039c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "864ec1f2-4bb0-49f2-af83-95bf497a039c" (UID: "864ec1f2-4bb0-49f2-af83-95bf497a039c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 21:45:54 crc kubenswrapper[4789]: I0202 21:45:54.651296 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5rst\" (UniqueName: \"kubernetes.io/projected/864ec1f2-4bb0-49f2-af83-95bf497a039c-kube-api-access-w5rst\") on node \"crc\" DevicePath \"\""
Feb 02 21:45:54 crc kubenswrapper[4789]: I0202 21:45:54.651398 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/864ec1f2-4bb0-49f2-af83-95bf497a039c-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 21:45:54 crc kubenswrapper[4789]: I0202 21:45:54.651472 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/864ec1f2-4bb0-49f2-af83-95bf497a039c-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 21:45:54 crc kubenswrapper[4789]: I0202 21:45:54.912986 4789 generic.go:334] "Generic (PLEG): container finished" podID="864ec1f2-4bb0-49f2-af83-95bf497a039c" containerID="c6ac4e832ac6fca027b21a48a06f6e587070bac767697ba7b282bd277f355db7" exitCode=0
Feb 02 21:45:54 crc kubenswrapper[4789]: I0202 21:45:54.913066 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mw5n7" event={"ID":"864ec1f2-4bb0-49f2-af83-95bf497a039c","Type":"ContainerDied","Data":"c6ac4e832ac6fca027b21a48a06f6e587070bac767697ba7b282bd277f355db7"}
Feb 02 21:45:54 crc kubenswrapper[4789]: I0202 21:45:54.913100 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mw5n7"
Feb 02 21:45:54 crc kubenswrapper[4789]: I0202 21:45:54.913150 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mw5n7" event={"ID":"864ec1f2-4bb0-49f2-af83-95bf497a039c","Type":"ContainerDied","Data":"04102efc6318bda8f46855d1abc853c963d22085cf56385bfeb986f3d9782aee"}
Feb 02 21:45:54 crc kubenswrapper[4789]: I0202 21:45:54.913192 4789 scope.go:117] "RemoveContainer" containerID="c6ac4e832ac6fca027b21a48a06f6e587070bac767697ba7b282bd277f355db7"
Feb 02 21:45:54 crc kubenswrapper[4789]: I0202 21:45:54.948887 4789 scope.go:117] "RemoveContainer" containerID="4e706c90c8fe913f37ca5cbcf22e591ecac3c929cfaf9a6a7afea4058ce3760a"
Feb 02 21:45:54 crc kubenswrapper[4789]: I0202 21:45:54.958528 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mw5n7"]
Feb 02 21:45:54 crc kubenswrapper[4789]: I0202 21:45:54.967480 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mw5n7"]
Feb 02 21:45:54 crc kubenswrapper[4789]: I0202 21:45:54.980927 4789 scope.go:117] "RemoveContainer" containerID="cd44e5becb3e6bca72fa733e3e7cab4c8d53dce2a1c3e0a9dcbd5c14b85fe061"
Feb 02 21:45:55 crc kubenswrapper[4789]: I0202 21:45:55.017755 4789 scope.go:117] "RemoveContainer" containerID="c6ac4e832ac6fca027b21a48a06f6e587070bac767697ba7b282bd277f355db7"
Feb 02 21:45:55 crc kubenswrapper[4789]: E0202 21:45:55.018530 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6ac4e832ac6fca027b21a48a06f6e587070bac767697ba7b282bd277f355db7\": container with ID starting with c6ac4e832ac6fca027b21a48a06f6e587070bac767697ba7b282bd277f355db7 not found: ID does not exist" containerID="c6ac4e832ac6fca027b21a48a06f6e587070bac767697ba7b282bd277f355db7"
Feb 02 21:45:55 crc kubenswrapper[4789]: I0202 21:45:55.018652 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6ac4e832ac6fca027b21a48a06f6e587070bac767697ba7b282bd277f355db7"} err="failed to get container status \"c6ac4e832ac6fca027b21a48a06f6e587070bac767697ba7b282bd277f355db7\": rpc error: code = NotFound desc = could not find container \"c6ac4e832ac6fca027b21a48a06f6e587070bac767697ba7b282bd277f355db7\": container with ID starting with c6ac4e832ac6fca027b21a48a06f6e587070bac767697ba7b282bd277f355db7 not found: ID does not exist"
Feb 02 21:45:55 crc kubenswrapper[4789]: I0202 21:45:55.018710 4789 scope.go:117] "RemoveContainer" containerID="4e706c90c8fe913f37ca5cbcf22e591ecac3c929cfaf9a6a7afea4058ce3760a"
Feb 02 21:45:55 crc kubenswrapper[4789]: E0202 21:45:55.019601 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e706c90c8fe913f37ca5cbcf22e591ecac3c929cfaf9a6a7afea4058ce3760a\": container with ID starting with 4e706c90c8fe913f37ca5cbcf22e591ecac3c929cfaf9a6a7afea4058ce3760a not found: ID does not exist" containerID="4e706c90c8fe913f37ca5cbcf22e591ecac3c929cfaf9a6a7afea4058ce3760a"
Feb 02 21:45:55 crc kubenswrapper[4789]: I0202 21:45:55.019638 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e706c90c8fe913f37ca5cbcf22e591ecac3c929cfaf9a6a7afea4058ce3760a"} err="failed to get container status \"4e706c90c8fe913f37ca5cbcf22e591ecac3c929cfaf9a6a7afea4058ce3760a\": rpc error: code = NotFound desc = could not find container \"4e706c90c8fe913f37ca5cbcf22e591ecac3c929cfaf9a6a7afea4058ce3760a\": container with ID starting with 4e706c90c8fe913f37ca5cbcf22e591ecac3c929cfaf9a6a7afea4058ce3760a not found: ID does not exist"
Feb 02 21:45:55 crc kubenswrapper[4789]: I0202 21:45:55.019665 4789 scope.go:117] "RemoveContainer" containerID="cd44e5becb3e6bca72fa733e3e7cab4c8d53dce2a1c3e0a9dcbd5c14b85fe061"
Feb 02 21:45:55 crc kubenswrapper[4789]: E0202 21:45:55.020148 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd44e5becb3e6bca72fa733e3e7cab4c8d53dce2a1c3e0a9dcbd5c14b85fe061\": container with ID starting with cd44e5becb3e6bca72fa733e3e7cab4c8d53dce2a1c3e0a9dcbd5c14b85fe061 not found: ID does not exist" containerID="cd44e5becb3e6bca72fa733e3e7cab4c8d53dce2a1c3e0a9dcbd5c14b85fe061"
Feb 02 21:45:55 crc kubenswrapper[4789]: I0202 21:45:55.020205 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd44e5becb3e6bca72fa733e3e7cab4c8d53dce2a1c3e0a9dcbd5c14b85fe061"} err="failed to get container status \"cd44e5becb3e6bca72fa733e3e7cab4c8d53dce2a1c3e0a9dcbd5c14b85fe061\": rpc error: code = NotFound desc = could not find container \"cd44e5becb3e6bca72fa733e3e7cab4c8d53dce2a1c3e0a9dcbd5c14b85fe061\": container with ID starting with cd44e5becb3e6bca72fa733e3e7cab4c8d53dce2a1c3e0a9dcbd5c14b85fe061 not found: ID does not exist"
Feb 02 21:45:56 crc kubenswrapper[4789]: I0202 21:45:56.436299 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="864ec1f2-4bb0-49f2-af83-95bf497a039c" path="/var/lib/kubelet/pods/864ec1f2-4bb0-49f2-af83-95bf497a039c/volumes"
Feb 02 21:46:44 crc kubenswrapper[4789]: I0202 21:46:44.554818 4789 scope.go:117] "RemoveContainer" containerID="4d415a493748a5fe09f699514591958e89913c004ba0a9b4078c606869ecb7de"
Feb 02 21:46:44 crc kubenswrapper[4789]: I0202 21:46:44.580321 4789 scope.go:117] "RemoveContainer" containerID="338cf0d5d7efb9dbbb48b11890d5c138bba78aa9270e7f6c74ba60abb2882959"
Feb 02 21:46:44 crc kubenswrapper[4789]: I0202 21:46:44.623700 4789 scope.go:117] "RemoveContainer" containerID="0eca0dc9b0d9a1bd83046a0f3570f8bb83aedec5f2d1b6428ad5f16255c8d458"
Feb 02 21:46:44 crc kubenswrapper[4789]: I0202 21:46:44.643718 4789 scope.go:117] "RemoveContainer" containerID="2cef93e7502918b2fdc30d919f677a18e537af13a5ab8002948b7da723faab1a"
Feb 02 21:46:44 crc kubenswrapper[4789]: I0202 21:46:44.667536 4789 scope.go:117] "RemoveContainer" containerID="2b6de69f9e66b84935d1feec95db1e8c1077e1b7f5201ec276390cf510290679"
Feb 02 21:46:44 crc kubenswrapper[4789]: I0202 21:46:44.685264 4789 scope.go:117] "RemoveContainer" containerID="f0215d16c08c102f787f13d2c2da456f3ba5286566c5ccacad8f32a59f3affce"
Feb 02 21:46:44 crc kubenswrapper[4789]: I0202 21:46:44.716733 4789 scope.go:117] "RemoveContainer" containerID="1fdc983ab9ac3038cc265e648fd3bb3bee5ec995759e5ffb2049a6a91c398225"
Feb 02 21:46:44 crc kubenswrapper[4789]: I0202 21:46:44.772316 4789 scope.go:117] "RemoveContainer" containerID="4d19e9acfcd069d33568dd5eb37918d764af6cd982550a47d0a1b5bc56158aa0"
Feb 02 21:46:44 crc kubenswrapper[4789]: I0202 21:46:44.798939 4789 scope.go:117] "RemoveContainer" containerID="9e08ef420908bb5471cb646abacab871ac3bb7adf1ac462a377f792d5691f1fb"
Feb 02 21:46:44 crc kubenswrapper[4789]: I0202 21:46:44.823183 4789 scope.go:117] "RemoveContainer" containerID="31cb564d5503d24a074fb5871aa516ae2843819e6096357ded09ae326cf07b00"
Feb 02 21:46:44 crc kubenswrapper[4789]: I0202 21:46:44.838455 4789 scope.go:117] "RemoveContainer" containerID="3da3de917d02b853103d72b4a2655d21606b0e94c091cada13d7340c8a07120d"
Feb 02 21:46:44 crc kubenswrapper[4789]: I0202 21:46:44.852450 4789 scope.go:117] "RemoveContainer" containerID="6c6b69de05330d0c2b4df4009c215f51143b7e6e5f0f8eb55ddce7e689d2f46b"
Feb 02 21:46:52 crc kubenswrapper[4789]: I0202 21:46:52.841794 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 21:46:52 crc kubenswrapper[4789]: I0202 21:46:52.842419 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 21:47:22 crc kubenswrapper[4789]: I0202 21:47:22.841657 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 21:47:22 crc kubenswrapper[4789]: I0202 21:47:22.842327 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 21:47:44 crc kubenswrapper[4789]: I0202 21:47:44.984768 4789 scope.go:117] "RemoveContainer" containerID="c0a8ebcdf24c0da82f27897eaa37e69996ea151354df5ea83d450176e048c49d"
Feb 02 21:47:45 crc kubenswrapper[4789]: I0202 21:47:45.038324 4789 scope.go:117] "RemoveContainer" containerID="769de9c8bf165e35289b8a68dea9fb07d75ccb9362227f3a94b1889434eb4ece"
Feb 02 21:47:45 crc kubenswrapper[4789]: I0202 21:47:45.076941 4789 scope.go:117] "RemoveContainer" containerID="e085656982d1a651607fc032070499319d528e8519c270e5e4395a2dd48ea137"
Feb 02 21:47:45 crc kubenswrapper[4789]: I0202 21:47:45.142973 4789 scope.go:117] "RemoveContainer" containerID="1f900f4e81cf4a1185756414a43858602853f385c3d95d439c33014b387e9ccf"
Feb 02 21:47:45 crc kubenswrapper[4789]: I0202 21:47:45.191936 4789 scope.go:117] "RemoveContainer" containerID="f71ea990fd211f90f73858215cc4f96f5720f05aa94fd89824ef0789717946ca"
Feb 02 21:47:52 crc kubenswrapper[4789]: I0202 21:47:52.842322 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 21:47:52 crc kubenswrapper[4789]: I0202 21:47:52.843036 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 21:47:52 crc kubenswrapper[4789]: I0202 21:47:52.843104 4789 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn"
Feb 02 21:47:52 crc kubenswrapper[4789]: I0202 21:47:52.844026 4789 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"de6aa73a1267c130655031ef3c4dd7e6bb34d4ce8d75bf21c205ad83e1223e18"} pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 21:47:52 crc kubenswrapper[4789]: I0202 21:47:52.844128 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" containerID="cri-o://de6aa73a1267c130655031ef3c4dd7e6bb34d4ce8d75bf21c205ad83e1223e18" gracePeriod=600
Feb 02 21:47:52 crc kubenswrapper[4789]: E0202 21:47:52.982348 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e"
Feb 02 21:47:53 crc kubenswrapper[4789]: I0202 21:47:53.091268 4789 generic.go:334] "Generic (PLEG): container finished" podID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerID="de6aa73a1267c130655031ef3c4dd7e6bb34d4ce8d75bf21c205ad83e1223e18" exitCode=0
Feb 02 21:47:53 crc kubenswrapper[4789]: I0202 21:47:53.091359 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" event={"ID":"bdf018b4-1451-4d37-be6e-05802b67c73e","Type":"ContainerDied","Data":"de6aa73a1267c130655031ef3c4dd7e6bb34d4ce8d75bf21c205ad83e1223e18"}
Feb 02 21:47:53 crc kubenswrapper[4789]: I0202 21:47:53.091437 4789 scope.go:117] "RemoveContainer" containerID="6f53ea5f1a80f886dfd6c88f09837b2b4d54c1a0219e9a215978594e6e78e40f"
Feb 02 21:47:53 crc kubenswrapper[4789]: I0202 21:47:53.091998 4789 scope.go:117] "RemoveContainer" containerID="de6aa73a1267c130655031ef3c4dd7e6bb34d4ce8d75bf21c205ad83e1223e18"
Feb 02 21:47:53 crc kubenswrapper[4789]: E0202 21:47:53.092269 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e"
Feb 02 21:48:06 crc kubenswrapper[4789]: I0202 21:48:06.420524 4789 scope.go:117] "RemoveContainer" containerID="de6aa73a1267c130655031ef3c4dd7e6bb34d4ce8d75bf21c205ad83e1223e18"
Feb 02 21:48:06 crc kubenswrapper[4789]: E0202 21:48:06.421492 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e"
Feb 02 21:48:19 crc kubenswrapper[4789]: I0202 21:48:19.419307 4789 scope.go:117] "RemoveContainer" containerID="de6aa73a1267c130655031ef3c4dd7e6bb34d4ce8d75bf21c205ad83e1223e18"
Feb 02 21:48:19 crc kubenswrapper[4789]: E0202 21:48:19.420219 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e"
Feb 02 21:48:33 crc kubenswrapper[4789]: I0202 21:48:33.419739 4789 scope.go:117] "RemoveContainer" containerID="de6aa73a1267c130655031ef3c4dd7e6bb34d4ce8d75bf21c205ad83e1223e18"
Feb 02 21:48:33 crc kubenswrapper[4789]: E0202 21:48:33.420666 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e"
Feb 02 21:48:44 crc kubenswrapper[4789]: I0202 21:48:44.555044 4789 scope.go:117] "RemoveContainer" containerID="de6aa73a1267c130655031ef3c4dd7e6bb34d4ce8d75bf21c205ad83e1223e18"
Feb 02 21:48:44 crc kubenswrapper[4789]: E0202 21:48:44.556292 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e"
Feb 02 21:48:45 crc kubenswrapper[4789]: I0202 21:48:45.304750 4789 scope.go:117] "RemoveContainer" containerID="4e428b3e90b59603738751c709dbc7de1ccef26fb2e50aac0b788d4a2d55579c"
Feb 02 21:48:45 crc kubenswrapper[4789]: I0202 21:48:45.368320 4789 scope.go:117] "RemoveContainer" containerID="0947f8cdd1f5dab6746e2ce88b87d9cc21b32de7ac54eec8ed4b2dc8b2ff1f61"
Feb 02 21:48:45 crc kubenswrapper[4789]: I0202 21:48:45.386288 4789 scope.go:117] "RemoveContainer" containerID="28a7ed128e7bef7f569955019dd73ac9d95249468906497c95bad0c6363ebdd8"
Feb 02 21:48:45 crc kubenswrapper[4789]: I0202 21:48:45.404060 4789 scope.go:117] "RemoveContainer" containerID="c4d593fd14424a40e7eb4b508c719970461ef690e1eb1894e38dd03571b8b07b"
Feb 02 21:48:45 crc kubenswrapper[4789]: I0202 21:48:45.426465 4789 scope.go:117] "RemoveContainer" containerID="4442ad2bcd72e1f7d739ef50d0304ab053ba1e52fd3d4c19d121698c07aa9558"
Feb 02 21:48:57 crc kubenswrapper[4789]: I0202 21:48:57.419947 4789 scope.go:117] "RemoveContainer" containerID="de6aa73a1267c130655031ef3c4dd7e6bb34d4ce8d75bf21c205ad83e1223e18"
Feb 02 21:48:57 crc kubenswrapper[4789]: E0202 21:48:57.422637 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e"
Feb 02 21:49:12 crc kubenswrapper[4789]: I0202 21:49:12.421990 4789 scope.go:117] "RemoveContainer" containerID="de6aa73a1267c130655031ef3c4dd7e6bb34d4ce8d75bf21c205ad83e1223e18"
Feb 02 21:49:12 crc kubenswrapper[4789]: E0202 21:49:12.422983 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e"
Feb 02 21:49:25 crc kubenswrapper[4789]: I0202 21:49:25.419465 4789 scope.go:117] "RemoveContainer" containerID="de6aa73a1267c130655031ef3c4dd7e6bb34d4ce8d75bf21c205ad83e1223e18"
Feb 02 21:49:25 crc kubenswrapper[4789]: E0202 21:49:25.421774 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e"
Feb 02 21:49:39 crc kubenswrapper[4789]: I0202 21:49:39.419491 4789 scope.go:117] "RemoveContainer" containerID="de6aa73a1267c130655031ef3c4dd7e6bb34d4ce8d75bf21c205ad83e1223e18"
Feb 02 21:49:39 crc kubenswrapper[4789]: E0202 21:49:39.420818 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e"
Feb 02 21:49:53 crc kubenswrapper[4789]: I0202 21:49:53.419037 4789 scope.go:117] "RemoveContainer" containerID="de6aa73a1267c130655031ef3c4dd7e6bb34d4ce8d75bf21c205ad83e1223e18"
Feb 02 21:49:53 crc kubenswrapper[4789]: E0202 21:49:53.419631 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e"
Feb 02 21:50:08 crc kubenswrapper[4789]: I0202 21:50:08.419946 4789 scope.go:117] "RemoveContainer" containerID="de6aa73a1267c130655031ef3c4dd7e6bb34d4ce8d75bf21c205ad83e1223e18"
Feb 02 21:50:08 crc kubenswrapper[4789]: E0202 21:50:08.420976 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e"
Feb 02 21:50:22 crc kubenswrapper[4789]: I0202 21:50:22.420641 4789 scope.go:117] "RemoveContainer" containerID="de6aa73a1267c130655031ef3c4dd7e6bb34d4ce8d75bf21c205ad83e1223e18"
Feb 02 21:50:22 crc kubenswrapper[4789]: E0202 21:50:22.421508 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e"
Feb 02 21:50:23 crc kubenswrapper[4789]: I0202 21:50:23.971684 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4xd8b"]
Feb 02 21:50:23 crc kubenswrapper[4789]: E0202 21:50:23.972517 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="864ec1f2-4bb0-49f2-af83-95bf497a039c" containerName="extract-content"
Feb 02 21:50:23 crc kubenswrapper[4789]: I0202 21:50:23.972541 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="864ec1f2-4bb0-49f2-af83-95bf497a039c" containerName="extract-content"
Feb 02 21:50:23 crc kubenswrapper[4789]: E0202 21:50:23.972563 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="864ec1f2-4bb0-49f2-af83-95bf497a039c" containerName="extract-utilities"
Feb 02 21:50:23 crc kubenswrapper[4789]: I0202 21:50:23.972576 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="864ec1f2-4bb0-49f2-af83-95bf497a039c" containerName="extract-utilities"
Feb 02 21:50:23 crc kubenswrapper[4789]: E0202 21:50:23.972634 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="864ec1f2-4bb0-49f2-af83-95bf497a039c" containerName="registry-server"
Feb 02 21:50:23 crc kubenswrapper[4789]: I0202 21:50:23.972647 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="864ec1f2-4bb0-49f2-af83-95bf497a039c" containerName="registry-server"
Feb 02 21:50:23 crc kubenswrapper[4789]: I0202 21:50:23.972936 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="864ec1f2-4bb0-49f2-af83-95bf497a039c" containerName="registry-server"
Feb 02 21:50:23 crc kubenswrapper[4789]: I0202 21:50:23.974676 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4xd8b"
Feb 02 21:50:23 crc kubenswrapper[4789]: I0202 21:50:23.986445 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4xd8b"]
Feb 02 21:50:24 crc kubenswrapper[4789]: I0202 21:50:24.176800 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d220eb8a-1386-4b45-aa32-467f9d8ce1e2-catalog-content\") pod \"community-operators-4xd8b\" (UID: \"d220eb8a-1386-4b45-aa32-467f9d8ce1e2\") " pod="openshift-marketplace/community-operators-4xd8b"
Feb 02 21:50:24 crc kubenswrapper[4789]: I0202 21:50:24.177648 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d220eb8a-1386-4b45-aa32-467f9d8ce1e2-utilities\") pod \"community-operators-4xd8b\" (UID: \"d220eb8a-1386-4b45-aa32-467f9d8ce1e2\") " pod="openshift-marketplace/community-operators-4xd8b"
Feb 02 21:50:24 crc kubenswrapper[4789]: I0202 21:50:24.177691 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgt6k\" (UniqueName: \"kubernetes.io/projected/d220eb8a-1386-4b45-aa32-467f9d8ce1e2-kube-api-access-qgt6k\") pod \"community-operators-4xd8b\" (UID: \"d220eb8a-1386-4b45-aa32-467f9d8ce1e2\") " pod="openshift-marketplace/community-operators-4xd8b"
Feb 02 21:50:24 crc kubenswrapper[4789]: I0202 21:50:24.279363 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d220eb8a-1386-4b45-aa32-467f9d8ce1e2-utilities\") pod \"community-operators-4xd8b\" (UID: \"d220eb8a-1386-4b45-aa32-467f9d8ce1e2\") " pod="openshift-marketplace/community-operators-4xd8b"
Feb 02 21:50:24 crc kubenswrapper[4789]: I0202 21:50:24.279430 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgt6k\" (UniqueName: \"kubernetes.io/projected/d220eb8a-1386-4b45-aa32-467f9d8ce1e2-kube-api-access-qgt6k\") pod \"community-operators-4xd8b\" (UID: \"d220eb8a-1386-4b45-aa32-467f9d8ce1e2\") " pod="openshift-marketplace/community-operators-4xd8b"
Feb 02 21:50:24 crc kubenswrapper[4789]: I0202 21:50:24.279456 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d220eb8a-1386-4b45-aa32-467f9d8ce1e2-catalog-content\") pod \"community-operators-4xd8b\" (UID: \"d220eb8a-1386-4b45-aa32-467f9d8ce1e2\") " pod="openshift-marketplace/community-operators-4xd8b"
Feb 02 21:50:24 crc kubenswrapper[4789]: I0202 21:50:24.280082 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d220eb8a-1386-4b45-aa32-467f9d8ce1e2-catalog-content\") pod \"community-operators-4xd8b\" (UID: \"d220eb8a-1386-4b45-aa32-467f9d8ce1e2\") " pod="openshift-marketplace/community-operators-4xd8b"
Feb 02 21:50:24 crc kubenswrapper[4789]: I0202 21:50:24.280099 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d220eb8a-1386-4b45-aa32-467f9d8ce1e2-utilities\") pod \"community-operators-4xd8b\" (UID: \"d220eb8a-1386-4b45-aa32-467f9d8ce1e2\") " pod="openshift-marketplace/community-operators-4xd8b"
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d220eb8a-1386-4b45-aa32-467f9d8ce1e2-utilities\") pod \"community-operators-4xd8b\" (UID: \"d220eb8a-1386-4b45-aa32-467f9d8ce1e2\") " pod="openshift-marketplace/community-operators-4xd8b" Feb 02 21:50:24 crc kubenswrapper[4789]: I0202 21:50:24.302022 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgt6k\" (UniqueName: \"kubernetes.io/projected/d220eb8a-1386-4b45-aa32-467f9d8ce1e2-kube-api-access-qgt6k\") pod \"community-operators-4xd8b\" (UID: \"d220eb8a-1386-4b45-aa32-467f9d8ce1e2\") " pod="openshift-marketplace/community-operators-4xd8b" Feb 02 21:50:24 crc kubenswrapper[4789]: I0202 21:50:24.309638 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4xd8b" Feb 02 21:50:24 crc kubenswrapper[4789]: I0202 21:50:24.581052 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4xd8b"] Feb 02 21:50:25 crc kubenswrapper[4789]: I0202 21:50:25.521231 4789 generic.go:334] "Generic (PLEG): container finished" podID="d220eb8a-1386-4b45-aa32-467f9d8ce1e2" containerID="47d147a77e85fceff00ad365f0e058dc4308792f23cd525557f6f1dc01564018" exitCode=0 Feb 02 21:50:25 crc kubenswrapper[4789]: I0202 21:50:25.521380 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xd8b" event={"ID":"d220eb8a-1386-4b45-aa32-467f9d8ce1e2","Type":"ContainerDied","Data":"47d147a77e85fceff00ad365f0e058dc4308792f23cd525557f6f1dc01564018"} Feb 02 21:50:25 crc kubenswrapper[4789]: I0202 21:50:25.521476 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xd8b" event={"ID":"d220eb8a-1386-4b45-aa32-467f9d8ce1e2","Type":"ContainerStarted","Data":"7b5176812cd46656f6ff486a37af5727ba51c9b872dc514c8a38697062f27ae6"} Feb 02 21:50:25 crc kubenswrapper[4789]: I0202 21:50:25.524243 4789 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 21:50:26 crc kubenswrapper[4789]: I0202 21:50:26.538894 4789 generic.go:334] "Generic (PLEG): container finished" podID="d220eb8a-1386-4b45-aa32-467f9d8ce1e2" containerID="17b3d64eb576cd9768c5d30a547f610b7bf0e942e7a59937710e381121fdbb42" exitCode=0 Feb 02 21:50:26 crc kubenswrapper[4789]: I0202 21:50:26.538996 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xd8b" event={"ID":"d220eb8a-1386-4b45-aa32-467f9d8ce1e2","Type":"ContainerDied","Data":"17b3d64eb576cd9768c5d30a547f610b7bf0e942e7a59937710e381121fdbb42"} Feb 02 21:50:27 crc kubenswrapper[4789]: I0202 21:50:27.552121 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xd8b" event={"ID":"d220eb8a-1386-4b45-aa32-467f9d8ce1e2","Type":"ContainerStarted","Data":"56889c15a615d4b94111a873f7c58afb902acf521b9c50b2218a8014f1029f87"} Feb 02 21:50:27 crc kubenswrapper[4789]: I0202 21:50:27.578907 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4xd8b" podStartSLOduration=3.144248201 podStartE2EDuration="4.578884228s" podCreationTimestamp="2026-02-02 21:50:23 +0000 UTC" firstStartedPulling="2026-02-02 21:50:25.524038565 +0000 UTC m=+1845.819063584" lastFinishedPulling="2026-02-02 21:50:26.958674592 +0000 UTC m=+1847.253699611" observedRunningTime="2026-02-02 
Feb 02 21:50:27 crc kubenswrapper[4789]: I0202 21:50:27.578907 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4xd8b" podStartSLOduration=3.144248201 podStartE2EDuration="4.578884228s" podCreationTimestamp="2026-02-02 21:50:23 +0000 UTC" firstStartedPulling="2026-02-02 21:50:25.524038565 +0000 UTC m=+1845.819063584" lastFinishedPulling="2026-02-02 21:50:26.958674592 +0000 UTC m=+1847.253699611" observedRunningTime="2026-02-02 21:50:27.570714767 +0000 UTC m=+1847.865739816" watchObservedRunningTime="2026-02-02 21:50:27.578884228 +0000 UTC m=+1847.873909277"
Feb 02 21:50:33 crc kubenswrapper[4789]: I0202 21:50:33.426730 4789 scope.go:117] "RemoveContainer" containerID="de6aa73a1267c130655031ef3c4dd7e6bb34d4ce8d75bf21c205ad83e1223e18"
Feb 02 21:50:33 crc kubenswrapper[4789]: E0202 21:50:33.429202 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e"
Feb 02 21:50:34 crc kubenswrapper[4789]: I0202 21:50:34.310753 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4xd8b"
Feb 02 21:50:34 crc kubenswrapper[4789]: I0202 21:50:34.311154 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4xd8b"
Feb 02 21:50:34 crc kubenswrapper[4789]: I0202 21:50:34.386087 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4xd8b"
Feb 02 21:50:34 crc kubenswrapper[4789]: I0202 21:50:34.665216 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4xd8b"
Feb 02 21:50:34 crc kubenswrapper[4789]: I0202 21:50:34.731865 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4xd8b"]
Feb 02 21:50:36 crc kubenswrapper[4789]: I0202 21:50:36.635486 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4xd8b" podUID="d220eb8a-1386-4b45-aa32-467f9d8ce1e2" containerName="registry-server" containerID="cri-o://56889c15a615d4b94111a873f7c58afb902acf521b9c50b2218a8014f1029f87" gracePeriod=2
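The pod_startup_latency_tracker entry above encodes two numbers: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same span minus the image-pull window (firstStartedPulling to lastFinishedPulling), which is why it is the smaller figure. The arithmetic checks out against the timestamps in the entry; a quick verification sketch (the subtraction rule is inferred from the logged values, not quoted from kubelet source):

package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-02-02 21:50:23 +0000 UTC")
	firstPull := mustParse("2026-02-02 21:50:25.524038565 +0000 UTC")
	lastPull := mustParse("2026-02-02 21:50:26.958674592 +0000 UTC")
	running := mustParse("2026-02-02 21:50:27.578884228 +0000 UTC")

	e2e := running.Sub(created)          // 4.578884228s = podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 3.144248201s = podStartSLOduration
	fmt.Println(e2e, slo)
}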
Need to start a new one" pod="openshift-marketplace/community-operators-4xd8b" Feb 02 21:50:37 crc kubenswrapper[4789]: I0202 21:50:37.278650 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d220eb8a-1386-4b45-aa32-467f9d8ce1e2-catalog-content\") pod \"d220eb8a-1386-4b45-aa32-467f9d8ce1e2\" (UID: \"d220eb8a-1386-4b45-aa32-467f9d8ce1e2\") " Feb 02 21:50:37 crc kubenswrapper[4789]: I0202 21:50:37.278783 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgt6k\" (UniqueName: \"kubernetes.io/projected/d220eb8a-1386-4b45-aa32-467f9d8ce1e2-kube-api-access-qgt6k\") pod \"d220eb8a-1386-4b45-aa32-467f9d8ce1e2\" (UID: \"d220eb8a-1386-4b45-aa32-467f9d8ce1e2\") " Feb 02 21:50:37 crc kubenswrapper[4789]: I0202 21:50:37.278864 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d220eb8a-1386-4b45-aa32-467f9d8ce1e2-utilities\") pod \"d220eb8a-1386-4b45-aa32-467f9d8ce1e2\" (UID: \"d220eb8a-1386-4b45-aa32-467f9d8ce1e2\") " Feb 02 21:50:37 crc kubenswrapper[4789]: I0202 21:50:37.279949 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d220eb8a-1386-4b45-aa32-467f9d8ce1e2-utilities" (OuterVolumeSpecName: "utilities") pod "d220eb8a-1386-4b45-aa32-467f9d8ce1e2" (UID: "d220eb8a-1386-4b45-aa32-467f9d8ce1e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:50:37 crc kubenswrapper[4789]: I0202 21:50:37.287193 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d220eb8a-1386-4b45-aa32-467f9d8ce1e2-kube-api-access-qgt6k" (OuterVolumeSpecName: "kube-api-access-qgt6k") pod "d220eb8a-1386-4b45-aa32-467f9d8ce1e2" (UID: "d220eb8a-1386-4b45-aa32-467f9d8ce1e2"). InnerVolumeSpecName "kube-api-access-qgt6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:50:37 crc kubenswrapper[4789]: I0202 21:50:37.370601 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d220eb8a-1386-4b45-aa32-467f9d8ce1e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d220eb8a-1386-4b45-aa32-467f9d8ce1e2" (UID: "d220eb8a-1386-4b45-aa32-467f9d8ce1e2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:50:37 crc kubenswrapper[4789]: I0202 21:50:37.380794 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d220eb8a-1386-4b45-aa32-467f9d8ce1e2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 21:50:37 crc kubenswrapper[4789]: I0202 21:50:37.380843 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgt6k\" (UniqueName: \"kubernetes.io/projected/d220eb8a-1386-4b45-aa32-467f9d8ce1e2-kube-api-access-qgt6k\") on node \"crc\" DevicePath \"\"" Feb 02 21:50:37 crc kubenswrapper[4789]: I0202 21:50:37.380856 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d220eb8a-1386-4b45-aa32-467f9d8ce1e2-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 21:50:37 crc kubenswrapper[4789]: I0202 21:50:37.653109 4789 generic.go:334] "Generic (PLEG): container finished" podID="d220eb8a-1386-4b45-aa32-467f9d8ce1e2" containerID="56889c15a615d4b94111a873f7c58afb902acf521b9c50b2218a8014f1029f87" exitCode=0 Feb 02 21:50:37 crc kubenswrapper[4789]: I0202 21:50:37.653161 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xd8b" event={"ID":"d220eb8a-1386-4b45-aa32-467f9d8ce1e2","Type":"ContainerDied","Data":"56889c15a615d4b94111a873f7c58afb902acf521b9c50b2218a8014f1029f87"} Feb 02 21:50:37 crc kubenswrapper[4789]: I0202 21:50:37.653191 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xd8b" event={"ID":"d220eb8a-1386-4b45-aa32-467f9d8ce1e2","Type":"ContainerDied","Data":"7b5176812cd46656f6ff486a37af5727ba51c9b872dc514c8a38697062f27ae6"} Feb 02 21:50:37 crc kubenswrapper[4789]: I0202 21:50:37.653212 4789 scope.go:117] "RemoveContainer" containerID="56889c15a615d4b94111a873f7c58afb902acf521b9c50b2218a8014f1029f87" Feb 02 21:50:37 crc kubenswrapper[4789]: I0202 21:50:37.653225 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4xd8b" Feb 02 21:50:37 crc kubenswrapper[4789]: I0202 21:50:37.676890 4789 scope.go:117] "RemoveContainer" containerID="17b3d64eb576cd9768c5d30a547f610b7bf0e942e7a59937710e381121fdbb42" Feb 02 21:50:37 crc kubenswrapper[4789]: I0202 21:50:37.702298 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4xd8b"] Feb 02 21:50:37 crc kubenswrapper[4789]: I0202 21:50:37.703568 4789 scope.go:117] "RemoveContainer" containerID="47d147a77e85fceff00ad365f0e058dc4308792f23cd525557f6f1dc01564018" Feb 02 21:50:37 crc kubenswrapper[4789]: I0202 21:50:37.706888 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4xd8b"] Feb 02 21:50:37 crc kubenswrapper[4789]: I0202 21:50:37.739339 4789 scope.go:117] "RemoveContainer" containerID="56889c15a615d4b94111a873f7c58afb902acf521b9c50b2218a8014f1029f87" Feb 02 21:50:37 crc kubenswrapper[4789]: E0202 21:50:37.739764 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56889c15a615d4b94111a873f7c58afb902acf521b9c50b2218a8014f1029f87\": container with ID starting with 56889c15a615d4b94111a873f7c58afb902acf521b9c50b2218a8014f1029f87 not found: ID does not exist" containerID="56889c15a615d4b94111a873f7c58afb902acf521b9c50b2218a8014f1029f87" Feb 02 21:50:37 crc kubenswrapper[4789]: I0202 21:50:37.739812 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56889c15a615d4b94111a873f7c58afb902acf521b9c50b2218a8014f1029f87"} err="failed to get container status \"56889c15a615d4b94111a873f7c58afb902acf521b9c50b2218a8014f1029f87\": rpc error: code = NotFound desc = could not find container \"56889c15a615d4b94111a873f7c58afb902acf521b9c50b2218a8014f1029f87\": container with ID starting with 56889c15a615d4b94111a873f7c58afb902acf521b9c50b2218a8014f1029f87 not found: ID does not exist" Feb 02 21:50:37 crc kubenswrapper[4789]: I0202 21:50:37.739860 4789 scope.go:117] "RemoveContainer" containerID="17b3d64eb576cd9768c5d30a547f610b7bf0e942e7a59937710e381121fdbb42" Feb 02 21:50:37 crc kubenswrapper[4789]: E0202 21:50:37.740406 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17b3d64eb576cd9768c5d30a547f610b7bf0e942e7a59937710e381121fdbb42\": container with ID starting with 17b3d64eb576cd9768c5d30a547f610b7bf0e942e7a59937710e381121fdbb42 not found: ID does not exist" containerID="17b3d64eb576cd9768c5d30a547f610b7bf0e942e7a59937710e381121fdbb42" Feb 02 21:50:37 crc kubenswrapper[4789]: I0202 21:50:37.740453 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17b3d64eb576cd9768c5d30a547f610b7bf0e942e7a59937710e381121fdbb42"} err="failed to get container status \"17b3d64eb576cd9768c5d30a547f610b7bf0e942e7a59937710e381121fdbb42\": rpc error: code = NotFound desc = could not find container \"17b3d64eb576cd9768c5d30a547f610b7bf0e942e7a59937710e381121fdbb42\": container with ID starting with 17b3d64eb576cd9768c5d30a547f610b7bf0e942e7a59937710e381121fdbb42 not found: ID does not exist" Feb 02 21:50:37 crc kubenswrapper[4789]: I0202 21:50:37.740483 4789 scope.go:117] "RemoveContainer" containerID="47d147a77e85fceff00ad365f0e058dc4308792f23cd525557f6f1dc01564018" Feb 02 21:50:37 crc kubenswrapper[4789]: E0202 21:50:37.740828 4789 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"47d147a77e85fceff00ad365f0e058dc4308792f23cd525557f6f1dc01564018\": container with ID starting with 47d147a77e85fceff00ad365f0e058dc4308792f23cd525557f6f1dc01564018 not found: ID does not exist" containerID="47d147a77e85fceff00ad365f0e058dc4308792f23cd525557f6f1dc01564018" Feb 02 21:50:37 crc kubenswrapper[4789]: I0202 21:50:37.740879 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47d147a77e85fceff00ad365f0e058dc4308792f23cd525557f6f1dc01564018"} err="failed to get container status \"47d147a77e85fceff00ad365f0e058dc4308792f23cd525557f6f1dc01564018\": rpc error: code = NotFound desc = could not find container \"47d147a77e85fceff00ad365f0e058dc4308792f23cd525557f6f1dc01564018\": container with ID starting with 47d147a77e85fceff00ad365f0e058dc4308792f23cd525557f6f1dc01564018 not found: ID does not exist" Feb 02 21:50:38 crc kubenswrapper[4789]: I0202 21:50:38.433044 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d220eb8a-1386-4b45-aa32-467f9d8ce1e2" path="/var/lib/kubelet/pods/d220eb8a-1386-4b45-aa32-467f9d8ce1e2/volumes" Feb 02 21:50:45 crc kubenswrapper[4789]: I0202 21:50:45.420056 4789 scope.go:117] "RemoveContainer" containerID="de6aa73a1267c130655031ef3c4dd7e6bb34d4ce8d75bf21c205ad83e1223e18" Feb 02 21:50:45 crc kubenswrapper[4789]: E0202 21:50:45.420923 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 21:51:00 crc kubenswrapper[4789]: I0202 21:51:00.445791 4789 scope.go:117] "RemoveContainer" containerID="de6aa73a1267c130655031ef3c4dd7e6bb34d4ce8d75bf21c205ad83e1223e18" Feb 02 21:51:00 crc kubenswrapper[4789]: E0202 21:51:00.451950 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 21:51:13 crc kubenswrapper[4789]: I0202 21:51:13.420531 4789 scope.go:117] "RemoveContainer" containerID="de6aa73a1267c130655031ef3c4dd7e6bb34d4ce8d75bf21c205ad83e1223e18" Feb 02 21:51:13 crc kubenswrapper[4789]: E0202 21:51:13.421672 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 21:51:26 crc kubenswrapper[4789]: I0202 21:51:26.420619 4789 scope.go:117] "RemoveContainer" containerID="de6aa73a1267c130655031ef3c4dd7e6bb34d4ce8d75bf21c205ad83e1223e18" Feb 02 21:51:26 crc kubenswrapper[4789]: E0202 21:51:26.421309 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 21:51:38 crc kubenswrapper[4789]: I0202 21:51:38.420331 4789 scope.go:117] "RemoveContainer" containerID="de6aa73a1267c130655031ef3c4dd7e6bb34d4ce8d75bf21c205ad83e1223e18" Feb 02 21:51:38 crc kubenswrapper[4789]: E0202 21:51:38.421277 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 21:51:51 crc kubenswrapper[4789]: I0202 21:51:51.419968 4789 scope.go:117] "RemoveContainer" containerID="de6aa73a1267c130655031ef3c4dd7e6bb34d4ce8d75bf21c205ad83e1223e18" Feb 02 21:51:51 crc kubenswrapper[4789]: E0202 21:51:51.420789 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 21:52:07 crc kubenswrapper[4789]: I0202 21:52:07.420321 4789 scope.go:117] "RemoveContainer" containerID="de6aa73a1267c130655031ef3c4dd7e6bb34d4ce8d75bf21c205ad83e1223e18" Feb 02 21:52:07 crc kubenswrapper[4789]: E0202 21:52:07.421649 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 21:52:18 crc kubenswrapper[4789]: I0202 21:52:18.419861 4789 scope.go:117] "RemoveContainer" containerID="de6aa73a1267c130655031ef3c4dd7e6bb34d4ce8d75bf21c205ad83e1223e18" Feb 02 21:52:18 crc kubenswrapper[4789]: E0202 21:52:18.420546 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 21:52:33 crc kubenswrapper[4789]: I0202 21:52:33.420114 4789 scope.go:117] "RemoveContainer" containerID="de6aa73a1267c130655031ef3c4dd7e6bb34d4ce8d75bf21c205ad83e1223e18" Feb 02 21:52:33 crc kubenswrapper[4789]: E0202 21:52:33.421270 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 21:52:47 crc kubenswrapper[4789]: I0202 21:52:47.419462 4789 scope.go:117] "RemoveContainer" containerID="de6aa73a1267c130655031ef3c4dd7e6bb34d4ce8d75bf21c205ad83e1223e18" Feb 02 21:52:47 crc kubenswrapper[4789]: E0202 21:52:47.420535 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 21:53:02 crc kubenswrapper[4789]: I0202 21:53:02.420653 4789 scope.go:117] "RemoveContainer" containerID="de6aa73a1267c130655031ef3c4dd7e6bb34d4ce8d75bf21c205ad83e1223e18" Feb 02 21:53:03 crc kubenswrapper[4789]: I0202 21:53:03.068690 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" event={"ID":"bdf018b4-1451-4d37-be6e-05802b67c73e","Type":"ContainerStarted","Data":"7b61f591fd4d0e3f42e5bba6786e3823208a066fad37c1d40c104df25eafefc3"} Feb 02 21:53:14 crc kubenswrapper[4789]: I0202 21:53:14.007137 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-h5wnt"] Feb 02 21:53:14 crc kubenswrapper[4789]: E0202 21:53:14.007938 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d220eb8a-1386-4b45-aa32-467f9d8ce1e2" containerName="registry-server" Feb 02 21:53:14 crc kubenswrapper[4789]: I0202 21:53:14.007953 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="d220eb8a-1386-4b45-aa32-467f9d8ce1e2" containerName="registry-server" Feb 02 21:53:14 crc kubenswrapper[4789]: E0202 21:53:14.007995 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d220eb8a-1386-4b45-aa32-467f9d8ce1e2" containerName="extract-content" Feb 02 21:53:14 crc kubenswrapper[4789]: I0202 21:53:14.008003 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="d220eb8a-1386-4b45-aa32-467f9d8ce1e2" containerName="extract-content" Feb 02 21:53:14 crc kubenswrapper[4789]: E0202 21:53:14.008014 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d220eb8a-1386-4b45-aa32-467f9d8ce1e2" containerName="extract-utilities" Feb 02 21:53:14 crc kubenswrapper[4789]: I0202 21:53:14.008023 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="d220eb8a-1386-4b45-aa32-467f9d8ce1e2" containerName="extract-utilities" Feb 02 21:53:14 crc kubenswrapper[4789]: I0202 21:53:14.008196 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="d220eb8a-1386-4b45-aa32-467f9d8ce1e2" containerName="registry-server" Feb 02 21:53:14 crc kubenswrapper[4789]: I0202 21:53:14.009272 4789 util.go:30] "No sandbox for pod can be found. 
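machine-config-daemon spends the stretch above in CrashLoopBackOff: every sync the kubelet considers restarting it, logs the back-off error, and skips, until the 21:53:02 attempt finally sticks and 21:53:03 reports ContainerStarted. The "back-off 5m0s" in the message is the ceiling: kubelet's restart back-off doubles per crash from a small base up to five minutes (10s base and 5m cap are the upstream defaults; treat the exact constants as an assumption here). A sketch of the progression:

package main

import (
	"fmt"
	"time"
)

// crashLoopDelay doubles the base delay per observed crash, capped at max,
// matching the "back-off 5m0s" ceiling reported in the log.
func crashLoopDelay(restarts int, base, max time.Duration) time.Duration {
	d := base
	for i := 0; i < restarts; i++ {
		d *= 2
		if d >= max {
			return max
		}
	}
	return d
}

func main() {
	for r := 0; r <= 6; r++ {
		fmt.Printf("restart %d -> wait %v\n", r, crashLoopDelay(r, 10*time.Second, 5*time.Minute))
	}
	// restart 0 -> 10s, 1 -> 20s, ..., 5 and beyond -> 5m0s (capped)
}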
Need to start a new one" pod="openshift-marketplace/redhat-operators-h5wnt" Feb 02 21:53:14 crc kubenswrapper[4789]: I0202 21:53:14.067155 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbf665f0-6d87-41db-bb44-d9fe0eaefeed-utilities\") pod \"redhat-operators-h5wnt\" (UID: \"bbf665f0-6d87-41db-bb44-d9fe0eaefeed\") " pod="openshift-marketplace/redhat-operators-h5wnt" Feb 02 21:53:14 crc kubenswrapper[4789]: I0202 21:53:14.067252 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbf665f0-6d87-41db-bb44-d9fe0eaefeed-catalog-content\") pod \"redhat-operators-h5wnt\" (UID: \"bbf665f0-6d87-41db-bb44-d9fe0eaefeed\") " pod="openshift-marketplace/redhat-operators-h5wnt" Feb 02 21:53:14 crc kubenswrapper[4789]: I0202 21:53:14.067637 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt6r9\" (UniqueName: \"kubernetes.io/projected/bbf665f0-6d87-41db-bb44-d9fe0eaefeed-kube-api-access-zt6r9\") pod \"redhat-operators-h5wnt\" (UID: \"bbf665f0-6d87-41db-bb44-d9fe0eaefeed\") " pod="openshift-marketplace/redhat-operators-h5wnt" Feb 02 21:53:14 crc kubenswrapper[4789]: I0202 21:53:14.067725 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h5wnt"] Feb 02 21:53:14 crc kubenswrapper[4789]: I0202 21:53:14.168791 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbf665f0-6d87-41db-bb44-d9fe0eaefeed-utilities\") pod \"redhat-operators-h5wnt\" (UID: \"bbf665f0-6d87-41db-bb44-d9fe0eaefeed\") " pod="openshift-marketplace/redhat-operators-h5wnt" Feb 02 21:53:14 crc kubenswrapper[4789]: I0202 21:53:14.168906 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbf665f0-6d87-41db-bb44-d9fe0eaefeed-catalog-content\") pod \"redhat-operators-h5wnt\" (UID: \"bbf665f0-6d87-41db-bb44-d9fe0eaefeed\") " pod="openshift-marketplace/redhat-operators-h5wnt" Feb 02 21:53:14 crc kubenswrapper[4789]: I0202 21:53:14.168954 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt6r9\" (UniqueName: \"kubernetes.io/projected/bbf665f0-6d87-41db-bb44-d9fe0eaefeed-kube-api-access-zt6r9\") pod \"redhat-operators-h5wnt\" (UID: \"bbf665f0-6d87-41db-bb44-d9fe0eaefeed\") " pod="openshift-marketplace/redhat-operators-h5wnt" Feb 02 21:53:14 crc kubenswrapper[4789]: I0202 21:53:14.169863 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbf665f0-6d87-41db-bb44-d9fe0eaefeed-utilities\") pod \"redhat-operators-h5wnt\" (UID: \"bbf665f0-6d87-41db-bb44-d9fe0eaefeed\") " pod="openshift-marketplace/redhat-operators-h5wnt" Feb 02 21:53:14 crc kubenswrapper[4789]: I0202 21:53:14.170145 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbf665f0-6d87-41db-bb44-d9fe0eaefeed-catalog-content\") pod \"redhat-operators-h5wnt\" (UID: \"bbf665f0-6d87-41db-bb44-d9fe0eaefeed\") " pod="openshift-marketplace/redhat-operators-h5wnt" Feb 02 21:53:14 crc kubenswrapper[4789]: I0202 21:53:14.199146 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zt6r9\" (UniqueName: \"kubernetes.io/projected/bbf665f0-6d87-41db-bb44-d9fe0eaefeed-kube-api-access-zt6r9\") pod \"redhat-operators-h5wnt\" (UID: \"bbf665f0-6d87-41db-bb44-d9fe0eaefeed\") " pod="openshift-marketplace/redhat-operators-h5wnt" Feb 02 21:53:14 crc kubenswrapper[4789]: I0202 21:53:14.350807 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h5wnt" Feb 02 21:53:14 crc kubenswrapper[4789]: I0202 21:53:14.776714 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h5wnt"] Feb 02 21:53:15 crc kubenswrapper[4789]: I0202 21:53:15.169020 4789 generic.go:334] "Generic (PLEG): container finished" podID="bbf665f0-6d87-41db-bb44-d9fe0eaefeed" containerID="1587cd3e8c4039466f1a7bd307df2000bba974bc36f580fc6a36ebf89ccf5ead" exitCode=0 Feb 02 21:53:15 crc kubenswrapper[4789]: I0202 21:53:15.169066 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h5wnt" event={"ID":"bbf665f0-6d87-41db-bb44-d9fe0eaefeed","Type":"ContainerDied","Data":"1587cd3e8c4039466f1a7bd307df2000bba974bc36f580fc6a36ebf89ccf5ead"} Feb 02 21:53:15 crc kubenswrapper[4789]: I0202 21:53:15.169109 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h5wnt" event={"ID":"bbf665f0-6d87-41db-bb44-d9fe0eaefeed","Type":"ContainerStarted","Data":"d48890d8510bc40f14b2c420ee278a1addc53e314c0fccc5ad4364e3b07c05a0"} Feb 02 21:53:16 crc kubenswrapper[4789]: I0202 21:53:16.181796 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h5wnt" event={"ID":"bbf665f0-6d87-41db-bb44-d9fe0eaefeed","Type":"ContainerStarted","Data":"89fccb4b0e908814b3d724d5dcb4f649261f5b8eec37d1832e1d7fb7d2fb77d0"} Feb 02 21:53:17 crc kubenswrapper[4789]: I0202 21:53:17.194535 4789 generic.go:334] "Generic (PLEG): container finished" podID="bbf665f0-6d87-41db-bb44-d9fe0eaefeed" containerID="89fccb4b0e908814b3d724d5dcb4f649261f5b8eec37d1832e1d7fb7d2fb77d0" exitCode=0 Feb 02 21:53:17 crc kubenswrapper[4789]: I0202 21:53:17.194621 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h5wnt" event={"ID":"bbf665f0-6d87-41db-bb44-d9fe0eaefeed","Type":"ContainerDied","Data":"89fccb4b0e908814b3d724d5dcb4f649261f5b8eec37d1832e1d7fb7d2fb77d0"} Feb 02 21:53:18 crc kubenswrapper[4789]: I0202 21:53:18.203786 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h5wnt" event={"ID":"bbf665f0-6d87-41db-bb44-d9fe0eaefeed","Type":"ContainerStarted","Data":"c0ce2799bf5333cfe56ba6bede6db76b43c20461f803d1764893e31c5574569d"} Feb 02 21:53:18 crc kubenswrapper[4789]: I0202 21:53:18.236883 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-h5wnt" podStartSLOduration=2.814587682 podStartE2EDuration="5.236860647s" podCreationTimestamp="2026-02-02 21:53:13 +0000 UTC" firstStartedPulling="2026-02-02 21:53:15.171279753 +0000 UTC m=+2015.466304772" lastFinishedPulling="2026-02-02 21:53:17.593552678 +0000 UTC m=+2017.888577737" observedRunningTime="2026-02-02 21:53:18.227282256 +0000 UTC m=+2018.522307315" watchObservedRunningTime="2026-02-02 21:53:18.236860647 +0000 UTC m=+2018.531885706" Feb 02 21:53:24 crc kubenswrapper[4789]: I0202 21:53:24.351303 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h5wnt" 
Feb 02 21:53:24 crc kubenswrapper[4789]: I0202 21:53:24.351951 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-h5wnt"
Feb 02 21:53:25 crc kubenswrapper[4789]: I0202 21:53:25.416366 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h5wnt" podUID="bbf665f0-6d87-41db-bb44-d9fe0eaefeed" containerName="registry-server" probeResult="failure" output=<
Feb 02 21:53:25 crc kubenswrapper[4789]: timeout: failed to connect service ":50051" within 1s
Feb 02 21:53:25 crc kubenswrapper[4789]: >
Feb 02 21:53:34 crc kubenswrapper[4789]: I0202 21:53:34.436697 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-h5wnt"
Feb 02 21:53:34 crc kubenswrapper[4789]: I0202 21:53:34.500419 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h5wnt"
Feb 02 21:53:34 crc kubenswrapper[4789]: I0202 21:53:34.683044 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h5wnt"]
Feb 02 21:53:36 crc kubenswrapper[4789]: I0202 21:53:36.382393 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-h5wnt" podUID="bbf665f0-6d87-41db-bb44-d9fe0eaefeed" containerName="registry-server" containerID="cri-o://c0ce2799bf5333cfe56ba6bede6db76b43c20461f803d1764893e31c5574569d" gracePeriod=2
Feb 02 21:53:36 crc kubenswrapper[4789]: I0202 21:53:36.928548 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h5wnt"
Feb 02 21:53:36 crc kubenswrapper[4789]: I0202 21:53:36.962396 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt6r9\" (UniqueName: \"kubernetes.io/projected/bbf665f0-6d87-41db-bb44-d9fe0eaefeed-kube-api-access-zt6r9\") pod \"bbf665f0-6d87-41db-bb44-d9fe0eaefeed\" (UID: \"bbf665f0-6d87-41db-bb44-d9fe0eaefeed\") "
Feb 02 21:53:36 crc kubenswrapper[4789]: I0202 21:53:36.962521 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbf665f0-6d87-41db-bb44-d9fe0eaefeed-catalog-content\") pod \"bbf665f0-6d87-41db-bb44-d9fe0eaefeed\" (UID: \"bbf665f0-6d87-41db-bb44-d9fe0eaefeed\") "
Feb 02 21:53:36 crc kubenswrapper[4789]: I0202 21:53:36.962614 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbf665f0-6d87-41db-bb44-d9fe0eaefeed-utilities\") pod \"bbf665f0-6d87-41db-bb44-d9fe0eaefeed\" (UID: \"bbf665f0-6d87-41db-bb44-d9fe0eaefeed\") "
Feb 02 21:53:36 crc kubenswrapper[4789]: I0202 21:53:36.965167 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbf665f0-6d87-41db-bb44-d9fe0eaefeed-utilities" (OuterVolumeSpecName: "utilities") pod "bbf665f0-6d87-41db-bb44-d9fe0eaefeed" (UID: "bbf665f0-6d87-41db-bb44-d9fe0eaefeed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 21:53:36 crc kubenswrapper[4789]: I0202 21:53:36.974362 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbf665f0-6d87-41db-bb44-d9fe0eaefeed-kube-api-access-zt6r9" (OuterVolumeSpecName: "kube-api-access-zt6r9") pod "bbf665f0-6d87-41db-bb44-d9fe0eaefeed" (UID: "bbf665f0-6d87-41db-bb44-d9fe0eaefeed"). InnerVolumeSpecName "kube-api-access-zt6r9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 21:53:37 crc kubenswrapper[4789]: I0202 21:53:37.065471 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt6r9\" (UniqueName: \"kubernetes.io/projected/bbf665f0-6d87-41db-bb44-d9fe0eaefeed-kube-api-access-zt6r9\") on node \"crc\" DevicePath \"\""
Feb 02 21:53:37 crc kubenswrapper[4789]: I0202 21:53:37.065510 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbf665f0-6d87-41db-bb44-d9fe0eaefeed-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 21:53:37 crc kubenswrapper[4789]: I0202 21:53:37.129294 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbf665f0-6d87-41db-bb44-d9fe0eaefeed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bbf665f0-6d87-41db-bb44-d9fe0eaefeed" (UID: "bbf665f0-6d87-41db-bb44-d9fe0eaefeed"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 21:53:37 crc kubenswrapper[4789]: I0202 21:53:37.166464 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbf665f0-6d87-41db-bb44-d9fe0eaefeed-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 21:53:37 crc kubenswrapper[4789]: I0202 21:53:37.396690 4789 generic.go:334] "Generic (PLEG): container finished" podID="bbf665f0-6d87-41db-bb44-d9fe0eaefeed" containerID="c0ce2799bf5333cfe56ba6bede6db76b43c20461f803d1764893e31c5574569d" exitCode=0
Feb 02 21:53:37 crc kubenswrapper[4789]: I0202 21:53:37.396761 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h5wnt" event={"ID":"bbf665f0-6d87-41db-bb44-d9fe0eaefeed","Type":"ContainerDied","Data":"c0ce2799bf5333cfe56ba6bede6db76b43c20461f803d1764893e31c5574569d"}
Feb 02 21:53:37 crc kubenswrapper[4789]: I0202 21:53:37.396797 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h5wnt"
Feb 02 21:53:37 crc kubenswrapper[4789]: I0202 21:53:37.396836 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h5wnt" event={"ID":"bbf665f0-6d87-41db-bb44-d9fe0eaefeed","Type":"ContainerDied","Data":"d48890d8510bc40f14b2c420ee278a1addc53e314c0fccc5ad4364e3b07c05a0"}
Feb 02 21:53:37 crc kubenswrapper[4789]: I0202 21:53:37.396876 4789 scope.go:117] "RemoveContainer" containerID="c0ce2799bf5333cfe56ba6bede6db76b43c20461f803d1764893e31c5574569d"
Feb 02 21:53:37 crc kubenswrapper[4789]: I0202 21:53:37.430889 4789 scope.go:117] "RemoveContainer" containerID="89fccb4b0e908814b3d724d5dcb4f649261f5b8eec37d1832e1d7fb7d2fb77d0"
Feb 02 21:53:37 crc kubenswrapper[4789]: I0202 21:53:37.461149 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h5wnt"]
Feb 02 21:53:37 crc kubenswrapper[4789]: I0202 21:53:37.468027 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-h5wnt"]
Feb 02 21:53:37 crc kubenswrapper[4789]: I0202 21:53:37.471231 4789 scope.go:117] "RemoveContainer" containerID="1587cd3e8c4039466f1a7bd307df2000bba974bc36f580fc6a36ebf89ccf5ead"
Feb 02 21:53:37 crc kubenswrapper[4789]: I0202 21:53:37.507833 4789 scope.go:117] "RemoveContainer" containerID="c0ce2799bf5333cfe56ba6bede6db76b43c20461f803d1764893e31c5574569d"
Feb 02 21:53:37 crc kubenswrapper[4789]: E0202 21:53:37.508343 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0ce2799bf5333cfe56ba6bede6db76b43c20461f803d1764893e31c5574569d\": container with ID starting with c0ce2799bf5333cfe56ba6bede6db76b43c20461f803d1764893e31c5574569d not found: ID does not exist" containerID="c0ce2799bf5333cfe56ba6bede6db76b43c20461f803d1764893e31c5574569d"
Feb 02 21:53:37 crc kubenswrapper[4789]: I0202 21:53:37.508408 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0ce2799bf5333cfe56ba6bede6db76b43c20461f803d1764893e31c5574569d"} err="failed to get container status \"c0ce2799bf5333cfe56ba6bede6db76b43c20461f803d1764893e31c5574569d\": rpc error: code = NotFound desc = could not find container \"c0ce2799bf5333cfe56ba6bede6db76b43c20461f803d1764893e31c5574569d\": container with ID starting with c0ce2799bf5333cfe56ba6bede6db76b43c20461f803d1764893e31c5574569d not found: ID does not exist"
Feb 02 21:53:37 crc kubenswrapper[4789]: I0202 21:53:37.508450 4789 scope.go:117] "RemoveContainer" containerID="89fccb4b0e908814b3d724d5dcb4f649261f5b8eec37d1832e1d7fb7d2fb77d0"
Feb 02 21:53:37 crc kubenswrapper[4789]: E0202 21:53:37.510165 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89fccb4b0e908814b3d724d5dcb4f649261f5b8eec37d1832e1d7fb7d2fb77d0\": container with ID starting with 89fccb4b0e908814b3d724d5dcb4f649261f5b8eec37d1832e1d7fb7d2fb77d0 not found: ID does not exist" containerID="89fccb4b0e908814b3d724d5dcb4f649261f5b8eec37d1832e1d7fb7d2fb77d0"
Feb 02 21:53:37 crc kubenswrapper[4789]: I0202 21:53:37.510205 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89fccb4b0e908814b3d724d5dcb4f649261f5b8eec37d1832e1d7fb7d2fb77d0"} err="failed to get container status \"89fccb4b0e908814b3d724d5dcb4f649261f5b8eec37d1832e1d7fb7d2fb77d0\": rpc error: code = NotFound desc = could not find container \"89fccb4b0e908814b3d724d5dcb4f649261f5b8eec37d1832e1d7fb7d2fb77d0\": container with ID starting with 89fccb4b0e908814b3d724d5dcb4f649261f5b8eec37d1832e1d7fb7d2fb77d0 not found: ID does not exist"
Feb 02 21:53:37 crc kubenswrapper[4789]: I0202 21:53:37.510231 4789 scope.go:117] "RemoveContainer" containerID="1587cd3e8c4039466f1a7bd307df2000bba974bc36f580fc6a36ebf89ccf5ead"
Feb 02 21:53:37 crc kubenswrapper[4789]: E0202 21:53:37.510720 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1587cd3e8c4039466f1a7bd307df2000bba974bc36f580fc6a36ebf89ccf5ead\": container with ID starting with 1587cd3e8c4039466f1a7bd307df2000bba974bc36f580fc6a36ebf89ccf5ead not found: ID does not exist" containerID="1587cd3e8c4039466f1a7bd307df2000bba974bc36f580fc6a36ebf89ccf5ead"
Feb 02 21:53:37 crc kubenswrapper[4789]: I0202 21:53:37.510789 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1587cd3e8c4039466f1a7bd307df2000bba974bc36f580fc6a36ebf89ccf5ead"} err="failed to get container status \"1587cd3e8c4039466f1a7bd307df2000bba974bc36f580fc6a36ebf89ccf5ead\": rpc error: code = NotFound desc = could not find container \"1587cd3e8c4039466f1a7bd307df2000bba974bc36f580fc6a36ebf89ccf5ead\": container with ID starting with 1587cd3e8c4039466f1a7bd307df2000bba974bc36f580fc6a36ebf89ccf5ead not found: ID does not exist"
Feb 02 21:53:38 crc kubenswrapper[4789]: I0202 21:53:38.436650 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbf665f0-6d87-41db-bb44-d9fe0eaefeed" path="/var/lib/kubelet/pods/bbf665f0-6d87-41db-bb44-d9fe0eaefeed/volumes"
Feb 02 21:55:07 crc kubenswrapper[4789]: I0202 21:55:07.529002 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jpv5f"]
Feb 02 21:55:07 crc kubenswrapper[4789]: E0202 21:55:07.530092 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbf665f0-6d87-41db-bb44-d9fe0eaefeed" containerName="extract-utilities"
Feb 02 21:55:07 crc kubenswrapper[4789]: I0202 21:55:07.530129 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbf665f0-6d87-41db-bb44-d9fe0eaefeed" containerName="extract-utilities"
Feb 02 21:55:07 crc kubenswrapper[4789]: E0202 21:55:07.530144 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbf665f0-6d87-41db-bb44-d9fe0eaefeed" containerName="registry-server"
Feb 02 21:55:07 crc kubenswrapper[4789]: I0202 21:55:07.530151 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbf665f0-6d87-41db-bb44-d9fe0eaefeed" containerName="registry-server"
Feb 02 21:55:07 crc kubenswrapper[4789]: E0202 21:55:07.530165 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbf665f0-6d87-41db-bb44-d9fe0eaefeed" containerName="extract-content"
Feb 02 21:55:07 crc kubenswrapper[4789]: I0202 21:55:07.530171 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbf665f0-6d87-41db-bb44-d9fe0eaefeed" containerName="extract-content"
Feb 02 21:55:07 crc kubenswrapper[4789]: I0202 21:55:07.530335 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbf665f0-6d87-41db-bb44-d9fe0eaefeed" containerName="registry-server"
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jpv5f" Feb 02 21:55:07 crc kubenswrapper[4789]: I0202 21:55:07.564293 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jpv5f"] Feb 02 21:55:07 crc kubenswrapper[4789]: I0202 21:55:07.624337 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtftj\" (UniqueName: \"kubernetes.io/projected/dab3f503-6303-4f47-8389-b0dd0d09d60c-kube-api-access-dtftj\") pod \"redhat-marketplace-jpv5f\" (UID: \"dab3f503-6303-4f47-8389-b0dd0d09d60c\") " pod="openshift-marketplace/redhat-marketplace-jpv5f" Feb 02 21:55:07 crc kubenswrapper[4789]: I0202 21:55:07.624395 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dab3f503-6303-4f47-8389-b0dd0d09d60c-catalog-content\") pod \"redhat-marketplace-jpv5f\" (UID: \"dab3f503-6303-4f47-8389-b0dd0d09d60c\") " pod="openshift-marketplace/redhat-marketplace-jpv5f" Feb 02 21:55:07 crc kubenswrapper[4789]: I0202 21:55:07.624415 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dab3f503-6303-4f47-8389-b0dd0d09d60c-utilities\") pod \"redhat-marketplace-jpv5f\" (UID: \"dab3f503-6303-4f47-8389-b0dd0d09d60c\") " pod="openshift-marketplace/redhat-marketplace-jpv5f" Feb 02 21:55:07 crc kubenswrapper[4789]: I0202 21:55:07.725917 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtftj\" (UniqueName: \"kubernetes.io/projected/dab3f503-6303-4f47-8389-b0dd0d09d60c-kube-api-access-dtftj\") pod \"redhat-marketplace-jpv5f\" (UID: \"dab3f503-6303-4f47-8389-b0dd0d09d60c\") " pod="openshift-marketplace/redhat-marketplace-jpv5f" Feb 02 21:55:07 crc kubenswrapper[4789]: I0202 21:55:07.725983 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dab3f503-6303-4f47-8389-b0dd0d09d60c-catalog-content\") pod \"redhat-marketplace-jpv5f\" (UID: \"dab3f503-6303-4f47-8389-b0dd0d09d60c\") " pod="openshift-marketplace/redhat-marketplace-jpv5f" Feb 02 21:55:07 crc kubenswrapper[4789]: I0202 21:55:07.725999 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dab3f503-6303-4f47-8389-b0dd0d09d60c-utilities\") pod \"redhat-marketplace-jpv5f\" (UID: \"dab3f503-6303-4f47-8389-b0dd0d09d60c\") " pod="openshift-marketplace/redhat-marketplace-jpv5f" Feb 02 21:55:07 crc kubenswrapper[4789]: I0202 21:55:07.726525 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dab3f503-6303-4f47-8389-b0dd0d09d60c-utilities\") pod \"redhat-marketplace-jpv5f\" (UID: \"dab3f503-6303-4f47-8389-b0dd0d09d60c\") " pod="openshift-marketplace/redhat-marketplace-jpv5f" Feb 02 21:55:07 crc kubenswrapper[4789]: I0202 21:55:07.727019 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dab3f503-6303-4f47-8389-b0dd0d09d60c-catalog-content\") pod \"redhat-marketplace-jpv5f\" (UID: \"dab3f503-6303-4f47-8389-b0dd0d09d60c\") " pod="openshift-marketplace/redhat-marketplace-jpv5f" Feb 02 21:55:07 crc kubenswrapper[4789]: I0202 21:55:07.748324 4789 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-dtftj\" (UniqueName: \"kubernetes.io/projected/dab3f503-6303-4f47-8389-b0dd0d09d60c-kube-api-access-dtftj\") pod \"redhat-marketplace-jpv5f\" (UID: \"dab3f503-6303-4f47-8389-b0dd0d09d60c\") " pod="openshift-marketplace/redhat-marketplace-jpv5f" Feb 02 21:55:07 crc kubenswrapper[4789]: I0202 21:55:07.856970 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jpv5f" Feb 02 21:55:08 crc kubenswrapper[4789]: I0202 21:55:08.345741 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jpv5f"] Feb 02 21:55:09 crc kubenswrapper[4789]: I0202 21:55:09.223378 4789 generic.go:334] "Generic (PLEG): container finished" podID="dab3f503-6303-4f47-8389-b0dd0d09d60c" containerID="4548c20ff86758cc4254f0214187b841e68607a455124d602bd70fa37f4de60f" exitCode=0 Feb 02 21:55:09 crc kubenswrapper[4789]: I0202 21:55:09.223434 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpv5f" event={"ID":"dab3f503-6303-4f47-8389-b0dd0d09d60c","Type":"ContainerDied","Data":"4548c20ff86758cc4254f0214187b841e68607a455124d602bd70fa37f4de60f"} Feb 02 21:55:09 crc kubenswrapper[4789]: I0202 21:55:09.223501 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpv5f" event={"ID":"dab3f503-6303-4f47-8389-b0dd0d09d60c","Type":"ContainerStarted","Data":"bfffc52bcfbed8a44a4049e61a440b3257e5b075b3208204b3e9022d05891f67"} Feb 02 21:55:10 crc kubenswrapper[4789]: I0202 21:55:10.236918 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpv5f" event={"ID":"dab3f503-6303-4f47-8389-b0dd0d09d60c","Type":"ContainerStarted","Data":"2a719a4a06741628fd300de3ca934c5785eadc9acbc6c0ab643f6374eeeeb0fa"} Feb 02 21:55:11 crc kubenswrapper[4789]: I0202 21:55:11.245955 4789 generic.go:334] "Generic (PLEG): container finished" podID="dab3f503-6303-4f47-8389-b0dd0d09d60c" containerID="2a719a4a06741628fd300de3ca934c5785eadc9acbc6c0ab643f6374eeeeb0fa" exitCode=0 Feb 02 21:55:11 crc kubenswrapper[4789]: I0202 21:55:11.246002 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpv5f" event={"ID":"dab3f503-6303-4f47-8389-b0dd0d09d60c","Type":"ContainerDied","Data":"2a719a4a06741628fd300de3ca934c5785eadc9acbc6c0ab643f6374eeeeb0fa"} Feb 02 21:55:12 crc kubenswrapper[4789]: I0202 21:55:12.287089 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpv5f" event={"ID":"dab3f503-6303-4f47-8389-b0dd0d09d60c","Type":"ContainerStarted","Data":"94a91684a7ca306fa3ddb31496ccc1626fa3e2fbb7e0d1267f4488c7269d3c5a"} Feb 02 21:55:12 crc kubenswrapper[4789]: I0202 21:55:12.326101 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jpv5f" podStartSLOduration=2.8479035489999998 podStartE2EDuration="5.32607555s" podCreationTimestamp="2026-02-02 21:55:07 +0000 UTC" firstStartedPulling="2026-02-02 21:55:09.224899323 +0000 UTC m=+2129.519924382" lastFinishedPulling="2026-02-02 21:55:11.703071344 +0000 UTC m=+2131.998096383" observedRunningTime="2026-02-02 21:55:12.318368683 +0000 UTC m=+2132.613393732" watchObservedRunningTime="2026-02-02 21:55:12.32607555 +0000 UTC m=+2132.621100609" Feb 02 21:55:17 crc kubenswrapper[4789]: I0202 21:55:17.857565 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-jpv5f" Feb 02 21:55:17 crc kubenswrapper[4789]: I0202 21:55:17.858081 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jpv5f" Feb 02 21:55:17 crc kubenswrapper[4789]: I0202 21:55:17.931880 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jpv5f" Feb 02 21:55:18 crc kubenswrapper[4789]: I0202 21:55:18.417310 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jpv5f" Feb 02 21:55:18 crc kubenswrapper[4789]: I0202 21:55:18.491846 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jpv5f"] Feb 02 21:55:20 crc kubenswrapper[4789]: I0202 21:55:20.362360 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jpv5f" podUID="dab3f503-6303-4f47-8389-b0dd0d09d60c" containerName="registry-server" containerID="cri-o://94a91684a7ca306fa3ddb31496ccc1626fa3e2fbb7e0d1267f4488c7269d3c5a" gracePeriod=2 Feb 02 21:55:20 crc kubenswrapper[4789]: I0202 21:55:20.855464 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jpv5f" Feb 02 21:55:20 crc kubenswrapper[4789]: I0202 21:55:20.935023 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtftj\" (UniqueName: \"kubernetes.io/projected/dab3f503-6303-4f47-8389-b0dd0d09d60c-kube-api-access-dtftj\") pod \"dab3f503-6303-4f47-8389-b0dd0d09d60c\" (UID: \"dab3f503-6303-4f47-8389-b0dd0d09d60c\") " Feb 02 21:55:20 crc kubenswrapper[4789]: I0202 21:55:20.935120 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dab3f503-6303-4f47-8389-b0dd0d09d60c-utilities\") pod \"dab3f503-6303-4f47-8389-b0dd0d09d60c\" (UID: \"dab3f503-6303-4f47-8389-b0dd0d09d60c\") " Feb 02 21:55:20 crc kubenswrapper[4789]: I0202 21:55:20.935216 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dab3f503-6303-4f47-8389-b0dd0d09d60c-catalog-content\") pod \"dab3f503-6303-4f47-8389-b0dd0d09d60c\" (UID: \"dab3f503-6303-4f47-8389-b0dd0d09d60c\") " Feb 02 21:55:20 crc kubenswrapper[4789]: I0202 21:55:20.935997 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dab3f503-6303-4f47-8389-b0dd0d09d60c-utilities" (OuterVolumeSpecName: "utilities") pod "dab3f503-6303-4f47-8389-b0dd0d09d60c" (UID: "dab3f503-6303-4f47-8389-b0dd0d09d60c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:55:20 crc kubenswrapper[4789]: I0202 21:55:20.940303 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dab3f503-6303-4f47-8389-b0dd0d09d60c-kube-api-access-dtftj" (OuterVolumeSpecName: "kube-api-access-dtftj") pod "dab3f503-6303-4f47-8389-b0dd0d09d60c" (UID: "dab3f503-6303-4f47-8389-b0dd0d09d60c"). InnerVolumeSpecName "kube-api-access-dtftj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 21:55:20 crc kubenswrapper[4789]: I0202 21:55:20.961871 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dab3f503-6303-4f47-8389-b0dd0d09d60c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dab3f503-6303-4f47-8389-b0dd0d09d60c" (UID: "dab3f503-6303-4f47-8389-b0dd0d09d60c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 21:55:21 crc kubenswrapper[4789]: I0202 21:55:21.037300 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtftj\" (UniqueName: \"kubernetes.io/projected/dab3f503-6303-4f47-8389-b0dd0d09d60c-kube-api-access-dtftj\") on node \"crc\" DevicePath \"\"" Feb 02 21:55:21 crc kubenswrapper[4789]: I0202 21:55:21.037339 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dab3f503-6303-4f47-8389-b0dd0d09d60c-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 21:55:21 crc kubenswrapper[4789]: I0202 21:55:21.037353 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dab3f503-6303-4f47-8389-b0dd0d09d60c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 21:55:21 crc kubenswrapper[4789]: I0202 21:55:21.372667 4789 generic.go:334] "Generic (PLEG): container finished" podID="dab3f503-6303-4f47-8389-b0dd0d09d60c" containerID="94a91684a7ca306fa3ddb31496ccc1626fa3e2fbb7e0d1267f4488c7269d3c5a" exitCode=0 Feb 02 21:55:21 crc kubenswrapper[4789]: I0202 21:55:21.372730 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpv5f" event={"ID":"dab3f503-6303-4f47-8389-b0dd0d09d60c","Type":"ContainerDied","Data":"94a91684a7ca306fa3ddb31496ccc1626fa3e2fbb7e0d1267f4488c7269d3c5a"} Feb 02 21:55:21 crc kubenswrapper[4789]: I0202 21:55:21.372780 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpv5f" event={"ID":"dab3f503-6303-4f47-8389-b0dd0d09d60c","Type":"ContainerDied","Data":"bfffc52bcfbed8a44a4049e61a440b3257e5b075b3208204b3e9022d05891f67"} Feb 02 21:55:21 crc kubenswrapper[4789]: I0202 21:55:21.372813 4789 scope.go:117] "RemoveContainer" containerID="94a91684a7ca306fa3ddb31496ccc1626fa3e2fbb7e0d1267f4488c7269d3c5a" Feb 02 21:55:21 crc kubenswrapper[4789]: I0202 21:55:21.373909 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jpv5f" Feb 02 21:55:21 crc kubenswrapper[4789]: I0202 21:55:21.403053 4789 scope.go:117] "RemoveContainer" containerID="2a719a4a06741628fd300de3ca934c5785eadc9acbc6c0ab643f6374eeeeb0fa" Feb 02 21:55:21 crc kubenswrapper[4789]: I0202 21:55:21.419087 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jpv5f"] Feb 02 21:55:21 crc kubenswrapper[4789]: I0202 21:55:21.428313 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jpv5f"] Feb 02 21:55:21 crc kubenswrapper[4789]: I0202 21:55:21.438560 4789 scope.go:117] "RemoveContainer" containerID="4548c20ff86758cc4254f0214187b841e68607a455124d602bd70fa37f4de60f" Feb 02 21:55:21 crc kubenswrapper[4789]: I0202 21:55:21.474111 4789 scope.go:117] "RemoveContainer" containerID="94a91684a7ca306fa3ddb31496ccc1626fa3e2fbb7e0d1267f4488c7269d3c5a" Feb 02 21:55:21 crc kubenswrapper[4789]: E0202 21:55:21.474550 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94a91684a7ca306fa3ddb31496ccc1626fa3e2fbb7e0d1267f4488c7269d3c5a\": container with ID starting with 94a91684a7ca306fa3ddb31496ccc1626fa3e2fbb7e0d1267f4488c7269d3c5a not found: ID does not exist" containerID="94a91684a7ca306fa3ddb31496ccc1626fa3e2fbb7e0d1267f4488c7269d3c5a" Feb 02 21:55:21 crc kubenswrapper[4789]: I0202 21:55:21.474621 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94a91684a7ca306fa3ddb31496ccc1626fa3e2fbb7e0d1267f4488c7269d3c5a"} err="failed to get container status \"94a91684a7ca306fa3ddb31496ccc1626fa3e2fbb7e0d1267f4488c7269d3c5a\": rpc error: code = NotFound desc = could not find container \"94a91684a7ca306fa3ddb31496ccc1626fa3e2fbb7e0d1267f4488c7269d3c5a\": container with ID starting with 94a91684a7ca306fa3ddb31496ccc1626fa3e2fbb7e0d1267f4488c7269d3c5a not found: ID does not exist" Feb 02 21:55:21 crc kubenswrapper[4789]: I0202 21:55:21.474655 4789 scope.go:117] "RemoveContainer" containerID="2a719a4a06741628fd300de3ca934c5785eadc9acbc6c0ab643f6374eeeeb0fa" Feb 02 21:55:21 crc kubenswrapper[4789]: E0202 21:55:21.474976 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a719a4a06741628fd300de3ca934c5785eadc9acbc6c0ab643f6374eeeeb0fa\": container with ID starting with 2a719a4a06741628fd300de3ca934c5785eadc9acbc6c0ab643f6374eeeeb0fa not found: ID does not exist" containerID="2a719a4a06741628fd300de3ca934c5785eadc9acbc6c0ab643f6374eeeeb0fa" Feb 02 21:55:21 crc kubenswrapper[4789]: I0202 21:55:21.475015 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a719a4a06741628fd300de3ca934c5785eadc9acbc6c0ab643f6374eeeeb0fa"} err="failed to get container status \"2a719a4a06741628fd300de3ca934c5785eadc9acbc6c0ab643f6374eeeeb0fa\": rpc error: code = NotFound desc = could not find container \"2a719a4a06741628fd300de3ca934c5785eadc9acbc6c0ab643f6374eeeeb0fa\": container with ID starting with 2a719a4a06741628fd300de3ca934c5785eadc9acbc6c0ab643f6374eeeeb0fa not found: ID does not exist" Feb 02 21:55:21 crc kubenswrapper[4789]: I0202 21:55:21.475040 4789 scope.go:117] "RemoveContainer" containerID="4548c20ff86758cc4254f0214187b841e68607a455124d602bd70fa37f4de60f" Feb 02 21:55:21 crc kubenswrapper[4789]: E0202 21:55:21.475720 4789 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4548c20ff86758cc4254f0214187b841e68607a455124d602bd70fa37f4de60f\": container with ID starting with 4548c20ff86758cc4254f0214187b841e68607a455124d602bd70fa37f4de60f not found: ID does not exist" containerID="4548c20ff86758cc4254f0214187b841e68607a455124d602bd70fa37f4de60f" Feb 02 21:55:21 crc kubenswrapper[4789]: I0202 21:55:21.475768 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4548c20ff86758cc4254f0214187b841e68607a455124d602bd70fa37f4de60f"} err="failed to get container status \"4548c20ff86758cc4254f0214187b841e68607a455124d602bd70fa37f4de60f\": rpc error: code = NotFound desc = could not find container \"4548c20ff86758cc4254f0214187b841e68607a455124d602bd70fa37f4de60f\": container with ID starting with 4548c20ff86758cc4254f0214187b841e68607a455124d602bd70fa37f4de60f not found: ID does not exist" Feb 02 21:55:22 crc kubenswrapper[4789]: I0202 21:55:22.436717 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dab3f503-6303-4f47-8389-b0dd0d09d60c" path="/var/lib/kubelet/pods/dab3f503-6303-4f47-8389-b0dd0d09d60c/volumes" Feb 02 21:55:22 crc kubenswrapper[4789]: I0202 21:55:22.842196 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 21:55:22 crc kubenswrapper[4789]: I0202 21:55:22.842298 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 21:55:43 crc kubenswrapper[4789]: I0202 21:55:43.326022 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jr2tl"] Feb 02 21:55:43 crc kubenswrapper[4789]: E0202 21:55:43.327327 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dab3f503-6303-4f47-8389-b0dd0d09d60c" containerName="extract-utilities" Feb 02 21:55:43 crc kubenswrapper[4789]: I0202 21:55:43.327356 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="dab3f503-6303-4f47-8389-b0dd0d09d60c" containerName="extract-utilities" Feb 02 21:55:43 crc kubenswrapper[4789]: E0202 21:55:43.327392 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dab3f503-6303-4f47-8389-b0dd0d09d60c" containerName="extract-content" Feb 02 21:55:43 crc kubenswrapper[4789]: I0202 21:55:43.327407 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="dab3f503-6303-4f47-8389-b0dd0d09d60c" containerName="extract-content" Feb 02 21:55:43 crc kubenswrapper[4789]: E0202 21:55:43.327443 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dab3f503-6303-4f47-8389-b0dd0d09d60c" containerName="registry-server" Feb 02 21:55:43 crc kubenswrapper[4789]: I0202 21:55:43.327458 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="dab3f503-6303-4f47-8389-b0dd0d09d60c" containerName="registry-server" Feb 02 21:55:43 crc kubenswrapper[4789]: I0202 21:55:43.327844 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="dab3f503-6303-4f47-8389-b0dd0d09d60c" containerName="registry-server" Feb 02 21:55:43 crc kubenswrapper[4789]: I0202 
21:55:43.330168 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jr2tl" Feb 02 21:55:43 crc kubenswrapper[4789]: I0202 21:55:43.351509 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jr2tl"] Feb 02 21:55:43 crc kubenswrapper[4789]: I0202 21:55:43.416121 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ecb26eb-445f-4236-b455-967a32b5b073-catalog-content\") pod \"certified-operators-jr2tl\" (UID: \"2ecb26eb-445f-4236-b455-967a32b5b073\") " pod="openshift-marketplace/certified-operators-jr2tl" Feb 02 21:55:43 crc kubenswrapper[4789]: I0202 21:55:43.416171 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njqhn\" (UniqueName: \"kubernetes.io/projected/2ecb26eb-445f-4236-b455-967a32b5b073-kube-api-access-njqhn\") pod \"certified-operators-jr2tl\" (UID: \"2ecb26eb-445f-4236-b455-967a32b5b073\") " pod="openshift-marketplace/certified-operators-jr2tl" Feb 02 21:55:43 crc kubenswrapper[4789]: I0202 21:55:43.416281 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ecb26eb-445f-4236-b455-967a32b5b073-utilities\") pod \"certified-operators-jr2tl\" (UID: \"2ecb26eb-445f-4236-b455-967a32b5b073\") " pod="openshift-marketplace/certified-operators-jr2tl" Feb 02 21:55:43 crc kubenswrapper[4789]: I0202 21:55:43.517257 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ecb26eb-445f-4236-b455-967a32b5b073-utilities\") pod \"certified-operators-jr2tl\" (UID: \"2ecb26eb-445f-4236-b455-967a32b5b073\") " pod="openshift-marketplace/certified-operators-jr2tl" Feb 02 21:55:43 crc kubenswrapper[4789]: I0202 21:55:43.517350 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ecb26eb-445f-4236-b455-967a32b5b073-catalog-content\") pod \"certified-operators-jr2tl\" (UID: \"2ecb26eb-445f-4236-b455-967a32b5b073\") " pod="openshift-marketplace/certified-operators-jr2tl" Feb 02 21:55:43 crc kubenswrapper[4789]: I0202 21:55:43.517405 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njqhn\" (UniqueName: \"kubernetes.io/projected/2ecb26eb-445f-4236-b455-967a32b5b073-kube-api-access-njqhn\") pod \"certified-operators-jr2tl\" (UID: \"2ecb26eb-445f-4236-b455-967a32b5b073\") " pod="openshift-marketplace/certified-operators-jr2tl" Feb 02 21:55:43 crc kubenswrapper[4789]: I0202 21:55:43.518124 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ecb26eb-445f-4236-b455-967a32b5b073-utilities\") pod \"certified-operators-jr2tl\" (UID: \"2ecb26eb-445f-4236-b455-967a32b5b073\") " pod="openshift-marketplace/certified-operators-jr2tl" Feb 02 21:55:43 crc kubenswrapper[4789]: I0202 21:55:43.518177 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ecb26eb-445f-4236-b455-967a32b5b073-catalog-content\") pod \"certified-operators-jr2tl\" (UID: \"2ecb26eb-445f-4236-b455-967a32b5b073\") " pod="openshift-marketplace/certified-operators-jr2tl" Feb 02 21:55:43 crc 
kubenswrapper[4789]: I0202 21:55:43.555420 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njqhn\" (UniqueName: \"kubernetes.io/projected/2ecb26eb-445f-4236-b455-967a32b5b073-kube-api-access-njqhn\") pod \"certified-operators-jr2tl\" (UID: \"2ecb26eb-445f-4236-b455-967a32b5b073\") " pod="openshift-marketplace/certified-operators-jr2tl" Feb 02 21:55:43 crc kubenswrapper[4789]: I0202 21:55:43.660338 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jr2tl" Feb 02 21:55:44 crc kubenswrapper[4789]: I0202 21:55:44.182093 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jr2tl"] Feb 02 21:55:44 crc kubenswrapper[4789]: I0202 21:55:44.578011 4789 generic.go:334] "Generic (PLEG): container finished" podID="2ecb26eb-445f-4236-b455-967a32b5b073" containerID="81230c6c759f0681ee18b4ec676ab65d1e97843abfe1b6335f2cc87cab09ee7d" exitCode=0 Feb 02 21:55:44 crc kubenswrapper[4789]: I0202 21:55:44.578071 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jr2tl" event={"ID":"2ecb26eb-445f-4236-b455-967a32b5b073","Type":"ContainerDied","Data":"81230c6c759f0681ee18b4ec676ab65d1e97843abfe1b6335f2cc87cab09ee7d"} Feb 02 21:55:44 crc kubenswrapper[4789]: I0202 21:55:44.578137 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jr2tl" event={"ID":"2ecb26eb-445f-4236-b455-967a32b5b073","Type":"ContainerStarted","Data":"0f859ff4a416048a95ccb0fbf7bd693edde85aa87f0e9c32ebccd81afe8fc088"} Feb 02 21:55:44 crc kubenswrapper[4789]: I0202 21:55:44.580021 4789 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 21:55:45 crc kubenswrapper[4789]: I0202 21:55:45.586673 4789 generic.go:334] "Generic (PLEG): container finished" podID="2ecb26eb-445f-4236-b455-967a32b5b073" containerID="d1cce641c5086ee3b2513c8c3d84d755737bf1d91ded9f4f8d0268b9682e350c" exitCode=0 Feb 02 21:55:45 crc kubenswrapper[4789]: I0202 21:55:45.586769 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jr2tl" event={"ID":"2ecb26eb-445f-4236-b455-967a32b5b073","Type":"ContainerDied","Data":"d1cce641c5086ee3b2513c8c3d84d755737bf1d91ded9f4f8d0268b9682e350c"} Feb 02 21:55:46 crc kubenswrapper[4789]: I0202 21:55:46.596158 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jr2tl" event={"ID":"2ecb26eb-445f-4236-b455-967a32b5b073","Type":"ContainerStarted","Data":"b887e5a65159a903669d04a333afd1fc7eed929f535644332b8ed80ac5711d86"} Feb 02 21:55:46 crc kubenswrapper[4789]: I0202 21:55:46.620884 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jr2tl" podStartSLOduration=2.174820413 podStartE2EDuration="3.62086911s" podCreationTimestamp="2026-02-02 21:55:43 +0000 UTC" firstStartedPulling="2026-02-02 21:55:44.579680315 +0000 UTC m=+2164.874705334" lastFinishedPulling="2026-02-02 21:55:46.025729012 +0000 UTC m=+2166.320754031" observedRunningTime="2026-02-02 21:55:46.61803589 +0000 UTC m=+2166.913060899" watchObservedRunningTime="2026-02-02 21:55:46.62086911 +0000 UTC m=+2166.915894129" Feb 02 21:55:52 crc kubenswrapper[4789]: I0202 21:55:52.841530 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: 
Feb 02 21:55:52 crc kubenswrapper[4789]: I0202 21:55:52.842188 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 21:55:53 crc kubenswrapper[4789]: I0202 21:55:53.660893 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jr2tl"
Feb 02 21:55:53 crc kubenswrapper[4789]: I0202 21:55:53.660939 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jr2tl"
Feb 02 21:55:53 crc kubenswrapper[4789]: I0202 21:55:53.739067 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jr2tl"
Feb 02 21:55:54 crc kubenswrapper[4789]: I0202 21:55:54.716923 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jr2tl"
Feb 02 21:55:54 crc kubenswrapper[4789]: I0202 21:55:54.765799 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jr2tl"]
Feb 02 21:55:56 crc kubenswrapper[4789]: I0202 21:55:56.680706 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jr2tl" podUID="2ecb26eb-445f-4236-b455-967a32b5b073" containerName="registry-server" containerID="cri-o://b887e5a65159a903669d04a333afd1fc7eed929f535644332b8ed80ac5711d86" gracePeriod=2
Feb 02 21:55:57 crc kubenswrapper[4789]: I0202 21:55:57.209168 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jr2tl"
Feb 02 21:55:57 crc kubenswrapper[4789]: I0202 21:55:57.261787 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njqhn\" (UniqueName: \"kubernetes.io/projected/2ecb26eb-445f-4236-b455-967a32b5b073-kube-api-access-njqhn\") pod \"2ecb26eb-445f-4236-b455-967a32b5b073\" (UID: \"2ecb26eb-445f-4236-b455-967a32b5b073\") "
Feb 02 21:55:57 crc kubenswrapper[4789]: I0202 21:55:57.261869 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ecb26eb-445f-4236-b455-967a32b5b073-catalog-content\") pod \"2ecb26eb-445f-4236-b455-967a32b5b073\" (UID: \"2ecb26eb-445f-4236-b455-967a32b5b073\") "
Feb 02 21:55:57 crc kubenswrapper[4789]: I0202 21:55:57.261984 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ecb26eb-445f-4236-b455-967a32b5b073-utilities\") pod \"2ecb26eb-445f-4236-b455-967a32b5b073\" (UID: \"2ecb26eb-445f-4236-b455-967a32b5b073\") "
Feb 02 21:55:57 crc kubenswrapper[4789]: I0202 21:55:57.263164 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ecb26eb-445f-4236-b455-967a32b5b073-utilities" (OuterVolumeSpecName: "utilities") pod "2ecb26eb-445f-4236-b455-967a32b5b073" (UID: "2ecb26eb-445f-4236-b455-967a32b5b073"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 21:55:57 crc kubenswrapper[4789]: I0202 21:55:57.269669 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ecb26eb-445f-4236-b455-967a32b5b073-kube-api-access-njqhn" (OuterVolumeSpecName: "kube-api-access-njqhn") pod "2ecb26eb-445f-4236-b455-967a32b5b073" (UID: "2ecb26eb-445f-4236-b455-967a32b5b073"). InnerVolumeSpecName "kube-api-access-njqhn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 21:55:57 crc kubenswrapper[4789]: I0202 21:55:57.318332 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ecb26eb-445f-4236-b455-967a32b5b073-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ecb26eb-445f-4236-b455-967a32b5b073" (UID: "2ecb26eb-445f-4236-b455-967a32b5b073"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 21:55:57 crc kubenswrapper[4789]: I0202 21:55:57.363426 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ecb26eb-445f-4236-b455-967a32b5b073-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 21:55:57 crc kubenswrapper[4789]: I0202 21:55:57.363459 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njqhn\" (UniqueName: \"kubernetes.io/projected/2ecb26eb-445f-4236-b455-967a32b5b073-kube-api-access-njqhn\") on node \"crc\" DevicePath \"\""
Feb 02 21:55:57 crc kubenswrapper[4789]: I0202 21:55:57.363474 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ecb26eb-445f-4236-b455-967a32b5b073-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 21:55:57 crc kubenswrapper[4789]: I0202 21:55:57.692933 4789 generic.go:334] "Generic (PLEG): container finished" podID="2ecb26eb-445f-4236-b455-967a32b5b073" containerID="b887e5a65159a903669d04a333afd1fc7eed929f535644332b8ed80ac5711d86" exitCode=0
Feb 02 21:55:57 crc kubenswrapper[4789]: I0202 21:55:57.692999 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jr2tl" event={"ID":"2ecb26eb-445f-4236-b455-967a32b5b073","Type":"ContainerDied","Data":"b887e5a65159a903669d04a333afd1fc7eed929f535644332b8ed80ac5711d86"}
Feb 02 21:55:57 crc kubenswrapper[4789]: I0202 21:55:57.693031 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jr2tl"
Feb 02 21:55:57 crc kubenswrapper[4789]: I0202 21:55:57.693049 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jr2tl" event={"ID":"2ecb26eb-445f-4236-b455-967a32b5b073","Type":"ContainerDied","Data":"0f859ff4a416048a95ccb0fbf7bd693edde85aa87f0e9c32ebccd81afe8fc088"}
Feb 02 21:55:57 crc kubenswrapper[4789]: I0202 21:55:57.693080 4789 scope.go:117] "RemoveContainer" containerID="b887e5a65159a903669d04a333afd1fc7eed929f535644332b8ed80ac5711d86"
Feb 02 21:55:57 crc kubenswrapper[4789]: I0202 21:55:57.719257 4789 scope.go:117] "RemoveContainer" containerID="d1cce641c5086ee3b2513c8c3d84d755737bf1d91ded9f4f8d0268b9682e350c"
Feb 02 21:55:57 crc kubenswrapper[4789]: I0202 21:55:57.757665 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jr2tl"]
Feb 02 21:55:57 crc kubenswrapper[4789]: I0202 21:55:57.768521 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jr2tl"]
Feb 02 21:55:57 crc kubenswrapper[4789]: I0202 21:55:57.781379 4789 scope.go:117] "RemoveContainer" containerID="81230c6c759f0681ee18b4ec676ab65d1e97843abfe1b6335f2cc87cab09ee7d"
Feb 02 21:55:57 crc kubenswrapper[4789]: I0202 21:55:57.811533 4789 scope.go:117] "RemoveContainer" containerID="b887e5a65159a903669d04a333afd1fc7eed929f535644332b8ed80ac5711d86"
Feb 02 21:55:57 crc kubenswrapper[4789]: E0202 21:55:57.812114 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b887e5a65159a903669d04a333afd1fc7eed929f535644332b8ed80ac5711d86\": container with ID starting with b887e5a65159a903669d04a333afd1fc7eed929f535644332b8ed80ac5711d86 not found: ID does not exist" containerID="b887e5a65159a903669d04a333afd1fc7eed929f535644332b8ed80ac5711d86"
Feb 02 21:55:57 crc kubenswrapper[4789]: I0202 21:55:57.812184 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b887e5a65159a903669d04a333afd1fc7eed929f535644332b8ed80ac5711d86"} err="failed to get container status \"b887e5a65159a903669d04a333afd1fc7eed929f535644332b8ed80ac5711d86\": rpc error: code = NotFound desc = could not find container \"b887e5a65159a903669d04a333afd1fc7eed929f535644332b8ed80ac5711d86\": container with ID starting with b887e5a65159a903669d04a333afd1fc7eed929f535644332b8ed80ac5711d86 not found: ID does not exist"
Feb 02 21:55:57 crc kubenswrapper[4789]: I0202 21:55:57.812229 4789 scope.go:117] "RemoveContainer" containerID="d1cce641c5086ee3b2513c8c3d84d755737bf1d91ded9f4f8d0268b9682e350c"
Feb 02 21:55:57 crc kubenswrapper[4789]: E0202 21:55:57.812832 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1cce641c5086ee3b2513c8c3d84d755737bf1d91ded9f4f8d0268b9682e350c\": container with ID starting with d1cce641c5086ee3b2513c8c3d84d755737bf1d91ded9f4f8d0268b9682e350c not found: ID does not exist" containerID="d1cce641c5086ee3b2513c8c3d84d755737bf1d91ded9f4f8d0268b9682e350c"
Feb 02 21:55:57 crc kubenswrapper[4789]: I0202 21:55:57.812905 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1cce641c5086ee3b2513c8c3d84d755737bf1d91ded9f4f8d0268b9682e350c"} err="failed to get container status \"d1cce641c5086ee3b2513c8c3d84d755737bf1d91ded9f4f8d0268b9682e350c\": rpc error: code = NotFound desc = could not find container \"d1cce641c5086ee3b2513c8c3d84d755737bf1d91ded9f4f8d0268b9682e350c\": container with ID starting with d1cce641c5086ee3b2513c8c3d84d755737bf1d91ded9f4f8d0268b9682e350c not found: ID does not exist"
Feb 02 21:55:57 crc kubenswrapper[4789]: I0202 21:55:57.812960 4789 scope.go:117] "RemoveContainer" containerID="81230c6c759f0681ee18b4ec676ab65d1e97843abfe1b6335f2cc87cab09ee7d"
Feb 02 21:55:57 crc kubenswrapper[4789]: E0202 21:55:57.813472 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81230c6c759f0681ee18b4ec676ab65d1e97843abfe1b6335f2cc87cab09ee7d\": container with ID starting with 81230c6c759f0681ee18b4ec676ab65d1e97843abfe1b6335f2cc87cab09ee7d not found: ID does not exist" containerID="81230c6c759f0681ee18b4ec676ab65d1e97843abfe1b6335f2cc87cab09ee7d"
Feb 02 21:55:57 crc kubenswrapper[4789]: I0202 21:55:57.813526 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81230c6c759f0681ee18b4ec676ab65d1e97843abfe1b6335f2cc87cab09ee7d"} err="failed to get container status \"81230c6c759f0681ee18b4ec676ab65d1e97843abfe1b6335f2cc87cab09ee7d\": rpc error: code = NotFound desc = could not find container \"81230c6c759f0681ee18b4ec676ab65d1e97843abfe1b6335f2cc87cab09ee7d\": container with ID starting with 81230c6c759f0681ee18b4ec676ab65d1e97843abfe1b6335f2cc87cab09ee7d not found: ID does not exist"
Feb 02 21:55:58 crc kubenswrapper[4789]: I0202 21:55:58.436019 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ecb26eb-445f-4236-b455-967a32b5b073" path="/var/lib/kubelet/pods/2ecb26eb-445f-4236-b455-967a32b5b073/volumes"
Feb 02 21:56:22 crc kubenswrapper[4789]: I0202 21:56:22.841860 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 21:56:22 crc kubenswrapper[4789]: I0202 21:56:22.842433 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 21:56:22 crc kubenswrapper[4789]: I0202 21:56:22.842497 4789 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn"
Feb 02 21:56:22 crc kubenswrapper[4789]: I0202 21:56:22.843420 4789 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7b61f591fd4d0e3f42e5bba6786e3823208a066fad37c1d40c104df25eafefc3"} pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 21:56:22 crc kubenswrapper[4789]: I0202 21:56:22.843512 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" containerID="cri-o://7b61f591fd4d0e3f42e5bba6786e3823208a066fad37c1d40c104df25eafefc3" gracePeriod=600
Feb 02 21:56:23 crc kubenswrapper[4789]: I0202 21:56:23.940986 4789 generic.go:334] "Generic (PLEG): container finished" podID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerID="7b61f591fd4d0e3f42e5bba6786e3823208a066fad37c1d40c104df25eafefc3" exitCode=0
Feb 02 21:56:23 crc kubenswrapper[4789]: I0202 21:56:23.941052 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" event={"ID":"bdf018b4-1451-4d37-be6e-05802b67c73e","Type":"ContainerDied","Data":"7b61f591fd4d0e3f42e5bba6786e3823208a066fad37c1d40c104df25eafefc3"}
Feb 02 21:56:23 crc kubenswrapper[4789]: I0202 21:56:23.941424 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" event={"ID":"bdf018b4-1451-4d37-be6e-05802b67c73e","Type":"ContainerStarted","Data":"073e0816e5465e56b89054d46f25c31acee8f07d96c3df073123dd7a7f51be95"}
Feb 02 21:56:23 crc kubenswrapper[4789]: I0202 21:56:23.941464 4789 scope.go:117] "RemoveContainer" containerID="de6aa73a1267c130655031ef3c4dd7e6bb34d4ce8d75bf21c205ad83e1223e18"
Feb 02 21:58:52 crc kubenswrapper[4789]: I0202 21:58:52.841253 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 21:58:52 crc kubenswrapper[4789]: I0202 21:58:52.843856 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 21:59:22 crc kubenswrapper[4789]: I0202 21:59:22.842283 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 21:59:22 crc kubenswrapper[4789]: I0202 21:59:22.843140 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 21:59:52 crc kubenswrapper[4789]: I0202 21:59:52.841676 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 21:59:52 crc kubenswrapper[4789]: I0202 21:59:52.842279 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 21:59:52 crc kubenswrapper[4789]: I0202 21:59:52.842343 4789 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn"
Feb 02 21:59:52 crc kubenswrapper[4789]: I0202 21:59:52.843167 4789 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"073e0816e5465e56b89054d46f25c31acee8f07d96c3df073123dd7a7f51be95"} pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 21:59:52 crc kubenswrapper[4789]: I0202 21:59:52.843269 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" containerID="cri-o://073e0816e5465e56b89054d46f25c31acee8f07d96c3df073123dd7a7f51be95" gracePeriod=600
Feb 02 21:59:52 crc kubenswrapper[4789]: E0202 21:59:52.982246 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e"
Feb 02 21:59:53 crc kubenswrapper[4789]: I0202 21:59:53.631311 4789 generic.go:334] "Generic (PLEG): container finished" podID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerID="073e0816e5465e56b89054d46f25c31acee8f07d96c3df073123dd7a7f51be95" exitCode=0
Feb 02 21:59:53 crc kubenswrapper[4789]: I0202 21:59:53.631404 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" event={"ID":"bdf018b4-1451-4d37-be6e-05802b67c73e","Type":"ContainerDied","Data":"073e0816e5465e56b89054d46f25c31acee8f07d96c3df073123dd7a7f51be95"}
Feb 02 21:59:53 crc kubenswrapper[4789]: I0202 21:59:53.632094 4789 scope.go:117] "RemoveContainer" containerID="7b61f591fd4d0e3f42e5bba6786e3823208a066fad37c1d40c104df25eafefc3"
Feb 02 21:59:53 crc kubenswrapper[4789]: I0202 21:59:53.632971 4789 scope.go:117] "RemoveContainer" containerID="073e0816e5465e56b89054d46f25c31acee8f07d96c3df073123dd7a7f51be95"
Feb 02 21:59:53 crc kubenswrapper[4789]: E0202 21:59:53.633459 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e"
Feb 02 22:00:00 crc kubenswrapper[4789]: I0202 22:00:00.174676 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501160-5s9sq"]
Feb 02 22:00:00 crc kubenswrapper[4789]: E0202 22:00:00.175570 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ecb26eb-445f-4236-b455-967a32b5b073" containerName="extract-content"
Feb 02 22:00:00 crc kubenswrapper[4789]: I0202 22:00:00.175636 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ecb26eb-445f-4236-b455-967a32b5b073" containerName="extract-content"
Feb 02 22:00:00 crc kubenswrapper[4789]: E0202 22:00:00.175732 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ecb26eb-445f-4236-b455-967a32b5b073" containerName="extract-utilities"
Feb 02 22:00:00 crc kubenswrapper[4789]: I0202 22:00:00.175751 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ecb26eb-445f-4236-b455-967a32b5b073" containerName="extract-utilities"
Feb 02 22:00:00 crc kubenswrapper[4789]: E0202 22:00:00.175777 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ecb26eb-445f-4236-b455-967a32b5b073" containerName="registry-server"
Feb 02 22:00:00 crc kubenswrapper[4789]: I0202 22:00:00.175793 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ecb26eb-445f-4236-b455-967a32b5b073" containerName="registry-server"
Feb 02 22:00:00 crc kubenswrapper[4789]: I0202 22:00:00.176108 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ecb26eb-445f-4236-b455-967a32b5b073" containerName="registry-server"
Feb 02 22:00:00 crc kubenswrapper[4789]: I0202 22:00:00.177181 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501160-5s9sq"
Feb 02 22:00:00 crc kubenswrapper[4789]: I0202 22:00:00.181938 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 02 22:00:00 crc kubenswrapper[4789]: I0202 22:00:00.184168 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 02 22:00:00 crc kubenswrapper[4789]: I0202 22:00:00.190249 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501160-5s9sq"]
Feb 02 22:00:00 crc kubenswrapper[4789]: I0202 22:00:00.283982 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069-config-volume\") pod \"collect-profiles-29501160-5s9sq\" (UID: \"a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501160-5s9sq"
Feb 02 22:00:00 crc kubenswrapper[4789]: I0202 22:00:00.284095 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069-secret-volume\") pod \"collect-profiles-29501160-5s9sq\" (UID: \"a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501160-5s9sq"
Feb 02 22:00:00 crc kubenswrapper[4789]: I0202 22:00:00.284285 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wdp6\" (UniqueName: \"kubernetes.io/projected/a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069-kube-api-access-2wdp6\") pod \"collect-profiles-29501160-5s9sq\" (UID: \"a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501160-5s9sq"
Feb 02 22:00:00 crc kubenswrapper[4789]: I0202 22:00:00.385427 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wdp6\" (UniqueName: \"kubernetes.io/projected/a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069-kube-api-access-2wdp6\") pod \"collect-profiles-29501160-5s9sq\" (UID: \"a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501160-5s9sq"
Feb 02 22:00:00 crc kubenswrapper[4789]: I0202 22:00:00.385553 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069-config-volume\") pod \"collect-profiles-29501160-5s9sq\" (UID: \"a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501160-5s9sq"
Feb 02 22:00:00 crc kubenswrapper[4789]: I0202 22:00:00.385663 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069-secret-volume\") pod \"collect-profiles-29501160-5s9sq\" (UID: \"a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501160-5s9sq"
Feb 02 22:00:00 crc kubenswrapper[4789]: I0202 22:00:00.387792 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069-config-volume\") pod \"collect-profiles-29501160-5s9sq\" (UID: \"a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501160-5s9sq"
Feb 02 22:00:00 crc kubenswrapper[4789]: I0202 22:00:00.392030 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069-secret-volume\") pod \"collect-profiles-29501160-5s9sq\" (UID: \"a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501160-5s9sq"
Feb 02 22:00:00 crc kubenswrapper[4789]: I0202 22:00:00.407120 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wdp6\" (UniqueName: \"kubernetes.io/projected/a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069-kube-api-access-2wdp6\") pod \"collect-profiles-29501160-5s9sq\" (UID: \"a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501160-5s9sq"
Feb 02 22:00:00 crc kubenswrapper[4789]: I0202 22:00:00.516008 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501160-5s9sq"
Feb 02 22:00:01 crc kubenswrapper[4789]: I0202 22:00:01.055786 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501160-5s9sq"]
Feb 02 22:00:01 crc kubenswrapper[4789]: W0202 22:00:01.062891 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1bb7e7b_5bfd_49b8_8c42_e2a0d0d5a069.slice/crio-4ea88782f896dccc06d1ca93fa8fd945ea4f1e819b3a4ab0584f8fcc6f8ecef5 WatchSource:0}: Error finding container 4ea88782f896dccc06d1ca93fa8fd945ea4f1e819b3a4ab0584f8fcc6f8ecef5: Status 404 returned error can't find the container with id 4ea88782f896dccc06d1ca93fa8fd945ea4f1e819b3a4ab0584f8fcc6f8ecef5
Feb 02 22:00:01 crc kubenswrapper[4789]: I0202 22:00:01.721473 4789 generic.go:334] "Generic (PLEG): container finished" podID="a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069" containerID="d0c70e18c8fe13219cea8a79460607b7ea51e6640a89448d9df0fe5e6f3bc98d" exitCode=0
Feb 02 22:00:01 crc kubenswrapper[4789]: I0202 22:00:01.721711 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501160-5s9sq" event={"ID":"a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069","Type":"ContainerDied","Data":"d0c70e18c8fe13219cea8a79460607b7ea51e6640a89448d9df0fe5e6f3bc98d"}
Feb 02 22:00:01 crc kubenswrapper[4789]: I0202 22:00:01.721792 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501160-5s9sq" event={"ID":"a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069","Type":"ContainerStarted","Data":"4ea88782f896dccc06d1ca93fa8fd945ea4f1e819b3a4ab0584f8fcc6f8ecef5"}
Feb 02 22:00:03 crc kubenswrapper[4789]: I0202 22:00:03.024515 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501160-5s9sq"
Feb 02 22:00:03 crc kubenswrapper[4789]: I0202 22:00:03.149225 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069-secret-volume\") pod \"a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069\" (UID: \"a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069\") "
Feb 02 22:00:03 crc kubenswrapper[4789]: I0202 22:00:03.149353 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069-config-volume\") pod \"a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069\" (UID: \"a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069\") "
Feb 02 22:00:03 crc kubenswrapper[4789]: I0202 22:00:03.149402 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wdp6\" (UniqueName: \"kubernetes.io/projected/a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069-kube-api-access-2wdp6\") pod \"a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069\" (UID: \"a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069\") "
Feb 02 22:00:03 crc kubenswrapper[4789]: I0202 22:00:03.150804 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069-config-volume" (OuterVolumeSpecName: "config-volume") pod "a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069" (UID: "a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 22:00:03 crc kubenswrapper[4789]: I0202 22:00:03.155777 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069-kube-api-access-2wdp6" (OuterVolumeSpecName: "kube-api-access-2wdp6") pod "a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069" (UID: "a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069"). InnerVolumeSpecName "kube-api-access-2wdp6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 22:00:03 crc kubenswrapper[4789]: I0202 22:00:03.156272 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069" (UID: "a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 22:00:03 crc kubenswrapper[4789]: I0202 22:00:03.251479 4789 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069-config-volume\") on node \"crc\" DevicePath \"\""
Feb 02 22:00:03 crc kubenswrapper[4789]: I0202 22:00:03.251518 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wdp6\" (UniqueName: \"kubernetes.io/projected/a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069-kube-api-access-2wdp6\") on node \"crc\" DevicePath \"\""
Feb 02 22:00:03 crc kubenswrapper[4789]: I0202 22:00:03.251534 4789 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 02 22:00:03 crc kubenswrapper[4789]: I0202 22:00:03.737024 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501160-5s9sq" event={"ID":"a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069","Type":"ContainerDied","Data":"4ea88782f896dccc06d1ca93fa8fd945ea4f1e819b3a4ab0584f8fcc6f8ecef5"}
Feb 02 22:00:03 crc kubenswrapper[4789]: I0202 22:00:03.737066 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ea88782f896dccc06d1ca93fa8fd945ea4f1e819b3a4ab0584f8fcc6f8ecef5"
Feb 02 22:00:03 crc kubenswrapper[4789]: I0202 22:00:03.737088 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501160-5s9sq"
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501160-5s9sq" Feb 02 22:00:04 crc kubenswrapper[4789]: I0202 22:00:04.120545 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501115-zcgzz"] Feb 02 22:00:04 crc kubenswrapper[4789]: I0202 22:00:04.127332 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501115-zcgzz"] Feb 02 22:00:04 crc kubenswrapper[4789]: I0202 22:00:04.438364 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ee2bc38-213d-4181-8e23-0f579b87c986" path="/var/lib/kubelet/pods/9ee2bc38-213d-4181-8e23-0f579b87c986/volumes" Feb 02 22:00:09 crc kubenswrapper[4789]: I0202 22:00:09.420088 4789 scope.go:117] "RemoveContainer" containerID="073e0816e5465e56b89054d46f25c31acee8f07d96c3df073123dd7a7f51be95" Feb 02 22:00:09 crc kubenswrapper[4789]: E0202 22:00:09.421057 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:00:22 crc kubenswrapper[4789]: I0202 22:00:22.420484 4789 scope.go:117] "RemoveContainer" containerID="073e0816e5465e56b89054d46f25c31acee8f07d96c3df073123dd7a7f51be95" Feb 02 22:00:22 crc kubenswrapper[4789]: E0202 22:00:22.421525 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:00:37 crc kubenswrapper[4789]: I0202 22:00:37.419135 4789 scope.go:117] "RemoveContainer" containerID="073e0816e5465e56b89054d46f25c31acee8f07d96c3df073123dd7a7f51be95" Feb 02 22:00:37 crc kubenswrapper[4789]: E0202 22:00:37.419802 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:00:45 crc kubenswrapper[4789]: I0202 22:00:45.790833 4789 scope.go:117] "RemoveContainer" containerID="6d89acfaef1b3506730c39c3731b4a07faef3b7d4dd42d0a85038266e3378c3e" Feb 02 22:00:52 crc kubenswrapper[4789]: I0202 22:00:52.420114 4789 scope.go:117] "RemoveContainer" containerID="073e0816e5465e56b89054d46f25c31acee8f07d96c3df073123dd7a7f51be95" Feb 02 22:00:52 crc kubenswrapper[4789]: E0202 22:00:52.421410 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:01:03 crc kubenswrapper[4789]: I0202 22:01:03.420515 4789 scope.go:117] "RemoveContainer" containerID="073e0816e5465e56b89054d46f25c31acee8f07d96c3df073123dd7a7f51be95" Feb 02 22:01:03 crc kubenswrapper[4789]: E0202 22:01:03.421523 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:01:14 crc kubenswrapper[4789]: I0202 22:01:14.419535 4789 scope.go:117] "RemoveContainer" containerID="073e0816e5465e56b89054d46f25c31acee8f07d96c3df073123dd7a7f51be95" Feb 02 22:01:14 crc kubenswrapper[4789]: E0202 22:01:14.420712 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:01:19 crc kubenswrapper[4789]: I0202 22:01:19.989337 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qngqz"] Feb 02 22:01:19 crc kubenswrapper[4789]: E0202 22:01:19.990691 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069" containerName="collect-profiles" Feb 02 22:01:19 crc kubenswrapper[4789]: I0202 22:01:19.990726 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069" containerName="collect-profiles" Feb 02 22:01:19 crc kubenswrapper[4789]: I0202 22:01:19.995320 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069" containerName="collect-profiles" Feb 02 22:01:19 crc kubenswrapper[4789]: I0202 22:01:19.998015 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qngqz" Feb 02 22:01:20 crc kubenswrapper[4789]: I0202 22:01:20.019342 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qngqz"] Feb 02 22:01:20 crc kubenswrapper[4789]: I0202 22:01:20.090764 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c814241e-5f15-47d5-a7c2-9ba5028cb09a-utilities\") pod \"community-operators-qngqz\" (UID: \"c814241e-5f15-47d5-a7c2-9ba5028cb09a\") " pod="openshift-marketplace/community-operators-qngqz" Feb 02 22:01:20 crc kubenswrapper[4789]: I0202 22:01:20.091064 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c814241e-5f15-47d5-a7c2-9ba5028cb09a-catalog-content\") pod \"community-operators-qngqz\" (UID: \"c814241e-5f15-47d5-a7c2-9ba5028cb09a\") " pod="openshift-marketplace/community-operators-qngqz" Feb 02 22:01:20 crc kubenswrapper[4789]: I0202 22:01:20.091192 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw9tn\" (UniqueName: \"kubernetes.io/projected/c814241e-5f15-47d5-a7c2-9ba5028cb09a-kube-api-access-qw9tn\") pod \"community-operators-qngqz\" (UID: \"c814241e-5f15-47d5-a7c2-9ba5028cb09a\") " pod="openshift-marketplace/community-operators-qngqz" Feb 02 22:01:20 crc kubenswrapper[4789]: I0202 22:01:20.191820 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c814241e-5f15-47d5-a7c2-9ba5028cb09a-utilities\") pod \"community-operators-qngqz\" (UID: \"c814241e-5f15-47d5-a7c2-9ba5028cb09a\") " pod="openshift-marketplace/community-operators-qngqz" Feb 02 22:01:20 crc kubenswrapper[4789]: I0202 22:01:20.192091 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c814241e-5f15-47d5-a7c2-9ba5028cb09a-catalog-content\") pod \"community-operators-qngqz\" (UID: \"c814241e-5f15-47d5-a7c2-9ba5028cb09a\") " pod="openshift-marketplace/community-operators-qngqz" Feb 02 22:01:20 crc kubenswrapper[4789]: I0202 22:01:20.192202 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw9tn\" (UniqueName: \"kubernetes.io/projected/c814241e-5f15-47d5-a7c2-9ba5028cb09a-kube-api-access-qw9tn\") pod \"community-operators-qngqz\" (UID: \"c814241e-5f15-47d5-a7c2-9ba5028cb09a\") " pod="openshift-marketplace/community-operators-qngqz" Feb 02 22:01:20 crc kubenswrapper[4789]: I0202 22:01:20.192461 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c814241e-5f15-47d5-a7c2-9ba5028cb09a-utilities\") pod \"community-operators-qngqz\" (UID: \"c814241e-5f15-47d5-a7c2-9ba5028cb09a\") " pod="openshift-marketplace/community-operators-qngqz" Feb 02 22:01:20 crc kubenswrapper[4789]: I0202 22:01:20.192499 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c814241e-5f15-47d5-a7c2-9ba5028cb09a-catalog-content\") pod \"community-operators-qngqz\" (UID: \"c814241e-5f15-47d5-a7c2-9ba5028cb09a\") " pod="openshift-marketplace/community-operators-qngqz" Feb 02 22:01:20 crc kubenswrapper[4789]: I0202 22:01:20.220147 4789 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qw9tn\" (UniqueName: \"kubernetes.io/projected/c814241e-5f15-47d5-a7c2-9ba5028cb09a-kube-api-access-qw9tn\") pod \"community-operators-qngqz\" (UID: \"c814241e-5f15-47d5-a7c2-9ba5028cb09a\") " pod="openshift-marketplace/community-operators-qngqz" Feb 02 22:01:20 crc kubenswrapper[4789]: I0202 22:01:20.331641 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qngqz" Feb 02 22:01:20 crc kubenswrapper[4789]: I0202 22:01:20.888271 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qngqz"] Feb 02 22:01:20 crc kubenswrapper[4789]: W0202 22:01:20.897033 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc814241e_5f15_47d5_a7c2_9ba5028cb09a.slice/crio-23390ac3c6a25efb6ee55d4a384d87f69f2dd88d959dec5663a7775a02ee2be9 WatchSource:0}: Error finding container 23390ac3c6a25efb6ee55d4a384d87f69f2dd88d959dec5663a7775a02ee2be9: Status 404 returned error can't find the container with id 23390ac3c6a25efb6ee55d4a384d87f69f2dd88d959dec5663a7775a02ee2be9 Feb 02 22:01:21 crc kubenswrapper[4789]: I0202 22:01:21.465877 4789 generic.go:334] "Generic (PLEG): container finished" podID="c814241e-5f15-47d5-a7c2-9ba5028cb09a" containerID="33de367149bdd49bb0fcd41c459b815755412597755b8351ceeadb4a06d1acd7" exitCode=0 Feb 02 22:01:21 crc kubenswrapper[4789]: I0202 22:01:21.465953 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qngqz" event={"ID":"c814241e-5f15-47d5-a7c2-9ba5028cb09a","Type":"ContainerDied","Data":"33de367149bdd49bb0fcd41c459b815755412597755b8351ceeadb4a06d1acd7"} Feb 02 22:01:21 crc kubenswrapper[4789]: I0202 22:01:21.466020 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qngqz" event={"ID":"c814241e-5f15-47d5-a7c2-9ba5028cb09a","Type":"ContainerStarted","Data":"23390ac3c6a25efb6ee55d4a384d87f69f2dd88d959dec5663a7775a02ee2be9"} Feb 02 22:01:21 crc kubenswrapper[4789]: I0202 22:01:21.469302 4789 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 22:01:22 crc kubenswrapper[4789]: I0202 22:01:22.478468 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qngqz" event={"ID":"c814241e-5f15-47d5-a7c2-9ba5028cb09a","Type":"ContainerStarted","Data":"ccc02cb25b9ec11fd5dd940c17023ceb560e7ef7e09d887ed9519a93c9e8391a"} Feb 02 22:01:23 crc kubenswrapper[4789]: I0202 22:01:23.489544 4789 generic.go:334] "Generic (PLEG): container finished" podID="c814241e-5f15-47d5-a7c2-9ba5028cb09a" containerID="ccc02cb25b9ec11fd5dd940c17023ceb560e7ef7e09d887ed9519a93c9e8391a" exitCode=0 Feb 02 22:01:23 crc kubenswrapper[4789]: I0202 22:01:23.489796 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qngqz" event={"ID":"c814241e-5f15-47d5-a7c2-9ba5028cb09a","Type":"ContainerDied","Data":"ccc02cb25b9ec11fd5dd940c17023ceb560e7ef7e09d887ed9519a93c9e8391a"} Feb 02 22:01:24 crc kubenswrapper[4789]: I0202 22:01:24.499633 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qngqz" event={"ID":"c814241e-5f15-47d5-a7c2-9ba5028cb09a","Type":"ContainerStarted","Data":"7541ab29c259066e576c4ceefe7779acf67424d08ef906cbaa478c83e6a883f4"} Feb 02 22:01:24 crc kubenswrapper[4789]: I0202 
Feb 02 22:01:29 crc kubenswrapper[4789]: I0202 22:01:29.419749 4789 scope.go:117] "RemoveContainer" containerID="073e0816e5465e56b89054d46f25c31acee8f07d96c3df073123dd7a7f51be95"
Feb 02 22:01:29 crc kubenswrapper[4789]: E0202 22:01:29.421117 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e"
Feb 02 22:01:30 crc kubenswrapper[4789]: I0202 22:01:30.332695 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qngqz"
Feb 02 22:01:30 crc kubenswrapper[4789]: I0202 22:01:30.332761 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qngqz"
Feb 02 22:01:30 crc kubenswrapper[4789]: I0202 22:01:30.407262 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qngqz"
Feb 02 22:01:30 crc kubenswrapper[4789]: I0202 22:01:30.624386 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qngqz"
Feb 02 22:01:30 crc kubenswrapper[4789]: I0202 22:01:30.699171 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qngqz"]
Feb 02 22:01:32 crc kubenswrapper[4789]: I0202 22:01:32.573076 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qngqz" podUID="c814241e-5f15-47d5-a7c2-9ba5028cb09a" containerName="registry-server" containerID="cri-o://7541ab29c259066e576c4ceefe7779acf67424d08ef906cbaa478c83e6a883f4" gracePeriod=2
Feb 02 22:01:33 crc kubenswrapper[4789]: I0202 22:01:33.062314 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qngqz"
Feb 02 22:01:33 crc kubenswrapper[4789]: I0202 22:01:33.124904 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qw9tn\" (UniqueName: \"kubernetes.io/projected/c814241e-5f15-47d5-a7c2-9ba5028cb09a-kube-api-access-qw9tn\") pod \"c814241e-5f15-47d5-a7c2-9ba5028cb09a\" (UID: \"c814241e-5f15-47d5-a7c2-9ba5028cb09a\") "
Feb 02 22:01:33 crc kubenswrapper[4789]: I0202 22:01:33.125386 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c814241e-5f15-47d5-a7c2-9ba5028cb09a-utilities\") pod \"c814241e-5f15-47d5-a7c2-9ba5028cb09a\" (UID: \"c814241e-5f15-47d5-a7c2-9ba5028cb09a\") "
Feb 02 22:01:33 crc kubenswrapper[4789]: I0202 22:01:33.125436 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c814241e-5f15-47d5-a7c2-9ba5028cb09a-catalog-content\") pod \"c814241e-5f15-47d5-a7c2-9ba5028cb09a\" (UID: \"c814241e-5f15-47d5-a7c2-9ba5028cb09a\") "
Feb 02 22:01:33 crc kubenswrapper[4789]: I0202 22:01:33.127078 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c814241e-5f15-47d5-a7c2-9ba5028cb09a-utilities" (OuterVolumeSpecName: "utilities") pod "c814241e-5f15-47d5-a7c2-9ba5028cb09a" (UID: "c814241e-5f15-47d5-a7c2-9ba5028cb09a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 22:01:33 crc kubenswrapper[4789]: I0202 22:01:33.131429 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c814241e-5f15-47d5-a7c2-9ba5028cb09a-kube-api-access-qw9tn" (OuterVolumeSpecName: "kube-api-access-qw9tn") pod "c814241e-5f15-47d5-a7c2-9ba5028cb09a" (UID: "c814241e-5f15-47d5-a7c2-9ba5028cb09a"). InnerVolumeSpecName "kube-api-access-qw9tn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 22:01:33 crc kubenswrapper[4789]: I0202 22:01:33.184452 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c814241e-5f15-47d5-a7c2-9ba5028cb09a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c814241e-5f15-47d5-a7c2-9ba5028cb09a" (UID: "c814241e-5f15-47d5-a7c2-9ba5028cb09a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 22:01:33 crc kubenswrapper[4789]: I0202 22:01:33.226651 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c814241e-5f15-47d5-a7c2-9ba5028cb09a-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 22:01:33 crc kubenswrapper[4789]: I0202 22:01:33.226850 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c814241e-5f15-47d5-a7c2-9ba5028cb09a-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 22:01:33 crc kubenswrapper[4789]: I0202 22:01:33.226934 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qw9tn\" (UniqueName: \"kubernetes.io/projected/c814241e-5f15-47d5-a7c2-9ba5028cb09a-kube-api-access-qw9tn\") on node \"crc\" DevicePath \"\""
Feb 02 22:01:33 crc kubenswrapper[4789]: I0202 22:01:33.587246 4789 generic.go:334] "Generic (PLEG): container finished" podID="c814241e-5f15-47d5-a7c2-9ba5028cb09a" containerID="7541ab29c259066e576c4ceefe7779acf67424d08ef906cbaa478c83e6a883f4" exitCode=0
Feb 02 22:01:33 crc kubenswrapper[4789]: I0202 22:01:33.587306 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qngqz" event={"ID":"c814241e-5f15-47d5-a7c2-9ba5028cb09a","Type":"ContainerDied","Data":"7541ab29c259066e576c4ceefe7779acf67424d08ef906cbaa478c83e6a883f4"}
Feb 02 22:01:33 crc kubenswrapper[4789]: I0202 22:01:33.587336 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qngqz"
Feb 02 22:01:33 crc kubenswrapper[4789]: I0202 22:01:33.587362 4789 scope.go:117] "RemoveContainer" containerID="7541ab29c259066e576c4ceefe7779acf67424d08ef906cbaa478c83e6a883f4"
Feb 02 22:01:33 crc kubenswrapper[4789]: I0202 22:01:33.587345 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qngqz" event={"ID":"c814241e-5f15-47d5-a7c2-9ba5028cb09a","Type":"ContainerDied","Data":"23390ac3c6a25efb6ee55d4a384d87f69f2dd88d959dec5663a7775a02ee2be9"}
Feb 02 22:01:33 crc kubenswrapper[4789]: I0202 22:01:33.619892 4789 scope.go:117] "RemoveContainer" containerID="ccc02cb25b9ec11fd5dd940c17023ceb560e7ef7e09d887ed9519a93c9e8391a"
Feb 02 22:01:33 crc kubenswrapper[4789]: I0202 22:01:33.648242 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qngqz"]
Feb 02 22:01:33 crc kubenswrapper[4789]: I0202 22:01:33.660683 4789 scope.go:117] "RemoveContainer" containerID="33de367149bdd49bb0fcd41c459b815755412597755b8351ceeadb4a06d1acd7"
Feb 02 22:01:33 crc kubenswrapper[4789]: I0202 22:01:33.661051 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qngqz"]
Feb 02 22:01:33 crc kubenswrapper[4789]: I0202 22:01:33.697244 4789 scope.go:117] "RemoveContainer" containerID="7541ab29c259066e576c4ceefe7779acf67424d08ef906cbaa478c83e6a883f4"
Feb 02 22:01:33 crc kubenswrapper[4789]: E0202 22:01:33.697863 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7541ab29c259066e576c4ceefe7779acf67424d08ef906cbaa478c83e6a883f4\": container with ID starting with 7541ab29c259066e576c4ceefe7779acf67424d08ef906cbaa478c83e6a883f4 not found: ID does not exist" containerID="7541ab29c259066e576c4ceefe7779acf67424d08ef906cbaa478c83e6a883f4"
Feb 02 22:01:33 crc kubenswrapper[4789]: I0202 22:01:33.697919
4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7541ab29c259066e576c4ceefe7779acf67424d08ef906cbaa478c83e6a883f4"} err="failed to get container status \"7541ab29c259066e576c4ceefe7779acf67424d08ef906cbaa478c83e6a883f4\": rpc error: code = NotFound desc = could not find container \"7541ab29c259066e576c4ceefe7779acf67424d08ef906cbaa478c83e6a883f4\": container with ID starting with 7541ab29c259066e576c4ceefe7779acf67424d08ef906cbaa478c83e6a883f4 not found: ID does not exist" Feb 02 22:01:33 crc kubenswrapper[4789]: I0202 22:01:33.697954 4789 scope.go:117] "RemoveContainer" containerID="ccc02cb25b9ec11fd5dd940c17023ceb560e7ef7e09d887ed9519a93c9e8391a" Feb 02 22:01:33 crc kubenswrapper[4789]: E0202 22:01:33.698431 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccc02cb25b9ec11fd5dd940c17023ceb560e7ef7e09d887ed9519a93c9e8391a\": container with ID starting with ccc02cb25b9ec11fd5dd940c17023ceb560e7ef7e09d887ed9519a93c9e8391a not found: ID does not exist" containerID="ccc02cb25b9ec11fd5dd940c17023ceb560e7ef7e09d887ed9519a93c9e8391a" Feb 02 22:01:33 crc kubenswrapper[4789]: I0202 22:01:33.698635 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccc02cb25b9ec11fd5dd940c17023ceb560e7ef7e09d887ed9519a93c9e8391a"} err="failed to get container status \"ccc02cb25b9ec11fd5dd940c17023ceb560e7ef7e09d887ed9519a93c9e8391a\": rpc error: code = NotFound desc = could not find container \"ccc02cb25b9ec11fd5dd940c17023ceb560e7ef7e09d887ed9519a93c9e8391a\": container with ID starting with ccc02cb25b9ec11fd5dd940c17023ceb560e7ef7e09d887ed9519a93c9e8391a not found: ID does not exist" Feb 02 22:01:33 crc kubenswrapper[4789]: I0202 22:01:33.698787 4789 scope.go:117] "RemoveContainer" containerID="33de367149bdd49bb0fcd41c459b815755412597755b8351ceeadb4a06d1acd7" Feb 02 22:01:33 crc kubenswrapper[4789]: E0202 22:01:33.699357 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33de367149bdd49bb0fcd41c459b815755412597755b8351ceeadb4a06d1acd7\": container with ID starting with 33de367149bdd49bb0fcd41c459b815755412597755b8351ceeadb4a06d1acd7 not found: ID does not exist" containerID="33de367149bdd49bb0fcd41c459b815755412597755b8351ceeadb4a06d1acd7" Feb 02 22:01:33 crc kubenswrapper[4789]: I0202 22:01:33.699510 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33de367149bdd49bb0fcd41c459b815755412597755b8351ceeadb4a06d1acd7"} err="failed to get container status \"33de367149bdd49bb0fcd41c459b815755412597755b8351ceeadb4a06d1acd7\": rpc error: code = NotFound desc = could not find container \"33de367149bdd49bb0fcd41c459b815755412597755b8351ceeadb4a06d1acd7\": container with ID starting with 33de367149bdd49bb0fcd41c459b815755412597755b8351ceeadb4a06d1acd7 not found: ID does not exist" Feb 02 22:01:34 crc kubenswrapper[4789]: I0202 22:01:34.435067 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c814241e-5f15-47d5-a7c2-9ba5028cb09a" path="/var/lib/kubelet/pods/c814241e-5f15-47d5-a7c2-9ba5028cb09a/volumes" Feb 02 22:01:41 crc kubenswrapper[4789]: I0202 22:01:41.419665 4789 scope.go:117] "RemoveContainer" containerID="073e0816e5465e56b89054d46f25c31acee8f07d96c3df073123dd7a7f51be95" Feb 02 22:01:41 crc kubenswrapper[4789]: E0202 22:01:41.420749 4789 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:01:52 crc kubenswrapper[4789]: I0202 22:01:52.420477 4789 scope.go:117] "RemoveContainer" containerID="073e0816e5465e56b89054d46f25c31acee8f07d96c3df073123dd7a7f51be95" Feb 02 22:01:52 crc kubenswrapper[4789]: E0202 22:01:52.421738 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:02:04 crc kubenswrapper[4789]: I0202 22:02:04.420145 4789 scope.go:117] "RemoveContainer" containerID="073e0816e5465e56b89054d46f25c31acee8f07d96c3df073123dd7a7f51be95" Feb 02 22:02:04 crc kubenswrapper[4789]: E0202 22:02:04.421128 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:02:17 crc kubenswrapper[4789]: I0202 22:02:17.419638 4789 scope.go:117] "RemoveContainer" containerID="073e0816e5465e56b89054d46f25c31acee8f07d96c3df073123dd7a7f51be95" Feb 02 22:02:17 crc kubenswrapper[4789]: E0202 22:02:17.420695 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:02:30 crc kubenswrapper[4789]: I0202 22:02:30.424468 4789 scope.go:117] "RemoveContainer" containerID="073e0816e5465e56b89054d46f25c31acee8f07d96c3df073123dd7a7f51be95" Feb 02 22:02:30 crc kubenswrapper[4789]: E0202 22:02:30.425378 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:02:45 crc kubenswrapper[4789]: I0202 22:02:45.420522 4789 scope.go:117] "RemoveContainer" containerID="073e0816e5465e56b89054d46f25c31acee8f07d96c3df073123dd7a7f51be95" Feb 02 22:02:45 crc kubenswrapper[4789]: E0202 22:02:45.421968 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:02:57 crc kubenswrapper[4789]: I0202 22:02:57.420251 4789 scope.go:117] "RemoveContainer" containerID="073e0816e5465e56b89054d46f25c31acee8f07d96c3df073123dd7a7f51be95" Feb 02 22:02:57 crc kubenswrapper[4789]: E0202 22:02:57.423266 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:03:09 crc kubenswrapper[4789]: I0202 22:03:09.421037 4789 scope.go:117] "RemoveContainer" containerID="073e0816e5465e56b89054d46f25c31acee8f07d96c3df073123dd7a7f51be95" Feb 02 22:03:09 crc kubenswrapper[4789]: E0202 22:03:09.421978 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:03:22 crc kubenswrapper[4789]: I0202 22:03:22.419934 4789 scope.go:117] "RemoveContainer" containerID="073e0816e5465e56b89054d46f25c31acee8f07d96c3df073123dd7a7f51be95" Feb 02 22:03:22 crc kubenswrapper[4789]: E0202 22:03:22.420848 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:03:33 crc kubenswrapper[4789]: I0202 22:03:33.419552 4789 scope.go:117] "RemoveContainer" containerID="073e0816e5465e56b89054d46f25c31acee8f07d96c3df073123dd7a7f51be95" Feb 02 22:03:33 crc kubenswrapper[4789]: E0202 22:03:33.420416 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:03:44 crc kubenswrapper[4789]: I0202 22:03:44.420191 4789 scope.go:117] "RemoveContainer" containerID="073e0816e5465e56b89054d46f25c31acee8f07d96c3df073123dd7a7f51be95" Feb 02 22:03:44 crc kubenswrapper[4789]: E0202 22:03:44.421349 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" 
podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:03:55 crc kubenswrapper[4789]: I0202 22:03:55.426042 4789 scope.go:117] "RemoveContainer" containerID="073e0816e5465e56b89054d46f25c31acee8f07d96c3df073123dd7a7f51be95" Feb 02 22:03:55 crc kubenswrapper[4789]: E0202 22:03:55.427181 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:04:08 crc kubenswrapper[4789]: I0202 22:04:08.420074 4789 scope.go:117] "RemoveContainer" containerID="073e0816e5465e56b89054d46f25c31acee8f07d96c3df073123dd7a7f51be95" Feb 02 22:04:08 crc kubenswrapper[4789]: E0202 22:04:08.420703 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:04:17 crc kubenswrapper[4789]: I0202 22:04:17.391479 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wdbrb"] Feb 02 22:04:17 crc kubenswrapper[4789]: E0202 22:04:17.392181 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c814241e-5f15-47d5-a7c2-9ba5028cb09a" containerName="extract-content" Feb 02 22:04:17 crc kubenswrapper[4789]: I0202 22:04:17.392193 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="c814241e-5f15-47d5-a7c2-9ba5028cb09a" containerName="extract-content" Feb 02 22:04:17 crc kubenswrapper[4789]: E0202 22:04:17.392204 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c814241e-5f15-47d5-a7c2-9ba5028cb09a" containerName="extract-utilities" Feb 02 22:04:17 crc kubenswrapper[4789]: I0202 22:04:17.392211 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="c814241e-5f15-47d5-a7c2-9ba5028cb09a" containerName="extract-utilities" Feb 02 22:04:17 crc kubenswrapper[4789]: E0202 22:04:17.392238 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c814241e-5f15-47d5-a7c2-9ba5028cb09a" containerName="registry-server" Feb 02 22:04:17 crc kubenswrapper[4789]: I0202 22:04:17.392245 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="c814241e-5f15-47d5-a7c2-9ba5028cb09a" containerName="registry-server" Feb 02 22:04:17 crc kubenswrapper[4789]: I0202 22:04:17.392365 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="c814241e-5f15-47d5-a7c2-9ba5028cb09a" containerName="registry-server" Feb 02 22:04:17 crc kubenswrapper[4789]: I0202 22:04:17.393251 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wdbrb" Feb 02 22:04:17 crc kubenswrapper[4789]: I0202 22:04:17.422680 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wdbrb"] Feb 02 22:04:17 crc kubenswrapper[4789]: I0202 22:04:17.526030 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5w7f\" (UniqueName: \"kubernetes.io/projected/f231bfba-0946-4782-82a3-dcafe4159f50-kube-api-access-s5w7f\") pod \"redhat-operators-wdbrb\" (UID: \"f231bfba-0946-4782-82a3-dcafe4159f50\") " pod="openshift-marketplace/redhat-operators-wdbrb" Feb 02 22:04:17 crc kubenswrapper[4789]: I0202 22:04:17.526298 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f231bfba-0946-4782-82a3-dcafe4159f50-catalog-content\") pod \"redhat-operators-wdbrb\" (UID: \"f231bfba-0946-4782-82a3-dcafe4159f50\") " pod="openshift-marketplace/redhat-operators-wdbrb" Feb 02 22:04:17 crc kubenswrapper[4789]: I0202 22:04:17.526352 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f231bfba-0946-4782-82a3-dcafe4159f50-utilities\") pod \"redhat-operators-wdbrb\" (UID: \"f231bfba-0946-4782-82a3-dcafe4159f50\") " pod="openshift-marketplace/redhat-operators-wdbrb" Feb 02 22:04:17 crc kubenswrapper[4789]: I0202 22:04:17.627882 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5w7f\" (UniqueName: \"kubernetes.io/projected/f231bfba-0946-4782-82a3-dcafe4159f50-kube-api-access-s5w7f\") pod \"redhat-operators-wdbrb\" (UID: \"f231bfba-0946-4782-82a3-dcafe4159f50\") " pod="openshift-marketplace/redhat-operators-wdbrb" Feb 02 22:04:17 crc kubenswrapper[4789]: I0202 22:04:17.627985 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f231bfba-0946-4782-82a3-dcafe4159f50-catalog-content\") pod \"redhat-operators-wdbrb\" (UID: \"f231bfba-0946-4782-82a3-dcafe4159f50\") " pod="openshift-marketplace/redhat-operators-wdbrb" Feb 02 22:04:17 crc kubenswrapper[4789]: I0202 22:04:17.628010 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f231bfba-0946-4782-82a3-dcafe4159f50-utilities\") pod \"redhat-operators-wdbrb\" (UID: \"f231bfba-0946-4782-82a3-dcafe4159f50\") " pod="openshift-marketplace/redhat-operators-wdbrb" Feb 02 22:04:17 crc kubenswrapper[4789]: I0202 22:04:17.628540 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f231bfba-0946-4782-82a3-dcafe4159f50-catalog-content\") pod \"redhat-operators-wdbrb\" (UID: \"f231bfba-0946-4782-82a3-dcafe4159f50\") " pod="openshift-marketplace/redhat-operators-wdbrb" Feb 02 22:04:17 crc kubenswrapper[4789]: I0202 22:04:17.628654 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f231bfba-0946-4782-82a3-dcafe4159f50-utilities\") pod \"redhat-operators-wdbrb\" (UID: \"f231bfba-0946-4782-82a3-dcafe4159f50\") " pod="openshift-marketplace/redhat-operators-wdbrb" Feb 02 22:04:17 crc kubenswrapper[4789]: I0202 22:04:17.667855 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-s5w7f\" (UniqueName: \"kubernetes.io/projected/f231bfba-0946-4782-82a3-dcafe4159f50-kube-api-access-s5w7f\") pod \"redhat-operators-wdbrb\" (UID: \"f231bfba-0946-4782-82a3-dcafe4159f50\") " pod="openshift-marketplace/redhat-operators-wdbrb" Feb 02 22:04:17 crc kubenswrapper[4789]: I0202 22:04:17.731334 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wdbrb" Feb 02 22:04:18 crc kubenswrapper[4789]: I0202 22:04:18.003923 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wdbrb"] Feb 02 22:04:18 crc kubenswrapper[4789]: I0202 22:04:18.276097 4789 generic.go:334] "Generic (PLEG): container finished" podID="f231bfba-0946-4782-82a3-dcafe4159f50" containerID="d0090250e25e03004d6d0c247d871e49eb129989f2cd1cb68f0d6c14c4ac4954" exitCode=0 Feb 02 22:04:18 crc kubenswrapper[4789]: I0202 22:04:18.276145 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdbrb" event={"ID":"f231bfba-0946-4782-82a3-dcafe4159f50","Type":"ContainerDied","Data":"d0090250e25e03004d6d0c247d871e49eb129989f2cd1cb68f0d6c14c4ac4954"} Feb 02 22:04:18 crc kubenswrapper[4789]: I0202 22:04:18.276183 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdbrb" event={"ID":"f231bfba-0946-4782-82a3-dcafe4159f50","Type":"ContainerStarted","Data":"f7c1998a84509e94ce520c02bb7de15587c04584b9f0a66df1d3e0cd171357bb"} Feb 02 22:04:20 crc kubenswrapper[4789]: I0202 22:04:20.296139 4789 generic.go:334] "Generic (PLEG): container finished" podID="f231bfba-0946-4782-82a3-dcafe4159f50" containerID="87114dafd05afd897c89f349a0383e893628641451c9820c9b8c9fba3db93d09" exitCode=0 Feb 02 22:04:20 crc kubenswrapper[4789]: I0202 22:04:20.296274 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdbrb" event={"ID":"f231bfba-0946-4782-82a3-dcafe4159f50","Type":"ContainerDied","Data":"87114dafd05afd897c89f349a0383e893628641451c9820c9b8c9fba3db93d09"} Feb 02 22:04:21 crc kubenswrapper[4789]: I0202 22:04:21.316428 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdbrb" event={"ID":"f231bfba-0946-4782-82a3-dcafe4159f50","Type":"ContainerStarted","Data":"b974bd2bda675af8eb39ac6a9ae5ab0230f3e2dbc6c22de3a6e0418e1a088b06"} Feb 02 22:04:21 crc kubenswrapper[4789]: I0202 22:04:21.348344 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wdbrb" podStartSLOduration=1.867909611 podStartE2EDuration="4.348328748s" podCreationTimestamp="2026-02-02 22:04:17 +0000 UTC" firstStartedPulling="2026-02-02 22:04:18.277523847 +0000 UTC m=+2678.572548866" lastFinishedPulling="2026-02-02 22:04:20.757942984 +0000 UTC m=+2681.052968003" observedRunningTime="2026-02-02 22:04:21.344944173 +0000 UTC m=+2681.639969242" watchObservedRunningTime="2026-02-02 22:04:21.348328748 +0000 UTC m=+2681.643353767" Feb 02 22:04:22 crc kubenswrapper[4789]: I0202 22:04:22.422783 4789 scope.go:117] "RemoveContainer" containerID="073e0816e5465e56b89054d46f25c31acee8f07d96c3df073123dd7a7f51be95" Feb 02 22:04:22 crc kubenswrapper[4789]: E0202 22:04:22.423483 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:04:27 crc kubenswrapper[4789]: I0202 22:04:27.731615 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wdbrb" Feb 02 22:04:27 crc kubenswrapper[4789]: I0202 22:04:27.731972 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wdbrb" Feb 02 22:04:28 crc kubenswrapper[4789]: I0202 22:04:28.804199 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wdbrb" podUID="f231bfba-0946-4782-82a3-dcafe4159f50" containerName="registry-server" probeResult="failure" output=< Feb 02 22:04:28 crc kubenswrapper[4789]: timeout: failed to connect service ":50051" within 1s Feb 02 22:04:28 crc kubenswrapper[4789]: > Feb 02 22:04:36 crc kubenswrapper[4789]: I0202 22:04:36.420441 4789 scope.go:117] "RemoveContainer" containerID="073e0816e5465e56b89054d46f25c31acee8f07d96c3df073123dd7a7f51be95" Feb 02 22:04:36 crc kubenswrapper[4789]: E0202 22:04:36.421719 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:04:37 crc kubenswrapper[4789]: I0202 22:04:37.793136 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wdbrb" Feb 02 22:04:37 crc kubenswrapper[4789]: I0202 22:04:37.856500 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wdbrb" Feb 02 22:04:38 crc kubenswrapper[4789]: I0202 22:04:38.035438 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wdbrb"] Feb 02 22:04:39 crc kubenswrapper[4789]: I0202 22:04:39.487147 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wdbrb" podUID="f231bfba-0946-4782-82a3-dcafe4159f50" containerName="registry-server" containerID="cri-o://b974bd2bda675af8eb39ac6a9ae5ab0230f3e2dbc6c22de3a6e0418e1a088b06" gracePeriod=2 Feb 02 22:04:39 crc kubenswrapper[4789]: I0202 22:04:39.959476 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wdbrb" Feb 02 22:04:40 crc kubenswrapper[4789]: I0202 22:04:40.066172 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f231bfba-0946-4782-82a3-dcafe4159f50-catalog-content\") pod \"f231bfba-0946-4782-82a3-dcafe4159f50\" (UID: \"f231bfba-0946-4782-82a3-dcafe4159f50\") " Feb 02 22:04:40 crc kubenswrapper[4789]: I0202 22:04:40.066281 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f231bfba-0946-4782-82a3-dcafe4159f50-utilities\") pod \"f231bfba-0946-4782-82a3-dcafe4159f50\" (UID: \"f231bfba-0946-4782-82a3-dcafe4159f50\") " Feb 02 22:04:40 crc kubenswrapper[4789]: I0202 22:04:40.066355 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5w7f\" (UniqueName: \"kubernetes.io/projected/f231bfba-0946-4782-82a3-dcafe4159f50-kube-api-access-s5w7f\") pod \"f231bfba-0946-4782-82a3-dcafe4159f50\" (UID: \"f231bfba-0946-4782-82a3-dcafe4159f50\") " Feb 02 22:04:40 crc kubenswrapper[4789]: I0202 22:04:40.069480 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f231bfba-0946-4782-82a3-dcafe4159f50-utilities" (OuterVolumeSpecName: "utilities") pod "f231bfba-0946-4782-82a3-dcafe4159f50" (UID: "f231bfba-0946-4782-82a3-dcafe4159f50"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 22:04:40 crc kubenswrapper[4789]: I0202 22:04:40.075289 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f231bfba-0946-4782-82a3-dcafe4159f50-kube-api-access-s5w7f" (OuterVolumeSpecName: "kube-api-access-s5w7f") pod "f231bfba-0946-4782-82a3-dcafe4159f50" (UID: "f231bfba-0946-4782-82a3-dcafe4159f50"). InnerVolumeSpecName "kube-api-access-s5w7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 22:04:40 crc kubenswrapper[4789]: I0202 22:04:40.168793 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f231bfba-0946-4782-82a3-dcafe4159f50-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 22:04:40 crc kubenswrapper[4789]: I0202 22:04:40.169152 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5w7f\" (UniqueName: \"kubernetes.io/projected/f231bfba-0946-4782-82a3-dcafe4159f50-kube-api-access-s5w7f\") on node \"crc\" DevicePath \"\"" Feb 02 22:04:40 crc kubenswrapper[4789]: I0202 22:04:40.233849 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f231bfba-0946-4782-82a3-dcafe4159f50-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f231bfba-0946-4782-82a3-dcafe4159f50" (UID: "f231bfba-0946-4782-82a3-dcafe4159f50"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 22:04:40 crc kubenswrapper[4789]: I0202 22:04:40.270762 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f231bfba-0946-4782-82a3-dcafe4159f50-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 22:04:40 crc kubenswrapper[4789]: I0202 22:04:40.499110 4789 generic.go:334] "Generic (PLEG): container finished" podID="f231bfba-0946-4782-82a3-dcafe4159f50" containerID="b974bd2bda675af8eb39ac6a9ae5ab0230f3e2dbc6c22de3a6e0418e1a088b06" exitCode=0 Feb 02 22:04:40 crc kubenswrapper[4789]: I0202 22:04:40.499149 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdbrb" event={"ID":"f231bfba-0946-4782-82a3-dcafe4159f50","Type":"ContainerDied","Data":"b974bd2bda675af8eb39ac6a9ae5ab0230f3e2dbc6c22de3a6e0418e1a088b06"} Feb 02 22:04:40 crc kubenswrapper[4789]: I0202 22:04:40.499180 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdbrb" event={"ID":"f231bfba-0946-4782-82a3-dcafe4159f50","Type":"ContainerDied","Data":"f7c1998a84509e94ce520c02bb7de15587c04584b9f0a66df1d3e0cd171357bb"} Feb 02 22:04:40 crc kubenswrapper[4789]: I0202 22:04:40.499227 4789 scope.go:117] "RemoveContainer" containerID="b974bd2bda675af8eb39ac6a9ae5ab0230f3e2dbc6c22de3a6e0418e1a088b06" Feb 02 22:04:40 crc kubenswrapper[4789]: I0202 22:04:40.499254 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wdbrb" Feb 02 22:04:40 crc kubenswrapper[4789]: I0202 22:04:40.530345 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wdbrb"] Feb 02 22:04:40 crc kubenswrapper[4789]: I0202 22:04:40.536486 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wdbrb"] Feb 02 22:04:40 crc kubenswrapper[4789]: I0202 22:04:40.539692 4789 scope.go:117] "RemoveContainer" containerID="87114dafd05afd897c89f349a0383e893628641451c9820c9b8c9fba3db93d09" Feb 02 22:04:40 crc kubenswrapper[4789]: I0202 22:04:40.575557 4789 scope.go:117] "RemoveContainer" containerID="d0090250e25e03004d6d0c247d871e49eb129989f2cd1cb68f0d6c14c4ac4954" Feb 02 22:04:40 crc kubenswrapper[4789]: I0202 22:04:40.613013 4789 scope.go:117] "RemoveContainer" containerID="b974bd2bda675af8eb39ac6a9ae5ab0230f3e2dbc6c22de3a6e0418e1a088b06" Feb 02 22:04:40 crc kubenswrapper[4789]: E0202 22:04:40.613929 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b974bd2bda675af8eb39ac6a9ae5ab0230f3e2dbc6c22de3a6e0418e1a088b06\": container with ID starting with b974bd2bda675af8eb39ac6a9ae5ab0230f3e2dbc6c22de3a6e0418e1a088b06 not found: ID does not exist" containerID="b974bd2bda675af8eb39ac6a9ae5ab0230f3e2dbc6c22de3a6e0418e1a088b06" Feb 02 22:04:40 crc kubenswrapper[4789]: I0202 22:04:40.614011 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b974bd2bda675af8eb39ac6a9ae5ab0230f3e2dbc6c22de3a6e0418e1a088b06"} err="failed to get container status \"b974bd2bda675af8eb39ac6a9ae5ab0230f3e2dbc6c22de3a6e0418e1a088b06\": rpc error: code = NotFound desc = could not find container \"b974bd2bda675af8eb39ac6a9ae5ab0230f3e2dbc6c22de3a6e0418e1a088b06\": container with ID starting with b974bd2bda675af8eb39ac6a9ae5ab0230f3e2dbc6c22de3a6e0418e1a088b06 not found: ID does not exist" Feb 02 22:04:40 crc 
kubenswrapper[4789]: I0202 22:04:40.614066 4789 scope.go:117] "RemoveContainer" containerID="87114dafd05afd897c89f349a0383e893628641451c9820c9b8c9fba3db93d09" Feb 02 22:04:40 crc kubenswrapper[4789]: E0202 22:04:40.614911 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87114dafd05afd897c89f349a0383e893628641451c9820c9b8c9fba3db93d09\": container with ID starting with 87114dafd05afd897c89f349a0383e893628641451c9820c9b8c9fba3db93d09 not found: ID does not exist" containerID="87114dafd05afd897c89f349a0383e893628641451c9820c9b8c9fba3db93d09" Feb 02 22:04:40 crc kubenswrapper[4789]: I0202 22:04:40.614968 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87114dafd05afd897c89f349a0383e893628641451c9820c9b8c9fba3db93d09"} err="failed to get container status \"87114dafd05afd897c89f349a0383e893628641451c9820c9b8c9fba3db93d09\": rpc error: code = NotFound desc = could not find container \"87114dafd05afd897c89f349a0383e893628641451c9820c9b8c9fba3db93d09\": container with ID starting with 87114dafd05afd897c89f349a0383e893628641451c9820c9b8c9fba3db93d09 not found: ID does not exist" Feb 02 22:04:40 crc kubenswrapper[4789]: I0202 22:04:40.615009 4789 scope.go:117] "RemoveContainer" containerID="d0090250e25e03004d6d0c247d871e49eb129989f2cd1cb68f0d6c14c4ac4954" Feb 02 22:04:40 crc kubenswrapper[4789]: E0202 22:04:40.615701 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0090250e25e03004d6d0c247d871e49eb129989f2cd1cb68f0d6c14c4ac4954\": container with ID starting with d0090250e25e03004d6d0c247d871e49eb129989f2cd1cb68f0d6c14c4ac4954 not found: ID does not exist" containerID="d0090250e25e03004d6d0c247d871e49eb129989f2cd1cb68f0d6c14c4ac4954" Feb 02 22:04:40 crc kubenswrapper[4789]: I0202 22:04:40.615754 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0090250e25e03004d6d0c247d871e49eb129989f2cd1cb68f0d6c14c4ac4954"} err="failed to get container status \"d0090250e25e03004d6d0c247d871e49eb129989f2cd1cb68f0d6c14c4ac4954\": rpc error: code = NotFound desc = could not find container \"d0090250e25e03004d6d0c247d871e49eb129989f2cd1cb68f0d6c14c4ac4954\": container with ID starting with d0090250e25e03004d6d0c247d871e49eb129989f2cd1cb68f0d6c14c4ac4954 not found: ID does not exist" Feb 02 22:04:41 crc kubenswrapper[4789]: E0202 22:04:41.720693 4789 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf231bfba_0946_4782_82a3_dcafe4159f50.slice/crio-f7c1998a84509e94ce520c02bb7de15587c04584b9f0a66df1d3e0cd171357bb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf231bfba_0946_4782_82a3_dcafe4159f50.slice\": RecentStats: unable to find data in memory cache]" Feb 02 22:04:42 crc kubenswrapper[4789]: I0202 22:04:42.442967 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f231bfba-0946-4782-82a3-dcafe4159f50" path="/var/lib/kubelet/pods/f231bfba-0946-4782-82a3-dcafe4159f50/volumes" Feb 02 22:04:50 crc kubenswrapper[4789]: I0202 22:04:50.424636 4789 scope.go:117] "RemoveContainer" containerID="073e0816e5465e56b89054d46f25c31acee8f07d96c3df073123dd7a7f51be95" Feb 02 22:04:50 crc kubenswrapper[4789]: E0202 22:04:50.425555 4789 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:04:51 crc kubenswrapper[4789]: E0202 22:04:51.984371 4789 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf231bfba_0946_4782_82a3_dcafe4159f50.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf231bfba_0946_4782_82a3_dcafe4159f50.slice/crio-f7c1998a84509e94ce520c02bb7de15587c04584b9f0a66df1d3e0cd171357bb\": RecentStats: unable to find data in memory cache]" Feb 02 22:05:02 crc kubenswrapper[4789]: E0202 22:05:02.214561 4789 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf231bfba_0946_4782_82a3_dcafe4159f50.slice/crio-f7c1998a84509e94ce520c02bb7de15587c04584b9f0a66df1d3e0cd171357bb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf231bfba_0946_4782_82a3_dcafe4159f50.slice\": RecentStats: unable to find data in memory cache]" Feb 02 22:05:02 crc kubenswrapper[4789]: I0202 22:05:02.420008 4789 scope.go:117] "RemoveContainer" containerID="073e0816e5465e56b89054d46f25c31acee8f07d96c3df073123dd7a7f51be95" Feb 02 22:05:02 crc kubenswrapper[4789]: I0202 22:05:02.720887 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" event={"ID":"bdf018b4-1451-4d37-be6e-05802b67c73e","Type":"ContainerStarted","Data":"db3d2c7fc44410f68ac067079642b1953acd5044a4217991d998ef7063d4d275"} Feb 02 22:05:12 crc kubenswrapper[4789]: E0202 22:05:12.416213 4789 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf231bfba_0946_4782_82a3_dcafe4159f50.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf231bfba_0946_4782_82a3_dcafe4159f50.slice/crio-f7c1998a84509e94ce520c02bb7de15587c04584b9f0a66df1d3e0cd171357bb\": RecentStats: unable to find data in memory cache]" Feb 02 22:05:22 crc kubenswrapper[4789]: E0202 22:05:22.621497 4789 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf231bfba_0946_4782_82a3_dcafe4159f50.slice/crio-f7c1998a84509e94ce520c02bb7de15587c04584b9f0a66df1d3e0cd171357bb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf231bfba_0946_4782_82a3_dcafe4159f50.slice\": RecentStats: unable to find data in memory cache]" Feb 02 22:05:32 crc kubenswrapper[4789]: E0202 22:05:32.821058 4789 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf231bfba_0946_4782_82a3_dcafe4159f50.slice/crio-f7c1998a84509e94ce520c02bb7de15587c04584b9f0a66df1d3e0cd171357bb\": 
RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf231bfba_0946_4782_82a3_dcafe4159f50.slice\": RecentStats: unable to find data in memory cache]" Feb 02 22:05:35 crc kubenswrapper[4789]: I0202 22:05:35.136623 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fgmls"] Feb 02 22:05:35 crc kubenswrapper[4789]: E0202 22:05:35.137637 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f231bfba-0946-4782-82a3-dcafe4159f50" containerName="extract-utilities" Feb 02 22:05:35 crc kubenswrapper[4789]: I0202 22:05:35.137666 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="f231bfba-0946-4782-82a3-dcafe4159f50" containerName="extract-utilities" Feb 02 22:05:35 crc kubenswrapper[4789]: E0202 22:05:35.141557 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f231bfba-0946-4782-82a3-dcafe4159f50" containerName="extract-content" Feb 02 22:05:35 crc kubenswrapper[4789]: I0202 22:05:35.141617 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="f231bfba-0946-4782-82a3-dcafe4159f50" containerName="extract-content" Feb 02 22:05:35 crc kubenswrapper[4789]: E0202 22:05:35.141647 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f231bfba-0946-4782-82a3-dcafe4159f50" containerName="registry-server" Feb 02 22:05:35 crc kubenswrapper[4789]: I0202 22:05:35.141661 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="f231bfba-0946-4782-82a3-dcafe4159f50" containerName="registry-server" Feb 02 22:05:35 crc kubenswrapper[4789]: I0202 22:05:35.141932 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="f231bfba-0946-4782-82a3-dcafe4159f50" containerName="registry-server" Feb 02 22:05:35 crc kubenswrapper[4789]: I0202 22:05:35.143850 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fgmls" Feb 02 22:05:35 crc kubenswrapper[4789]: I0202 22:05:35.170918 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgmls"] Feb 02 22:05:35 crc kubenswrapper[4789]: I0202 22:05:35.246778 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8852q\" (UniqueName: \"kubernetes.io/projected/f1362a1b-8e6d-442d-af60-c4b87042a18c-kube-api-access-8852q\") pod \"redhat-marketplace-fgmls\" (UID: \"f1362a1b-8e6d-442d-af60-c4b87042a18c\") " pod="openshift-marketplace/redhat-marketplace-fgmls" Feb 02 22:05:35 crc kubenswrapper[4789]: I0202 22:05:35.246858 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1362a1b-8e6d-442d-af60-c4b87042a18c-utilities\") pod \"redhat-marketplace-fgmls\" (UID: \"f1362a1b-8e6d-442d-af60-c4b87042a18c\") " pod="openshift-marketplace/redhat-marketplace-fgmls" Feb 02 22:05:35 crc kubenswrapper[4789]: I0202 22:05:35.246920 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1362a1b-8e6d-442d-af60-c4b87042a18c-catalog-content\") pod \"redhat-marketplace-fgmls\" (UID: \"f1362a1b-8e6d-442d-af60-c4b87042a18c\") " pod="openshift-marketplace/redhat-marketplace-fgmls" Feb 02 22:05:35 crc kubenswrapper[4789]: I0202 22:05:35.348241 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8852q\" (UniqueName: \"kubernetes.io/projected/f1362a1b-8e6d-442d-af60-c4b87042a18c-kube-api-access-8852q\") pod \"redhat-marketplace-fgmls\" (UID: \"f1362a1b-8e6d-442d-af60-c4b87042a18c\") " pod="openshift-marketplace/redhat-marketplace-fgmls" Feb 02 22:05:35 crc kubenswrapper[4789]: I0202 22:05:35.348305 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1362a1b-8e6d-442d-af60-c4b87042a18c-utilities\") pod \"redhat-marketplace-fgmls\" (UID: \"f1362a1b-8e6d-442d-af60-c4b87042a18c\") " pod="openshift-marketplace/redhat-marketplace-fgmls" Feb 02 22:05:35 crc kubenswrapper[4789]: I0202 22:05:35.348349 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1362a1b-8e6d-442d-af60-c4b87042a18c-catalog-content\") pod \"redhat-marketplace-fgmls\" (UID: \"f1362a1b-8e6d-442d-af60-c4b87042a18c\") " pod="openshift-marketplace/redhat-marketplace-fgmls" Feb 02 22:05:35 crc kubenswrapper[4789]: I0202 22:05:35.349000 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1362a1b-8e6d-442d-af60-c4b87042a18c-catalog-content\") pod \"redhat-marketplace-fgmls\" (UID: \"f1362a1b-8e6d-442d-af60-c4b87042a18c\") " pod="openshift-marketplace/redhat-marketplace-fgmls" Feb 02 22:05:35 crc kubenswrapper[4789]: I0202 22:05:35.349182 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1362a1b-8e6d-442d-af60-c4b87042a18c-utilities\") pod \"redhat-marketplace-fgmls\" (UID: \"f1362a1b-8e6d-442d-af60-c4b87042a18c\") " pod="openshift-marketplace/redhat-marketplace-fgmls" Feb 02 22:05:35 crc kubenswrapper[4789]: I0202 22:05:35.382757 4789 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-8852q\" (UniqueName: \"kubernetes.io/projected/f1362a1b-8e6d-442d-af60-c4b87042a18c-kube-api-access-8852q\") pod \"redhat-marketplace-fgmls\" (UID: \"f1362a1b-8e6d-442d-af60-c4b87042a18c\") " pod="openshift-marketplace/redhat-marketplace-fgmls" Feb 02 22:05:35 crc kubenswrapper[4789]: I0202 22:05:35.491642 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fgmls" Feb 02 22:05:35 crc kubenswrapper[4789]: I0202 22:05:35.965562 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgmls"] Feb 02 22:05:36 crc kubenswrapper[4789]: I0202 22:05:36.029541 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgmls" event={"ID":"f1362a1b-8e6d-442d-af60-c4b87042a18c","Type":"ContainerStarted","Data":"b282a48ecef02ea5eed8c01fe9e657b2d4cc9482c746a2657c08838019d093b0"} Feb 02 22:05:37 crc kubenswrapper[4789]: I0202 22:05:37.044403 4789 generic.go:334] "Generic (PLEG): container finished" podID="f1362a1b-8e6d-442d-af60-c4b87042a18c" containerID="fdd8f4f1c16778989bb13d47b9ecf1c29bd6855f20c33b09631ac31cca3b7cf2" exitCode=0 Feb 02 22:05:37 crc kubenswrapper[4789]: I0202 22:05:37.044559 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgmls" event={"ID":"f1362a1b-8e6d-442d-af60-c4b87042a18c","Type":"ContainerDied","Data":"fdd8f4f1c16778989bb13d47b9ecf1c29bd6855f20c33b09631ac31cca3b7cf2"} Feb 02 22:05:38 crc kubenswrapper[4789]: I0202 22:05:38.062519 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgmls" event={"ID":"f1362a1b-8e6d-442d-af60-c4b87042a18c","Type":"ContainerStarted","Data":"bec07e0455927672f889fca5ee699acb6e390a8e13c7ff77c33e631c747a2e5b"} Feb 02 22:05:39 crc kubenswrapper[4789]: I0202 22:05:39.076675 4789 generic.go:334] "Generic (PLEG): container finished" podID="f1362a1b-8e6d-442d-af60-c4b87042a18c" containerID="bec07e0455927672f889fca5ee699acb6e390a8e13c7ff77c33e631c747a2e5b" exitCode=0 Feb 02 22:05:39 crc kubenswrapper[4789]: I0202 22:05:39.076737 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgmls" event={"ID":"f1362a1b-8e6d-442d-af60-c4b87042a18c","Type":"ContainerDied","Data":"bec07e0455927672f889fca5ee699acb6e390a8e13c7ff77c33e631c747a2e5b"} Feb 02 22:05:40 crc kubenswrapper[4789]: I0202 22:05:40.087175 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgmls" event={"ID":"f1362a1b-8e6d-442d-af60-c4b87042a18c","Type":"ContainerStarted","Data":"fc5b0398be418556061093bc37a5a610c868c0fdcfb378ceee248b6cec06a178"} Feb 02 22:05:40 crc kubenswrapper[4789]: I0202 22:05:40.112858 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fgmls" podStartSLOduration=2.597429311 podStartE2EDuration="5.112841574s" podCreationTimestamp="2026-02-02 22:05:35 +0000 UTC" firstStartedPulling="2026-02-02 22:05:37.047666492 +0000 UTC m=+2757.342691551" lastFinishedPulling="2026-02-02 22:05:39.563078755 +0000 UTC m=+2759.858103814" observedRunningTime="2026-02-02 22:05:40.107126043 +0000 UTC m=+2760.402151062" watchObservedRunningTime="2026-02-02 22:05:40.112841574 +0000 UTC m=+2760.407866593" Feb 02 22:05:45 crc kubenswrapper[4789]: I0202 22:05:45.501523 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-fgmls" Feb 02 22:05:45 crc kubenswrapper[4789]: I0202 22:05:45.503658 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fgmls" Feb 02 22:05:45 crc kubenswrapper[4789]: I0202 22:05:45.577133 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fgmls" Feb 02 22:05:46 crc kubenswrapper[4789]: I0202 22:05:46.209475 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fgmls" Feb 02 22:05:46 crc kubenswrapper[4789]: I0202 22:05:46.287632 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgmls"] Feb 02 22:05:48 crc kubenswrapper[4789]: I0202 22:05:48.154262 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fgmls" podUID="f1362a1b-8e6d-442d-af60-c4b87042a18c" containerName="registry-server" containerID="cri-o://fc5b0398be418556061093bc37a5a610c868c0fdcfb378ceee248b6cec06a178" gracePeriod=2 Feb 02 22:05:48 crc kubenswrapper[4789]: I0202 22:05:48.683249 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fgmls" Feb 02 22:05:48 crc kubenswrapper[4789]: I0202 22:05:48.759700 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1362a1b-8e6d-442d-af60-c4b87042a18c-catalog-content\") pod \"f1362a1b-8e6d-442d-af60-c4b87042a18c\" (UID: \"f1362a1b-8e6d-442d-af60-c4b87042a18c\") " Feb 02 22:05:48 crc kubenswrapper[4789]: I0202 22:05:48.759800 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8852q\" (UniqueName: \"kubernetes.io/projected/f1362a1b-8e6d-442d-af60-c4b87042a18c-kube-api-access-8852q\") pod \"f1362a1b-8e6d-442d-af60-c4b87042a18c\" (UID: \"f1362a1b-8e6d-442d-af60-c4b87042a18c\") " Feb 02 22:05:48 crc kubenswrapper[4789]: I0202 22:05:48.759847 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1362a1b-8e6d-442d-af60-c4b87042a18c-utilities\") pod \"f1362a1b-8e6d-442d-af60-c4b87042a18c\" (UID: \"f1362a1b-8e6d-442d-af60-c4b87042a18c\") " Feb 02 22:05:48 crc kubenswrapper[4789]: I0202 22:05:48.761711 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1362a1b-8e6d-442d-af60-c4b87042a18c-utilities" (OuterVolumeSpecName: "utilities") pod "f1362a1b-8e6d-442d-af60-c4b87042a18c" (UID: "f1362a1b-8e6d-442d-af60-c4b87042a18c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 22:05:48 crc kubenswrapper[4789]: I0202 22:05:48.768924 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1362a1b-8e6d-442d-af60-c4b87042a18c-kube-api-access-8852q" (OuterVolumeSpecName: "kube-api-access-8852q") pod "f1362a1b-8e6d-442d-af60-c4b87042a18c" (UID: "f1362a1b-8e6d-442d-af60-c4b87042a18c"). InnerVolumeSpecName "kube-api-access-8852q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 22:05:48 crc kubenswrapper[4789]: I0202 22:05:48.822392 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1362a1b-8e6d-442d-af60-c4b87042a18c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f1362a1b-8e6d-442d-af60-c4b87042a18c" (UID: "f1362a1b-8e6d-442d-af60-c4b87042a18c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 22:05:48 crc kubenswrapper[4789]: I0202 22:05:48.861444 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8852q\" (UniqueName: \"kubernetes.io/projected/f1362a1b-8e6d-442d-af60-c4b87042a18c-kube-api-access-8852q\") on node \"crc\" DevicePath \"\"" Feb 02 22:05:48 crc kubenswrapper[4789]: I0202 22:05:48.861478 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1362a1b-8e6d-442d-af60-c4b87042a18c-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 22:05:48 crc kubenswrapper[4789]: I0202 22:05:48.861492 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1362a1b-8e6d-442d-af60-c4b87042a18c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 22:05:49 crc kubenswrapper[4789]: I0202 22:05:49.167692 4789 generic.go:334] "Generic (PLEG): container finished" podID="f1362a1b-8e6d-442d-af60-c4b87042a18c" containerID="fc5b0398be418556061093bc37a5a610c868c0fdcfb378ceee248b6cec06a178" exitCode=0 Feb 02 22:05:49 crc kubenswrapper[4789]: I0202 22:05:49.167839 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgmls" event={"ID":"f1362a1b-8e6d-442d-af60-c4b87042a18c","Type":"ContainerDied","Data":"fc5b0398be418556061093bc37a5a610c868c0fdcfb378ceee248b6cec06a178"} Feb 02 22:05:49 crc kubenswrapper[4789]: I0202 22:05:49.168950 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgmls" event={"ID":"f1362a1b-8e6d-442d-af60-c4b87042a18c","Type":"ContainerDied","Data":"b282a48ecef02ea5eed8c01fe9e657b2d4cc9482c746a2657c08838019d093b0"} Feb 02 22:05:49 crc kubenswrapper[4789]: I0202 22:05:49.167865 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fgmls" Feb 02 22:05:49 crc kubenswrapper[4789]: I0202 22:05:49.169000 4789 scope.go:117] "RemoveContainer" containerID="fc5b0398be418556061093bc37a5a610c868c0fdcfb378ceee248b6cec06a178" Feb 02 22:05:49 crc kubenswrapper[4789]: I0202 22:05:49.196671 4789 scope.go:117] "RemoveContainer" containerID="bec07e0455927672f889fca5ee699acb6e390a8e13c7ff77c33e631c747a2e5b" Feb 02 22:05:49 crc kubenswrapper[4789]: I0202 22:05:49.230866 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgmls"] Feb 02 22:05:49 crc kubenswrapper[4789]: I0202 22:05:49.241322 4789 scope.go:117] "RemoveContainer" containerID="fdd8f4f1c16778989bb13d47b9ecf1c29bd6855f20c33b09631ac31cca3b7cf2" Feb 02 22:05:49 crc kubenswrapper[4789]: I0202 22:05:49.241466 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgmls"] Feb 02 22:05:49 crc kubenswrapper[4789]: I0202 22:05:49.275271 4789 scope.go:117] "RemoveContainer" containerID="fc5b0398be418556061093bc37a5a610c868c0fdcfb378ceee248b6cec06a178" Feb 02 22:05:49 crc kubenswrapper[4789]: E0202 22:05:49.275832 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc5b0398be418556061093bc37a5a610c868c0fdcfb378ceee248b6cec06a178\": container with ID starting with fc5b0398be418556061093bc37a5a610c868c0fdcfb378ceee248b6cec06a178 not found: ID does not exist" containerID="fc5b0398be418556061093bc37a5a610c868c0fdcfb378ceee248b6cec06a178" Feb 02 22:05:49 crc kubenswrapper[4789]: I0202 22:05:49.275888 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc5b0398be418556061093bc37a5a610c868c0fdcfb378ceee248b6cec06a178"} err="failed to get container status \"fc5b0398be418556061093bc37a5a610c868c0fdcfb378ceee248b6cec06a178\": rpc error: code = NotFound desc = could not find container \"fc5b0398be418556061093bc37a5a610c868c0fdcfb378ceee248b6cec06a178\": container with ID starting with fc5b0398be418556061093bc37a5a610c868c0fdcfb378ceee248b6cec06a178 not found: ID does not exist" Feb 02 22:05:49 crc kubenswrapper[4789]: I0202 22:05:49.275929 4789 scope.go:117] "RemoveContainer" containerID="bec07e0455927672f889fca5ee699acb6e390a8e13c7ff77c33e631c747a2e5b" Feb 02 22:05:49 crc kubenswrapper[4789]: E0202 22:05:49.276618 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bec07e0455927672f889fca5ee699acb6e390a8e13c7ff77c33e631c747a2e5b\": container with ID starting with bec07e0455927672f889fca5ee699acb6e390a8e13c7ff77c33e631c747a2e5b not found: ID does not exist" containerID="bec07e0455927672f889fca5ee699acb6e390a8e13c7ff77c33e631c747a2e5b" Feb 02 22:05:49 crc kubenswrapper[4789]: I0202 22:05:49.276667 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bec07e0455927672f889fca5ee699acb6e390a8e13c7ff77c33e631c747a2e5b"} err="failed to get container status \"bec07e0455927672f889fca5ee699acb6e390a8e13c7ff77c33e631c747a2e5b\": rpc error: code = NotFound desc = could not find container \"bec07e0455927672f889fca5ee699acb6e390a8e13c7ff77c33e631c747a2e5b\": container with ID starting with bec07e0455927672f889fca5ee699acb6e390a8e13c7ff77c33e631c747a2e5b not found: ID does not exist" Feb 02 22:05:49 crc kubenswrapper[4789]: I0202 22:05:49.276703 4789 scope.go:117] "RemoveContainer" 
containerID="fdd8f4f1c16778989bb13d47b9ecf1c29bd6855f20c33b09631ac31cca3b7cf2" Feb 02 22:05:49 crc kubenswrapper[4789]: E0202 22:05:49.277255 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdd8f4f1c16778989bb13d47b9ecf1c29bd6855f20c33b09631ac31cca3b7cf2\": container with ID starting with fdd8f4f1c16778989bb13d47b9ecf1c29bd6855f20c33b09631ac31cca3b7cf2 not found: ID does not exist" containerID="fdd8f4f1c16778989bb13d47b9ecf1c29bd6855f20c33b09631ac31cca3b7cf2" Feb 02 22:05:49 crc kubenswrapper[4789]: I0202 22:05:49.277293 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdd8f4f1c16778989bb13d47b9ecf1c29bd6855f20c33b09631ac31cca3b7cf2"} err="failed to get container status \"fdd8f4f1c16778989bb13d47b9ecf1c29bd6855f20c33b09631ac31cca3b7cf2\": rpc error: code = NotFound desc = could not find container \"fdd8f4f1c16778989bb13d47b9ecf1c29bd6855f20c33b09631ac31cca3b7cf2\": container with ID starting with fdd8f4f1c16778989bb13d47b9ecf1c29bd6855f20c33b09631ac31cca3b7cf2 not found: ID does not exist" Feb 02 22:05:50 crc kubenswrapper[4789]: I0202 22:05:50.440314 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1362a1b-8e6d-442d-af60-c4b87042a18c" path="/var/lib/kubelet/pods/f1362a1b-8e6d-442d-af60-c4b87042a18c/volumes" Feb 02 22:07:22 crc kubenswrapper[4789]: I0202 22:07:22.842189 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 22:07:22 crc kubenswrapper[4789]: I0202 22:07:22.843882 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 22:07:37 crc kubenswrapper[4789]: I0202 22:07:37.282993 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zpzm5"] Feb 02 22:07:37 crc kubenswrapper[4789]: E0202 22:07:37.285473 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1362a1b-8e6d-442d-af60-c4b87042a18c" containerName="extract-content" Feb 02 22:07:37 crc kubenswrapper[4789]: I0202 22:07:37.285551 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1362a1b-8e6d-442d-af60-c4b87042a18c" containerName="extract-content" Feb 02 22:07:37 crc kubenswrapper[4789]: E0202 22:07:37.285641 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1362a1b-8e6d-442d-af60-c4b87042a18c" containerName="extract-utilities" Feb 02 22:07:37 crc kubenswrapper[4789]: I0202 22:07:37.285698 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1362a1b-8e6d-442d-af60-c4b87042a18c" containerName="extract-utilities" Feb 02 22:07:37 crc kubenswrapper[4789]: E0202 22:07:37.285801 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1362a1b-8e6d-442d-af60-c4b87042a18c" containerName="registry-server" Feb 02 22:07:37 crc kubenswrapper[4789]: I0202 22:07:37.285861 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1362a1b-8e6d-442d-af60-c4b87042a18c" containerName="registry-server" Feb 02 22:07:37 crc kubenswrapper[4789]: I0202 
22:07:37.286049 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1362a1b-8e6d-442d-af60-c4b87042a18c" containerName="registry-server" Feb 02 22:07:37 crc kubenswrapper[4789]: I0202 22:07:37.287259 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zpzm5" Feb 02 22:07:37 crc kubenswrapper[4789]: I0202 22:07:37.318613 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zpzm5"] Feb 02 22:07:37 crc kubenswrapper[4789]: I0202 22:07:37.374264 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cba0dc3-207e-40f3-84d5-39a71100f1c2-utilities\") pod \"certified-operators-zpzm5\" (UID: \"4cba0dc3-207e-40f3-84d5-39a71100f1c2\") " pod="openshift-marketplace/certified-operators-zpzm5" Feb 02 22:07:37 crc kubenswrapper[4789]: I0202 22:07:37.374517 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cba0dc3-207e-40f3-84d5-39a71100f1c2-catalog-content\") pod \"certified-operators-zpzm5\" (UID: \"4cba0dc3-207e-40f3-84d5-39a71100f1c2\") " pod="openshift-marketplace/certified-operators-zpzm5" Feb 02 22:07:37 crc kubenswrapper[4789]: I0202 22:07:37.374635 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8nfg\" (UniqueName: \"kubernetes.io/projected/4cba0dc3-207e-40f3-84d5-39a71100f1c2-kube-api-access-w8nfg\") pod \"certified-operators-zpzm5\" (UID: \"4cba0dc3-207e-40f3-84d5-39a71100f1c2\") " pod="openshift-marketplace/certified-operators-zpzm5" Feb 02 22:07:37 crc kubenswrapper[4789]: I0202 22:07:37.476646 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cba0dc3-207e-40f3-84d5-39a71100f1c2-utilities\") pod \"certified-operators-zpzm5\" (UID: \"4cba0dc3-207e-40f3-84d5-39a71100f1c2\") " pod="openshift-marketplace/certified-operators-zpzm5" Feb 02 22:07:37 crc kubenswrapper[4789]: I0202 22:07:37.476709 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cba0dc3-207e-40f3-84d5-39a71100f1c2-catalog-content\") pod \"certified-operators-zpzm5\" (UID: \"4cba0dc3-207e-40f3-84d5-39a71100f1c2\") " pod="openshift-marketplace/certified-operators-zpzm5" Feb 02 22:07:37 crc kubenswrapper[4789]: I0202 22:07:37.476759 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8nfg\" (UniqueName: \"kubernetes.io/projected/4cba0dc3-207e-40f3-84d5-39a71100f1c2-kube-api-access-w8nfg\") pod \"certified-operators-zpzm5\" (UID: \"4cba0dc3-207e-40f3-84d5-39a71100f1c2\") " pod="openshift-marketplace/certified-operators-zpzm5" Feb 02 22:07:37 crc kubenswrapper[4789]: I0202 22:07:37.477525 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cba0dc3-207e-40f3-84d5-39a71100f1c2-utilities\") pod \"certified-operators-zpzm5\" (UID: \"4cba0dc3-207e-40f3-84d5-39a71100f1c2\") " pod="openshift-marketplace/certified-operators-zpzm5" Feb 02 22:07:37 crc kubenswrapper[4789]: I0202 22:07:37.477551 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/4cba0dc3-207e-40f3-84d5-39a71100f1c2-catalog-content\") pod \"certified-operators-zpzm5\" (UID: \"4cba0dc3-207e-40f3-84d5-39a71100f1c2\") " pod="openshift-marketplace/certified-operators-zpzm5" Feb 02 22:07:37 crc kubenswrapper[4789]: I0202 22:07:37.511859 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8nfg\" (UniqueName: \"kubernetes.io/projected/4cba0dc3-207e-40f3-84d5-39a71100f1c2-kube-api-access-w8nfg\") pod \"certified-operators-zpzm5\" (UID: \"4cba0dc3-207e-40f3-84d5-39a71100f1c2\") " pod="openshift-marketplace/certified-operators-zpzm5" Feb 02 22:07:37 crc kubenswrapper[4789]: I0202 22:07:37.615303 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zpzm5" Feb 02 22:07:37 crc kubenswrapper[4789]: I0202 22:07:37.915730 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zpzm5"] Feb 02 22:07:38 crc kubenswrapper[4789]: I0202 22:07:38.150092 4789 generic.go:334] "Generic (PLEG): container finished" podID="4cba0dc3-207e-40f3-84d5-39a71100f1c2" containerID="1ab2dfc9257461edbdda8f368c01132ae821c6481c72ee493af7a5830cc76458" exitCode=0 Feb 02 22:07:38 crc kubenswrapper[4789]: I0202 22:07:38.150132 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zpzm5" event={"ID":"4cba0dc3-207e-40f3-84d5-39a71100f1c2","Type":"ContainerDied","Data":"1ab2dfc9257461edbdda8f368c01132ae821c6481c72ee493af7a5830cc76458"} Feb 02 22:07:38 crc kubenswrapper[4789]: I0202 22:07:38.150158 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zpzm5" event={"ID":"4cba0dc3-207e-40f3-84d5-39a71100f1c2","Type":"ContainerStarted","Data":"3b55afb01e310563bb274f6c5e4022d5d1e33328c888a3a8fd2e2a1aedc38041"} Feb 02 22:07:38 crc kubenswrapper[4789]: I0202 22:07:38.151488 4789 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 22:07:39 crc kubenswrapper[4789]: I0202 22:07:39.166534 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zpzm5" event={"ID":"4cba0dc3-207e-40f3-84d5-39a71100f1c2","Type":"ContainerStarted","Data":"4726ad2fa1943530c7699dab9cc5abb0c4b24103a6927ad42241cf46e34efba7"} Feb 02 22:07:40 crc kubenswrapper[4789]: I0202 22:07:40.178179 4789 generic.go:334] "Generic (PLEG): container finished" podID="4cba0dc3-207e-40f3-84d5-39a71100f1c2" containerID="4726ad2fa1943530c7699dab9cc5abb0c4b24103a6927ad42241cf46e34efba7" exitCode=0 Feb 02 22:07:40 crc kubenswrapper[4789]: I0202 22:07:40.178311 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zpzm5" event={"ID":"4cba0dc3-207e-40f3-84d5-39a71100f1c2","Type":"ContainerDied","Data":"4726ad2fa1943530c7699dab9cc5abb0c4b24103a6927ad42241cf46e34efba7"} Feb 02 22:07:41 crc kubenswrapper[4789]: I0202 22:07:41.188094 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zpzm5" event={"ID":"4cba0dc3-207e-40f3-84d5-39a71100f1c2","Type":"ContainerStarted","Data":"c17b3987ba74f49712dfeb8b0dbf60dc5d05c5b2ed2de829a92e4ad0ce9b816b"} Feb 02 22:07:41 crc kubenswrapper[4789]: I0202 22:07:41.214269 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zpzm5" podStartSLOduration=1.740680239 podStartE2EDuration="4.214242233s" 
podCreationTimestamp="2026-02-02 22:07:37 +0000 UTC" firstStartedPulling="2026-02-02 22:07:38.151287833 +0000 UTC m=+2878.446312852" lastFinishedPulling="2026-02-02 22:07:40.624849777 +0000 UTC m=+2880.919874846" observedRunningTime="2026-02-02 22:07:41.210691973 +0000 UTC m=+2881.505717032" watchObservedRunningTime="2026-02-02 22:07:41.214242233 +0000 UTC m=+2881.509267262" Feb 02 22:07:47 crc kubenswrapper[4789]: I0202 22:07:47.616124 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zpzm5" Feb 02 22:07:47 crc kubenswrapper[4789]: I0202 22:07:47.616856 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zpzm5" Feb 02 22:07:47 crc kubenswrapper[4789]: I0202 22:07:47.691730 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zpzm5" Feb 02 22:07:48 crc kubenswrapper[4789]: I0202 22:07:48.304838 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zpzm5" Feb 02 22:07:48 crc kubenswrapper[4789]: I0202 22:07:48.361901 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zpzm5"] Feb 02 22:07:50 crc kubenswrapper[4789]: I0202 22:07:50.264321 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zpzm5" podUID="4cba0dc3-207e-40f3-84d5-39a71100f1c2" containerName="registry-server" containerID="cri-o://c17b3987ba74f49712dfeb8b0dbf60dc5d05c5b2ed2de829a92e4ad0ce9b816b" gracePeriod=2 Feb 02 22:07:50 crc kubenswrapper[4789]: I0202 22:07:50.799304 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zpzm5" Feb 02 22:07:50 crc kubenswrapper[4789]: I0202 22:07:50.879760 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8nfg\" (UniqueName: \"kubernetes.io/projected/4cba0dc3-207e-40f3-84d5-39a71100f1c2-kube-api-access-w8nfg\") pod \"4cba0dc3-207e-40f3-84d5-39a71100f1c2\" (UID: \"4cba0dc3-207e-40f3-84d5-39a71100f1c2\") " Feb 02 22:07:50 crc kubenswrapper[4789]: I0202 22:07:50.879942 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cba0dc3-207e-40f3-84d5-39a71100f1c2-catalog-content\") pod \"4cba0dc3-207e-40f3-84d5-39a71100f1c2\" (UID: \"4cba0dc3-207e-40f3-84d5-39a71100f1c2\") " Feb 02 22:07:50 crc kubenswrapper[4789]: I0202 22:07:50.880003 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cba0dc3-207e-40f3-84d5-39a71100f1c2-utilities\") pod \"4cba0dc3-207e-40f3-84d5-39a71100f1c2\" (UID: \"4cba0dc3-207e-40f3-84d5-39a71100f1c2\") " Feb 02 22:07:50 crc kubenswrapper[4789]: I0202 22:07:50.881860 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cba0dc3-207e-40f3-84d5-39a71100f1c2-utilities" (OuterVolumeSpecName: "utilities") pod "4cba0dc3-207e-40f3-84d5-39a71100f1c2" (UID: "4cba0dc3-207e-40f3-84d5-39a71100f1c2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 22:07:50 crc kubenswrapper[4789]: I0202 22:07:50.890784 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cba0dc3-207e-40f3-84d5-39a71100f1c2-kube-api-access-w8nfg" (OuterVolumeSpecName: "kube-api-access-w8nfg") pod "4cba0dc3-207e-40f3-84d5-39a71100f1c2" (UID: "4cba0dc3-207e-40f3-84d5-39a71100f1c2"). InnerVolumeSpecName "kube-api-access-w8nfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 22:07:50 crc kubenswrapper[4789]: I0202 22:07:50.943997 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cba0dc3-207e-40f3-84d5-39a71100f1c2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4cba0dc3-207e-40f3-84d5-39a71100f1c2" (UID: "4cba0dc3-207e-40f3-84d5-39a71100f1c2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 22:07:50 crc kubenswrapper[4789]: I0202 22:07:50.981810 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8nfg\" (UniqueName: \"kubernetes.io/projected/4cba0dc3-207e-40f3-84d5-39a71100f1c2-kube-api-access-w8nfg\") on node \"crc\" DevicePath \"\"" Feb 02 22:07:50 crc kubenswrapper[4789]: I0202 22:07:50.981870 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cba0dc3-207e-40f3-84d5-39a71100f1c2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 22:07:50 crc kubenswrapper[4789]: I0202 22:07:50.981891 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cba0dc3-207e-40f3-84d5-39a71100f1c2-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 22:07:51 crc kubenswrapper[4789]: I0202 22:07:51.275644 4789 generic.go:334] "Generic (PLEG): container finished" podID="4cba0dc3-207e-40f3-84d5-39a71100f1c2" containerID="c17b3987ba74f49712dfeb8b0dbf60dc5d05c5b2ed2de829a92e4ad0ce9b816b" exitCode=0 Feb 02 22:07:51 crc kubenswrapper[4789]: I0202 22:07:51.275710 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zpzm5" event={"ID":"4cba0dc3-207e-40f3-84d5-39a71100f1c2","Type":"ContainerDied","Data":"c17b3987ba74f49712dfeb8b0dbf60dc5d05c5b2ed2de829a92e4ad0ce9b816b"} Feb 02 22:07:51 crc kubenswrapper[4789]: I0202 22:07:51.275774 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zpzm5" Feb 02 22:07:51 crc kubenswrapper[4789]: I0202 22:07:51.275866 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zpzm5" event={"ID":"4cba0dc3-207e-40f3-84d5-39a71100f1c2","Type":"ContainerDied","Data":"3b55afb01e310563bb274f6c5e4022d5d1e33328c888a3a8fd2e2a1aedc38041"} Feb 02 22:07:51 crc kubenswrapper[4789]: I0202 22:07:51.275916 4789 scope.go:117] "RemoveContainer" containerID="c17b3987ba74f49712dfeb8b0dbf60dc5d05c5b2ed2de829a92e4ad0ce9b816b" Feb 02 22:07:51 crc kubenswrapper[4789]: I0202 22:07:51.304941 4789 scope.go:117] "RemoveContainer" containerID="4726ad2fa1943530c7699dab9cc5abb0c4b24103a6927ad42241cf46e34efba7" Feb 02 22:07:51 crc kubenswrapper[4789]: I0202 22:07:51.337094 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zpzm5"] Feb 02 22:07:51 crc kubenswrapper[4789]: I0202 22:07:51.354978 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zpzm5"] Feb 02 22:07:51 crc kubenswrapper[4789]: I0202 22:07:51.371074 4789 scope.go:117] "RemoveContainer" containerID="1ab2dfc9257461edbdda8f368c01132ae821c6481c72ee493af7a5830cc76458" Feb 02 22:07:51 crc kubenswrapper[4789]: I0202 22:07:51.396045 4789 scope.go:117] "RemoveContainer" containerID="c17b3987ba74f49712dfeb8b0dbf60dc5d05c5b2ed2de829a92e4ad0ce9b816b" Feb 02 22:07:51 crc kubenswrapper[4789]: E0202 22:07:51.396545 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c17b3987ba74f49712dfeb8b0dbf60dc5d05c5b2ed2de829a92e4ad0ce9b816b\": container with ID starting with c17b3987ba74f49712dfeb8b0dbf60dc5d05c5b2ed2de829a92e4ad0ce9b816b not found: ID does not exist" containerID="c17b3987ba74f49712dfeb8b0dbf60dc5d05c5b2ed2de829a92e4ad0ce9b816b" Feb 02 22:07:51 crc kubenswrapper[4789]: I0202 22:07:51.396697 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c17b3987ba74f49712dfeb8b0dbf60dc5d05c5b2ed2de829a92e4ad0ce9b816b"} err="failed to get container status \"c17b3987ba74f49712dfeb8b0dbf60dc5d05c5b2ed2de829a92e4ad0ce9b816b\": rpc error: code = NotFound desc = could not find container \"c17b3987ba74f49712dfeb8b0dbf60dc5d05c5b2ed2de829a92e4ad0ce9b816b\": container with ID starting with c17b3987ba74f49712dfeb8b0dbf60dc5d05c5b2ed2de829a92e4ad0ce9b816b not found: ID does not exist" Feb 02 22:07:51 crc kubenswrapper[4789]: I0202 22:07:51.396733 4789 scope.go:117] "RemoveContainer" containerID="4726ad2fa1943530c7699dab9cc5abb0c4b24103a6927ad42241cf46e34efba7" Feb 02 22:07:51 crc kubenswrapper[4789]: E0202 22:07:51.397406 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4726ad2fa1943530c7699dab9cc5abb0c4b24103a6927ad42241cf46e34efba7\": container with ID starting with 4726ad2fa1943530c7699dab9cc5abb0c4b24103a6927ad42241cf46e34efba7 not found: ID does not exist" containerID="4726ad2fa1943530c7699dab9cc5abb0c4b24103a6927ad42241cf46e34efba7" Feb 02 22:07:51 crc kubenswrapper[4789]: I0202 22:07:51.397505 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4726ad2fa1943530c7699dab9cc5abb0c4b24103a6927ad42241cf46e34efba7"} err="failed to get container status \"4726ad2fa1943530c7699dab9cc5abb0c4b24103a6927ad42241cf46e34efba7\": rpc error: code = NotFound desc = could not find 
container \"4726ad2fa1943530c7699dab9cc5abb0c4b24103a6927ad42241cf46e34efba7\": container with ID starting with 4726ad2fa1943530c7699dab9cc5abb0c4b24103a6927ad42241cf46e34efba7 not found: ID does not exist" Feb 02 22:07:51 crc kubenswrapper[4789]: I0202 22:07:51.397563 4789 scope.go:117] "RemoveContainer" containerID="1ab2dfc9257461edbdda8f368c01132ae821c6481c72ee493af7a5830cc76458" Feb 02 22:07:51 crc kubenswrapper[4789]: E0202 22:07:51.398161 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ab2dfc9257461edbdda8f368c01132ae821c6481c72ee493af7a5830cc76458\": container with ID starting with 1ab2dfc9257461edbdda8f368c01132ae821c6481c72ee493af7a5830cc76458 not found: ID does not exist" containerID="1ab2dfc9257461edbdda8f368c01132ae821c6481c72ee493af7a5830cc76458" Feb 02 22:07:51 crc kubenswrapper[4789]: I0202 22:07:51.398226 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ab2dfc9257461edbdda8f368c01132ae821c6481c72ee493af7a5830cc76458"} err="failed to get container status \"1ab2dfc9257461edbdda8f368c01132ae821c6481c72ee493af7a5830cc76458\": rpc error: code = NotFound desc = could not find container \"1ab2dfc9257461edbdda8f368c01132ae821c6481c72ee493af7a5830cc76458\": container with ID starting with 1ab2dfc9257461edbdda8f368c01132ae821c6481c72ee493af7a5830cc76458 not found: ID does not exist" Feb 02 22:07:52 crc kubenswrapper[4789]: I0202 22:07:52.435440 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cba0dc3-207e-40f3-84d5-39a71100f1c2" path="/var/lib/kubelet/pods/4cba0dc3-207e-40f3-84d5-39a71100f1c2/volumes" Feb 02 22:07:52 crc kubenswrapper[4789]: I0202 22:07:52.841541 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 22:07:52 crc kubenswrapper[4789]: I0202 22:07:52.841655 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 22:08:22 crc kubenswrapper[4789]: I0202 22:08:22.842290 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 22:08:22 crc kubenswrapper[4789]: I0202 22:08:22.843000 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 22:08:22 crc kubenswrapper[4789]: I0202 22:08:22.843082 4789 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" Feb 02 22:08:22 crc kubenswrapper[4789]: I0202 22:08:22.843790 4789 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"db3d2c7fc44410f68ac067079642b1953acd5044a4217991d998ef7063d4d275"} pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 22:08:22 crc kubenswrapper[4789]: I0202 22:08:22.843842 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" containerID="cri-o://db3d2c7fc44410f68ac067079642b1953acd5044a4217991d998ef7063d4d275" gracePeriod=600 Feb 02 22:08:23 crc kubenswrapper[4789]: I0202 22:08:23.576123 4789 generic.go:334] "Generic (PLEG): container finished" podID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerID="db3d2c7fc44410f68ac067079642b1953acd5044a4217991d998ef7063d4d275" exitCode=0 Feb 02 22:08:23 crc kubenswrapper[4789]: I0202 22:08:23.576184 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" event={"ID":"bdf018b4-1451-4d37-be6e-05802b67c73e","Type":"ContainerDied","Data":"db3d2c7fc44410f68ac067079642b1953acd5044a4217991d998ef7063d4d275"} Feb 02 22:08:23 crc kubenswrapper[4789]: I0202 22:08:23.576512 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" event={"ID":"bdf018b4-1451-4d37-be6e-05802b67c73e","Type":"ContainerStarted","Data":"f751ab6e20416c55ada057bcc369a80021ff56d1c6434e7eaf32a7fd2e325226"} Feb 02 22:08:23 crc kubenswrapper[4789]: I0202 22:08:23.576564 4789 scope.go:117] "RemoveContainer" containerID="073e0816e5465e56b89054d46f25c31acee8f07d96c3df073123dd7a7f51be95" Feb 02 22:10:52 crc kubenswrapper[4789]: I0202 22:10:52.841976 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 22:10:52 crc kubenswrapper[4789]: I0202 22:10:52.842567 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 22:11:22 crc kubenswrapper[4789]: I0202 22:11:22.842322 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 22:11:22 crc kubenswrapper[4789]: I0202 22:11:22.843050 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 22:11:27 crc kubenswrapper[4789]: I0202 22:11:27.284793 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-md79j"] Feb 02 22:11:27 crc kubenswrapper[4789]: E0202 22:11:27.285918 4789 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cba0dc3-207e-40f3-84d5-39a71100f1c2" containerName="extract-utilities" Feb 02 22:11:27 crc kubenswrapper[4789]: I0202 22:11:27.285948 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cba0dc3-207e-40f3-84d5-39a71100f1c2" containerName="extract-utilities" Feb 02 22:11:27 crc kubenswrapper[4789]: E0202 22:11:27.286007 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cba0dc3-207e-40f3-84d5-39a71100f1c2" containerName="extract-content" Feb 02 22:11:27 crc kubenswrapper[4789]: I0202 22:11:27.286020 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cba0dc3-207e-40f3-84d5-39a71100f1c2" containerName="extract-content" Feb 02 22:11:27 crc kubenswrapper[4789]: E0202 22:11:27.286034 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cba0dc3-207e-40f3-84d5-39a71100f1c2" containerName="registry-server" Feb 02 22:11:27 crc kubenswrapper[4789]: I0202 22:11:27.286046 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cba0dc3-207e-40f3-84d5-39a71100f1c2" containerName="registry-server" Feb 02 22:11:27 crc kubenswrapper[4789]: I0202 22:11:27.286283 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cba0dc3-207e-40f3-84d5-39a71100f1c2" containerName="registry-server" Feb 02 22:11:27 crc kubenswrapper[4789]: I0202 22:11:27.288092 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-md79j" Feb 02 22:11:27 crc kubenswrapper[4789]: I0202 22:11:27.309161 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-md79j"] Feb 02 22:11:27 crc kubenswrapper[4789]: I0202 22:11:27.360760 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mnj9\" (UniqueName: \"kubernetes.io/projected/136a6984-d6a1-4d8c-b703-c6a66402c87f-kube-api-access-2mnj9\") pod \"community-operators-md79j\" (UID: \"136a6984-d6a1-4d8c-b703-c6a66402c87f\") " pod="openshift-marketplace/community-operators-md79j" Feb 02 22:11:27 crc kubenswrapper[4789]: I0202 22:11:27.360936 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/136a6984-d6a1-4d8c-b703-c6a66402c87f-utilities\") pod \"community-operators-md79j\" (UID: \"136a6984-d6a1-4d8c-b703-c6a66402c87f\") " pod="openshift-marketplace/community-operators-md79j" Feb 02 22:11:27 crc kubenswrapper[4789]: I0202 22:11:27.361007 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/136a6984-d6a1-4d8c-b703-c6a66402c87f-catalog-content\") pod \"community-operators-md79j\" (UID: \"136a6984-d6a1-4d8c-b703-c6a66402c87f\") " pod="openshift-marketplace/community-operators-md79j" Feb 02 22:11:27 crc kubenswrapper[4789]: I0202 22:11:27.462969 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/136a6984-d6a1-4d8c-b703-c6a66402c87f-utilities\") pod \"community-operators-md79j\" (UID: \"136a6984-d6a1-4d8c-b703-c6a66402c87f\") " pod="openshift-marketplace/community-operators-md79j" Feb 02 22:11:27 crc kubenswrapper[4789]: I0202 22:11:27.463028 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/136a6984-d6a1-4d8c-b703-c6a66402c87f-catalog-content\") pod \"community-operators-md79j\" (UID: \"136a6984-d6a1-4d8c-b703-c6a66402c87f\") " pod="openshift-marketplace/community-operators-md79j" Feb 02 22:11:27 crc kubenswrapper[4789]: I0202 22:11:27.463106 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mnj9\" (UniqueName: \"kubernetes.io/projected/136a6984-d6a1-4d8c-b703-c6a66402c87f-kube-api-access-2mnj9\") pod \"community-operators-md79j\" (UID: \"136a6984-d6a1-4d8c-b703-c6a66402c87f\") " pod="openshift-marketplace/community-operators-md79j" Feb 02 22:11:27 crc kubenswrapper[4789]: I0202 22:11:27.463905 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/136a6984-d6a1-4d8c-b703-c6a66402c87f-utilities\") pod \"community-operators-md79j\" (UID: \"136a6984-d6a1-4d8c-b703-c6a66402c87f\") " pod="openshift-marketplace/community-operators-md79j" Feb 02 22:11:27 crc kubenswrapper[4789]: I0202 22:11:27.464195 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/136a6984-d6a1-4d8c-b703-c6a66402c87f-catalog-content\") pod \"community-operators-md79j\" (UID: \"136a6984-d6a1-4d8c-b703-c6a66402c87f\") " pod="openshift-marketplace/community-operators-md79j" Feb 02 22:11:27 crc kubenswrapper[4789]: I0202 22:11:27.486830 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mnj9\" (UniqueName: \"kubernetes.io/projected/136a6984-d6a1-4d8c-b703-c6a66402c87f-kube-api-access-2mnj9\") pod \"community-operators-md79j\" (UID: \"136a6984-d6a1-4d8c-b703-c6a66402c87f\") " pod="openshift-marketplace/community-operators-md79j" Feb 02 22:11:27 crc kubenswrapper[4789]: I0202 22:11:27.635407 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-md79j" Feb 02 22:11:28 crc kubenswrapper[4789]: I0202 22:11:28.153037 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-md79j"] Feb 02 22:11:28 crc kubenswrapper[4789]: I0202 22:11:28.336510 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-md79j" event={"ID":"136a6984-d6a1-4d8c-b703-c6a66402c87f","Type":"ContainerStarted","Data":"64cb33632003acb5745e0e2cb2fee89fe08c3fde637533d02c7091b40d4ea420"} Feb 02 22:11:29 crc kubenswrapper[4789]: I0202 22:11:29.349859 4789 generic.go:334] "Generic (PLEG): container finished" podID="136a6984-d6a1-4d8c-b703-c6a66402c87f" containerID="ade54605617b539f27d27cd591e69bc086e8e3f27b591dc545ec9f5641751cec" exitCode=0 Feb 02 22:11:29 crc kubenswrapper[4789]: I0202 22:11:29.349926 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-md79j" event={"ID":"136a6984-d6a1-4d8c-b703-c6a66402c87f","Type":"ContainerDied","Data":"ade54605617b539f27d27cd591e69bc086e8e3f27b591dc545ec9f5641751cec"} Feb 02 22:11:30 crc kubenswrapper[4789]: I0202 22:11:30.364721 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-md79j" event={"ID":"136a6984-d6a1-4d8c-b703-c6a66402c87f","Type":"ContainerStarted","Data":"275ec789a41e2d4a80ff700ff9a4a18cbd365fab707f7110d551a426bd4b6927"} Feb 02 22:11:31 crc kubenswrapper[4789]: I0202 22:11:31.375362 4789 generic.go:334] "Generic (PLEG): container finished" podID="136a6984-d6a1-4d8c-b703-c6a66402c87f" containerID="275ec789a41e2d4a80ff700ff9a4a18cbd365fab707f7110d551a426bd4b6927" exitCode=0 Feb 02 22:11:31 crc kubenswrapper[4789]: I0202 22:11:31.375455 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-md79j" event={"ID":"136a6984-d6a1-4d8c-b703-c6a66402c87f","Type":"ContainerDied","Data":"275ec789a41e2d4a80ff700ff9a4a18cbd365fab707f7110d551a426bd4b6927"} Feb 02 22:11:32 crc kubenswrapper[4789]: I0202 22:11:32.390340 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-md79j" event={"ID":"136a6984-d6a1-4d8c-b703-c6a66402c87f","Type":"ContainerStarted","Data":"20a73eaef00098644081343bbf34d028cfe6e5d28b9f11dba53631f97ce2830c"} Feb 02 22:11:32 crc kubenswrapper[4789]: I0202 22:11:32.422806 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-md79j" podStartSLOduration=2.948697999 podStartE2EDuration="5.42278042s" podCreationTimestamp="2026-02-02 22:11:27 +0000 UTC" firstStartedPulling="2026-02-02 22:11:29.354409542 +0000 UTC m=+3109.649434601" lastFinishedPulling="2026-02-02 22:11:31.828492013 +0000 UTC m=+3112.123517022" observedRunningTime="2026-02-02 22:11:32.412458659 +0000 UTC m=+3112.707483718" watchObservedRunningTime="2026-02-02 22:11:32.42278042 +0000 UTC m=+3112.717805479" Feb 02 22:11:37 crc kubenswrapper[4789]: I0202 22:11:37.636074 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-md79j" Feb 02 22:11:37 crc kubenswrapper[4789]: I0202 22:11:37.638662 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-md79j" Feb 02 22:11:37 crc kubenswrapper[4789]: I0202 22:11:37.716451 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-md79j" Feb 02 22:11:38 crc kubenswrapper[4789]: I0202 22:11:38.482514 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-md79j" Feb 02 22:11:38 crc kubenswrapper[4789]: I0202 22:11:38.536543 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-md79j"] Feb 02 22:11:40 crc kubenswrapper[4789]: I0202 22:11:40.469742 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-md79j" podUID="136a6984-d6a1-4d8c-b703-c6a66402c87f" containerName="registry-server" containerID="cri-o://20a73eaef00098644081343bbf34d028cfe6e5d28b9f11dba53631f97ce2830c" gracePeriod=2 Feb 02 22:11:40 crc kubenswrapper[4789]: E0202 22:11:40.687718 4789 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod136a6984_d6a1_4d8c_b703_c6a66402c87f.slice/crio-conmon-20a73eaef00098644081343bbf34d028cfe6e5d28b9f11dba53631f97ce2830c.scope\": RecentStats: unable to find data in memory cache]" Feb 02 22:11:40 crc kubenswrapper[4789]: I0202 22:11:40.952022 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-md79j" Feb 02 22:11:41 crc kubenswrapper[4789]: I0202 22:11:41.130688 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/136a6984-d6a1-4d8c-b703-c6a66402c87f-utilities\") pod \"136a6984-d6a1-4d8c-b703-c6a66402c87f\" (UID: \"136a6984-d6a1-4d8c-b703-c6a66402c87f\") " Feb 02 22:11:41 crc kubenswrapper[4789]: I0202 22:11:41.130831 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/136a6984-d6a1-4d8c-b703-c6a66402c87f-catalog-content\") pod \"136a6984-d6a1-4d8c-b703-c6a66402c87f\" (UID: \"136a6984-d6a1-4d8c-b703-c6a66402c87f\") " Feb 02 22:11:41 crc kubenswrapper[4789]: I0202 22:11:41.130977 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mnj9\" (UniqueName: \"kubernetes.io/projected/136a6984-d6a1-4d8c-b703-c6a66402c87f-kube-api-access-2mnj9\") pod \"136a6984-d6a1-4d8c-b703-c6a66402c87f\" (UID: \"136a6984-d6a1-4d8c-b703-c6a66402c87f\") " Feb 02 22:11:41 crc kubenswrapper[4789]: I0202 22:11:41.132064 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/136a6984-d6a1-4d8c-b703-c6a66402c87f-utilities" (OuterVolumeSpecName: "utilities") pod "136a6984-d6a1-4d8c-b703-c6a66402c87f" (UID: "136a6984-d6a1-4d8c-b703-c6a66402c87f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 22:11:41 crc kubenswrapper[4789]: I0202 22:11:41.140452 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/136a6984-d6a1-4d8c-b703-c6a66402c87f-kube-api-access-2mnj9" (OuterVolumeSpecName: "kube-api-access-2mnj9") pod "136a6984-d6a1-4d8c-b703-c6a66402c87f" (UID: "136a6984-d6a1-4d8c-b703-c6a66402c87f"). InnerVolumeSpecName "kube-api-access-2mnj9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 22:11:41 crc kubenswrapper[4789]: I0202 22:11:41.227384 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/136a6984-d6a1-4d8c-b703-c6a66402c87f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "136a6984-d6a1-4d8c-b703-c6a66402c87f" (UID: "136a6984-d6a1-4d8c-b703-c6a66402c87f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 22:11:41 crc kubenswrapper[4789]: I0202 22:11:41.233521 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mnj9\" (UniqueName: \"kubernetes.io/projected/136a6984-d6a1-4d8c-b703-c6a66402c87f-kube-api-access-2mnj9\") on node \"crc\" DevicePath \"\"" Feb 02 22:11:41 crc kubenswrapper[4789]: I0202 22:11:41.233574 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/136a6984-d6a1-4d8c-b703-c6a66402c87f-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 22:11:41 crc kubenswrapper[4789]: I0202 22:11:41.233622 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/136a6984-d6a1-4d8c-b703-c6a66402c87f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 22:11:41 crc kubenswrapper[4789]: I0202 22:11:41.482243 4789 generic.go:334] "Generic (PLEG): container finished" podID="136a6984-d6a1-4d8c-b703-c6a66402c87f" containerID="20a73eaef00098644081343bbf34d028cfe6e5d28b9f11dba53631f97ce2830c" exitCode=0 Feb 02 22:11:41 crc kubenswrapper[4789]: I0202 22:11:41.482341 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-md79j" Feb 02 22:11:41 crc kubenswrapper[4789]: I0202 22:11:41.482325 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-md79j" event={"ID":"136a6984-d6a1-4d8c-b703-c6a66402c87f","Type":"ContainerDied","Data":"20a73eaef00098644081343bbf34d028cfe6e5d28b9f11dba53631f97ce2830c"} Feb 02 22:11:41 crc kubenswrapper[4789]: I0202 22:11:41.482909 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-md79j" event={"ID":"136a6984-d6a1-4d8c-b703-c6a66402c87f","Type":"ContainerDied","Data":"64cb33632003acb5745e0e2cb2fee89fe08c3fde637533d02c7091b40d4ea420"} Feb 02 22:11:41 crc kubenswrapper[4789]: I0202 22:11:41.482961 4789 scope.go:117] "RemoveContainer" containerID="20a73eaef00098644081343bbf34d028cfe6e5d28b9f11dba53631f97ce2830c" Feb 02 22:11:41 crc kubenswrapper[4789]: I0202 22:11:41.515538 4789 scope.go:117] "RemoveContainer" containerID="275ec789a41e2d4a80ff700ff9a4a18cbd365fab707f7110d551a426bd4b6927" Feb 02 22:11:41 crc kubenswrapper[4789]: I0202 22:11:41.541910 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-md79j"] Feb 02 22:11:41 crc kubenswrapper[4789]: I0202 22:11:41.553347 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-md79j"] Feb 02 22:11:41 crc kubenswrapper[4789]: I0202 22:11:41.564222 4789 scope.go:117] "RemoveContainer" containerID="ade54605617b539f27d27cd591e69bc086e8e3f27b591dc545ec9f5641751cec" Feb 02 22:11:41 crc kubenswrapper[4789]: I0202 22:11:41.596240 4789 scope.go:117] "RemoveContainer" containerID="20a73eaef00098644081343bbf34d028cfe6e5d28b9f11dba53631f97ce2830c" Feb 02 22:11:41 crc kubenswrapper[4789]: E0202 22:11:41.597319 4789 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20a73eaef00098644081343bbf34d028cfe6e5d28b9f11dba53631f97ce2830c\": container with ID starting with 20a73eaef00098644081343bbf34d028cfe6e5d28b9f11dba53631f97ce2830c not found: ID does not exist" containerID="20a73eaef00098644081343bbf34d028cfe6e5d28b9f11dba53631f97ce2830c" Feb 02 22:11:41 crc kubenswrapper[4789]: I0202 22:11:41.597389 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20a73eaef00098644081343bbf34d028cfe6e5d28b9f11dba53631f97ce2830c"} err="failed to get container status \"20a73eaef00098644081343bbf34d028cfe6e5d28b9f11dba53631f97ce2830c\": rpc error: code = NotFound desc = could not find container \"20a73eaef00098644081343bbf34d028cfe6e5d28b9f11dba53631f97ce2830c\": container with ID starting with 20a73eaef00098644081343bbf34d028cfe6e5d28b9f11dba53631f97ce2830c not found: ID does not exist" Feb 02 22:11:41 crc kubenswrapper[4789]: I0202 22:11:41.597435 4789 scope.go:117] "RemoveContainer" containerID="275ec789a41e2d4a80ff700ff9a4a18cbd365fab707f7110d551a426bd4b6927" Feb 02 22:11:41 crc kubenswrapper[4789]: E0202 22:11:41.598567 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"275ec789a41e2d4a80ff700ff9a4a18cbd365fab707f7110d551a426bd4b6927\": container with ID starting with 275ec789a41e2d4a80ff700ff9a4a18cbd365fab707f7110d551a426bd4b6927 not found: ID does not exist" containerID="275ec789a41e2d4a80ff700ff9a4a18cbd365fab707f7110d551a426bd4b6927" Feb 02 22:11:41 crc kubenswrapper[4789]: I0202 22:11:41.598752 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"275ec789a41e2d4a80ff700ff9a4a18cbd365fab707f7110d551a426bd4b6927"} err="failed to get container status \"275ec789a41e2d4a80ff700ff9a4a18cbd365fab707f7110d551a426bd4b6927\": rpc error: code = NotFound desc = could not find container \"275ec789a41e2d4a80ff700ff9a4a18cbd365fab707f7110d551a426bd4b6927\": container with ID starting with 275ec789a41e2d4a80ff700ff9a4a18cbd365fab707f7110d551a426bd4b6927 not found: ID does not exist" Feb 02 22:11:41 crc kubenswrapper[4789]: I0202 22:11:41.598812 4789 scope.go:117] "RemoveContainer" containerID="ade54605617b539f27d27cd591e69bc086e8e3f27b591dc545ec9f5641751cec" Feb 02 22:11:41 crc kubenswrapper[4789]: E0202 22:11:41.599354 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ade54605617b539f27d27cd591e69bc086e8e3f27b591dc545ec9f5641751cec\": container with ID starting with ade54605617b539f27d27cd591e69bc086e8e3f27b591dc545ec9f5641751cec not found: ID does not exist" containerID="ade54605617b539f27d27cd591e69bc086e8e3f27b591dc545ec9f5641751cec" Feb 02 22:11:41 crc kubenswrapper[4789]: I0202 22:11:41.599417 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ade54605617b539f27d27cd591e69bc086e8e3f27b591dc545ec9f5641751cec"} err="failed to get container status \"ade54605617b539f27d27cd591e69bc086e8e3f27b591dc545ec9f5641751cec\": rpc error: code = NotFound desc = could not find container \"ade54605617b539f27d27cd591e69bc086e8e3f27b591dc545ec9f5641751cec\": container with ID starting with ade54605617b539f27d27cd591e69bc086e8e3f27b591dc545ec9f5641751cec not found: ID does not exist" Feb 02 22:11:42 crc kubenswrapper[4789]: I0202 22:11:42.435077 4789 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="136a6984-d6a1-4d8c-b703-c6a66402c87f" path="/var/lib/kubelet/pods/136a6984-d6a1-4d8c-b703-c6a66402c87f/volumes" Feb 02 22:11:52 crc kubenswrapper[4789]: I0202 22:11:52.841347 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 22:11:52 crc kubenswrapper[4789]: I0202 22:11:52.841795 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 22:11:52 crc kubenswrapper[4789]: I0202 22:11:52.841831 4789 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" Feb 02 22:11:52 crc kubenswrapper[4789]: I0202 22:11:52.842376 4789 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f751ab6e20416c55ada057bcc369a80021ff56d1c6434e7eaf32a7fd2e325226"} pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 22:11:52 crc kubenswrapper[4789]: I0202 22:11:52.842439 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" containerID="cri-o://f751ab6e20416c55ada057bcc369a80021ff56d1c6434e7eaf32a7fd2e325226" gracePeriod=600 Feb 02 22:11:52 crc kubenswrapper[4789]: E0202 22:11:52.970618 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:11:53 crc kubenswrapper[4789]: I0202 22:11:53.613045 4789 generic.go:334] "Generic (PLEG): container finished" podID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerID="f751ab6e20416c55ada057bcc369a80021ff56d1c6434e7eaf32a7fd2e325226" exitCode=0 Feb 02 22:11:53 crc kubenswrapper[4789]: I0202 22:11:53.613179 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" event={"ID":"bdf018b4-1451-4d37-be6e-05802b67c73e","Type":"ContainerDied","Data":"f751ab6e20416c55ada057bcc369a80021ff56d1c6434e7eaf32a7fd2e325226"} Feb 02 22:11:53 crc kubenswrapper[4789]: I0202 22:11:53.613413 4789 scope.go:117] "RemoveContainer" containerID="db3d2c7fc44410f68ac067079642b1953acd5044a4217991d998ef7063d4d275" Feb 02 22:11:53 crc kubenswrapper[4789]: I0202 22:11:53.614294 4789 scope.go:117] "RemoveContainer" containerID="f751ab6e20416c55ada057bcc369a80021ff56d1c6434e7eaf32a7fd2e325226" Feb 02 22:11:53 crc kubenswrapper[4789]: E0202 22:11:53.614728 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:12:08 crc kubenswrapper[4789]: I0202 22:12:08.419805 4789 scope.go:117] "RemoveContainer" containerID="f751ab6e20416c55ada057bcc369a80021ff56d1c6434e7eaf32a7fd2e325226" Feb 02 22:12:08 crc kubenswrapper[4789]: E0202 22:12:08.421104 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:12:19 crc kubenswrapper[4789]: I0202 22:12:19.431863 4789 scope.go:117] "RemoveContainer" containerID="f751ab6e20416c55ada057bcc369a80021ff56d1c6434e7eaf32a7fd2e325226" Feb 02 22:12:19 crc kubenswrapper[4789]: E0202 22:12:19.432815 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:12:33 crc kubenswrapper[4789]: I0202 22:12:33.420138 4789 scope.go:117] "RemoveContainer" containerID="f751ab6e20416c55ada057bcc369a80021ff56d1c6434e7eaf32a7fd2e325226" Feb 02 22:12:33 crc kubenswrapper[4789]: E0202 22:12:33.422203 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:12:44 crc kubenswrapper[4789]: I0202 22:12:44.421309 4789 scope.go:117] "RemoveContainer" containerID="f751ab6e20416c55ada057bcc369a80021ff56d1c6434e7eaf32a7fd2e325226" Feb 02 22:12:44 crc kubenswrapper[4789]: E0202 22:12:44.422417 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:12:56 crc kubenswrapper[4789]: I0202 22:12:56.420308 4789 scope.go:117] "RemoveContainer" containerID="f751ab6e20416c55ada057bcc369a80021ff56d1c6434e7eaf32a7fd2e325226" Feb 02 22:12:56 crc kubenswrapper[4789]: E0202 22:12:56.421230 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:13:11 crc kubenswrapper[4789]: I0202 22:13:11.420435 4789 scope.go:117] "RemoveContainer" containerID="f751ab6e20416c55ada057bcc369a80021ff56d1c6434e7eaf32a7fd2e325226" Feb 02 22:13:11 crc kubenswrapper[4789]: E0202 22:13:11.421707 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:13:25 crc kubenswrapper[4789]: I0202 22:13:25.419918 4789 scope.go:117] "RemoveContainer" containerID="f751ab6e20416c55ada057bcc369a80021ff56d1c6434e7eaf32a7fd2e325226" Feb 02 22:13:25 crc kubenswrapper[4789]: E0202 22:13:25.421997 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:13:39 crc kubenswrapper[4789]: I0202 22:13:39.420206 4789 scope.go:117] "RemoveContainer" containerID="f751ab6e20416c55ada057bcc369a80021ff56d1c6434e7eaf32a7fd2e325226" Feb 02 22:13:39 crc kubenswrapper[4789]: E0202 22:13:39.421248 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:13:53 crc kubenswrapper[4789]: I0202 22:13:53.419664 4789 scope.go:117] "RemoveContainer" containerID="f751ab6e20416c55ada057bcc369a80021ff56d1c6434e7eaf32a7fd2e325226" Feb 02 22:13:53 crc kubenswrapper[4789]: E0202 22:13:53.420573 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:14:07 crc kubenswrapper[4789]: I0202 22:14:07.419708 4789 scope.go:117] "RemoveContainer" containerID="f751ab6e20416c55ada057bcc369a80021ff56d1c6434e7eaf32a7fd2e325226" Feb 02 22:14:07 crc kubenswrapper[4789]: E0202 22:14:07.420354 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" 
podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:14:19 crc kubenswrapper[4789]: I0202 22:14:19.419661 4789 scope.go:117] "RemoveContainer" containerID="f751ab6e20416c55ada057bcc369a80021ff56d1c6434e7eaf32a7fd2e325226" Feb 02 22:14:19 crc kubenswrapper[4789]: E0202 22:14:19.420607 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:14:32 crc kubenswrapper[4789]: I0202 22:14:32.419614 4789 scope.go:117] "RemoveContainer" containerID="f751ab6e20416c55ada057bcc369a80021ff56d1c6434e7eaf32a7fd2e325226" Feb 02 22:14:32 crc kubenswrapper[4789]: E0202 22:14:32.420926 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:14:46 crc kubenswrapper[4789]: I0202 22:14:46.680914 4789 scope.go:117] "RemoveContainer" containerID="f751ab6e20416c55ada057bcc369a80021ff56d1c6434e7eaf32a7fd2e325226" Feb 02 22:14:46 crc kubenswrapper[4789]: E0202 22:14:46.681657 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:15:00 crc kubenswrapper[4789]: I0202 22:15:00.161309 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501175-f8pck"] Feb 02 22:15:00 crc kubenswrapper[4789]: E0202 22:15:00.162401 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="136a6984-d6a1-4d8c-b703-c6a66402c87f" containerName="registry-server" Feb 02 22:15:00 crc kubenswrapper[4789]: I0202 22:15:00.162424 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="136a6984-d6a1-4d8c-b703-c6a66402c87f" containerName="registry-server" Feb 02 22:15:00 crc kubenswrapper[4789]: E0202 22:15:00.162470 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="136a6984-d6a1-4d8c-b703-c6a66402c87f" containerName="extract-content" Feb 02 22:15:00 crc kubenswrapper[4789]: I0202 22:15:00.162485 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="136a6984-d6a1-4d8c-b703-c6a66402c87f" containerName="extract-content" Feb 02 22:15:00 crc kubenswrapper[4789]: E0202 22:15:00.162506 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="136a6984-d6a1-4d8c-b703-c6a66402c87f" containerName="extract-utilities" Feb 02 22:15:00 crc kubenswrapper[4789]: I0202 22:15:00.162519 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="136a6984-d6a1-4d8c-b703-c6a66402c87f" containerName="extract-utilities" Feb 02 22:15:00 crc kubenswrapper[4789]: I0202 22:15:00.162827 4789 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="136a6984-d6a1-4d8c-b703-c6a66402c87f" containerName="registry-server" Feb 02 22:15:00 crc kubenswrapper[4789]: I0202 22:15:00.163656 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501175-f8pck" Feb 02 22:15:00 crc kubenswrapper[4789]: I0202 22:15:00.167296 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 22:15:00 crc kubenswrapper[4789]: I0202 22:15:00.167318 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 22:15:00 crc kubenswrapper[4789]: I0202 22:15:00.182789 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501175-f8pck"] Feb 02 22:15:00 crc kubenswrapper[4789]: I0202 22:15:00.191299 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/52393785-e3b0-462a-9dd2-269523de0499-secret-volume\") pod \"collect-profiles-29501175-f8pck\" (UID: \"52393785-e3b0-462a-9dd2-269523de0499\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501175-f8pck" Feb 02 22:15:00 crc kubenswrapper[4789]: I0202 22:15:00.191354 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52393785-e3b0-462a-9dd2-269523de0499-config-volume\") pod \"collect-profiles-29501175-f8pck\" (UID: \"52393785-e3b0-462a-9dd2-269523de0499\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501175-f8pck" Feb 02 22:15:00 crc kubenswrapper[4789]: I0202 22:15:00.191444 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h58t4\" (UniqueName: \"kubernetes.io/projected/52393785-e3b0-462a-9dd2-269523de0499-kube-api-access-h58t4\") pod \"collect-profiles-29501175-f8pck\" (UID: \"52393785-e3b0-462a-9dd2-269523de0499\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501175-f8pck" Feb 02 22:15:00 crc kubenswrapper[4789]: I0202 22:15:00.292387 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h58t4\" (UniqueName: \"kubernetes.io/projected/52393785-e3b0-462a-9dd2-269523de0499-kube-api-access-h58t4\") pod \"collect-profiles-29501175-f8pck\" (UID: \"52393785-e3b0-462a-9dd2-269523de0499\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501175-f8pck" Feb 02 22:15:00 crc kubenswrapper[4789]: I0202 22:15:00.292524 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/52393785-e3b0-462a-9dd2-269523de0499-secret-volume\") pod \"collect-profiles-29501175-f8pck\" (UID: \"52393785-e3b0-462a-9dd2-269523de0499\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501175-f8pck" Feb 02 22:15:00 crc kubenswrapper[4789]: I0202 22:15:00.292602 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52393785-e3b0-462a-9dd2-269523de0499-config-volume\") pod \"collect-profiles-29501175-f8pck\" (UID: \"52393785-e3b0-462a-9dd2-269523de0499\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501175-f8pck" Feb 02 22:15:00 crc 
kubenswrapper[4789]: I0202 22:15:00.293872 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52393785-e3b0-462a-9dd2-269523de0499-config-volume\") pod \"collect-profiles-29501175-f8pck\" (UID: \"52393785-e3b0-462a-9dd2-269523de0499\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501175-f8pck" Feb 02 22:15:00 crc kubenswrapper[4789]: I0202 22:15:00.298971 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/52393785-e3b0-462a-9dd2-269523de0499-secret-volume\") pod \"collect-profiles-29501175-f8pck\" (UID: \"52393785-e3b0-462a-9dd2-269523de0499\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501175-f8pck" Feb 02 22:15:00 crc kubenswrapper[4789]: I0202 22:15:00.324092 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h58t4\" (UniqueName: \"kubernetes.io/projected/52393785-e3b0-462a-9dd2-269523de0499-kube-api-access-h58t4\") pod \"collect-profiles-29501175-f8pck\" (UID: \"52393785-e3b0-462a-9dd2-269523de0499\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501175-f8pck" Feb 02 22:15:00 crc kubenswrapper[4789]: I0202 22:15:00.490285 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501175-f8pck" Feb 02 22:15:01 crc kubenswrapper[4789]: I0202 22:15:01.021266 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501175-f8pck"] Feb 02 22:15:01 crc kubenswrapper[4789]: I0202 22:15:01.419737 4789 scope.go:117] "RemoveContainer" containerID="f751ab6e20416c55ada057bcc369a80021ff56d1c6434e7eaf32a7fd2e325226" Feb 02 22:15:01 crc kubenswrapper[4789]: E0202 22:15:01.420439 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:15:01 crc kubenswrapper[4789]: I0202 22:15:01.828043 4789 generic.go:334] "Generic (PLEG): container finished" podID="52393785-e3b0-462a-9dd2-269523de0499" containerID="9d88d12d6bc102dfda5f7b92251e3382635382ed2af0a183a52fa03f10909b0d" exitCode=0 Feb 02 22:15:01 crc kubenswrapper[4789]: I0202 22:15:01.828102 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501175-f8pck" event={"ID":"52393785-e3b0-462a-9dd2-269523de0499","Type":"ContainerDied","Data":"9d88d12d6bc102dfda5f7b92251e3382635382ed2af0a183a52fa03f10909b0d"} Feb 02 22:15:01 crc kubenswrapper[4789]: I0202 22:15:01.828142 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501175-f8pck" event={"ID":"52393785-e3b0-462a-9dd2-269523de0499","Type":"ContainerStarted","Data":"9fbed5b242ed94eede76b7d3a4532d531997213c15787dfc7aef0a124456b3be"} Feb 02 22:15:03 crc kubenswrapper[4789]: I0202 22:15:03.186260 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501175-f8pck" Feb 02 22:15:03 crc kubenswrapper[4789]: I0202 22:15:03.239747 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h58t4\" (UniqueName: \"kubernetes.io/projected/52393785-e3b0-462a-9dd2-269523de0499-kube-api-access-h58t4\") pod \"52393785-e3b0-462a-9dd2-269523de0499\" (UID: \"52393785-e3b0-462a-9dd2-269523de0499\") " Feb 02 22:15:03 crc kubenswrapper[4789]: I0202 22:15:03.239846 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/52393785-e3b0-462a-9dd2-269523de0499-secret-volume\") pod \"52393785-e3b0-462a-9dd2-269523de0499\" (UID: \"52393785-e3b0-462a-9dd2-269523de0499\") " Feb 02 22:15:03 crc kubenswrapper[4789]: I0202 22:15:03.239903 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52393785-e3b0-462a-9dd2-269523de0499-config-volume\") pod \"52393785-e3b0-462a-9dd2-269523de0499\" (UID: \"52393785-e3b0-462a-9dd2-269523de0499\") " Feb 02 22:15:03 crc kubenswrapper[4789]: I0202 22:15:03.240802 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52393785-e3b0-462a-9dd2-269523de0499-config-volume" (OuterVolumeSpecName: "config-volume") pod "52393785-e3b0-462a-9dd2-269523de0499" (UID: "52393785-e3b0-462a-9dd2-269523de0499"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 22:15:03 crc kubenswrapper[4789]: I0202 22:15:03.244494 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52393785-e3b0-462a-9dd2-269523de0499-kube-api-access-h58t4" (OuterVolumeSpecName: "kube-api-access-h58t4") pod "52393785-e3b0-462a-9dd2-269523de0499" (UID: "52393785-e3b0-462a-9dd2-269523de0499"). InnerVolumeSpecName "kube-api-access-h58t4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 22:15:03 crc kubenswrapper[4789]: I0202 22:15:03.244573 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52393785-e3b0-462a-9dd2-269523de0499-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "52393785-e3b0-462a-9dd2-269523de0499" (UID: "52393785-e3b0-462a-9dd2-269523de0499"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 22:15:03 crc kubenswrapper[4789]: I0202 22:15:03.341734 4789 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52393785-e3b0-462a-9dd2-269523de0499-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 22:15:03 crc kubenswrapper[4789]: I0202 22:15:03.341807 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h58t4\" (UniqueName: \"kubernetes.io/projected/52393785-e3b0-462a-9dd2-269523de0499-kube-api-access-h58t4\") on node \"crc\" DevicePath \"\"" Feb 02 22:15:03 crc kubenswrapper[4789]: I0202 22:15:03.341836 4789 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/52393785-e3b0-462a-9dd2-269523de0499-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 22:15:03 crc kubenswrapper[4789]: I0202 22:15:03.849003 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501175-f8pck" Feb 02 22:15:03 crc kubenswrapper[4789]: I0202 22:15:03.848947 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501175-f8pck" event={"ID":"52393785-e3b0-462a-9dd2-269523de0499","Type":"ContainerDied","Data":"9fbed5b242ed94eede76b7d3a4532d531997213c15787dfc7aef0a124456b3be"} Feb 02 22:15:03 crc kubenswrapper[4789]: I0202 22:15:03.849723 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fbed5b242ed94eede76b7d3a4532d531997213c15787dfc7aef0a124456b3be" Feb 02 22:15:04 crc kubenswrapper[4789]: I0202 22:15:04.255514 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501130-2fpc8"] Feb 02 22:15:04 crc kubenswrapper[4789]: I0202 22:15:04.262165 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501130-2fpc8"] Feb 02 22:15:04 crc kubenswrapper[4789]: I0202 22:15:04.445502 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69f791f2-1e25-45d3-89bd-5269712d52b2" path="/var/lib/kubelet/pods/69f791f2-1e25-45d3-89bd-5269712d52b2/volumes" Feb 02 22:15:16 crc kubenswrapper[4789]: I0202 22:15:16.420947 4789 scope.go:117] "RemoveContainer" containerID="f751ab6e20416c55ada057bcc369a80021ff56d1c6434e7eaf32a7fd2e325226" Feb 02 22:15:16 crc kubenswrapper[4789]: E0202 22:15:16.421871 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:15:30 crc kubenswrapper[4789]: I0202 22:15:30.428713 4789 scope.go:117] "RemoveContainer" containerID="f751ab6e20416c55ada057bcc369a80021ff56d1c6434e7eaf32a7fd2e325226" Feb 02 22:15:30 crc kubenswrapper[4789]: E0202 22:15:30.429813 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:15:39 crc kubenswrapper[4789]: I0202 22:15:39.942679 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n9sxf"] Feb 02 22:15:39 crc kubenswrapper[4789]: E0202 22:15:39.943849 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52393785-e3b0-462a-9dd2-269523de0499" containerName="collect-profiles" Feb 02 22:15:39 crc kubenswrapper[4789]: I0202 22:15:39.943877 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="52393785-e3b0-462a-9dd2-269523de0499" containerName="collect-profiles" Feb 02 22:15:39 crc kubenswrapper[4789]: I0202 22:15:39.944215 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="52393785-e3b0-462a-9dd2-269523de0499" containerName="collect-profiles" Feb 02 22:15:39 crc kubenswrapper[4789]: I0202 22:15:39.945989 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n9sxf" Feb 02 22:15:40 crc kubenswrapper[4789]: I0202 22:15:39.968403 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n9sxf"] Feb 02 22:15:40 crc kubenswrapper[4789]: I0202 22:15:40.061340 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd5eda0b-bcf3-4550-9336-f903bd386db9-utilities\") pod \"redhat-marketplace-n9sxf\" (UID: \"dd5eda0b-bcf3-4550-9336-f903bd386db9\") " pod="openshift-marketplace/redhat-marketplace-n9sxf" Feb 02 22:15:40 crc kubenswrapper[4789]: I0202 22:15:40.061795 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fwz4\" (UniqueName: \"kubernetes.io/projected/dd5eda0b-bcf3-4550-9336-f903bd386db9-kube-api-access-5fwz4\") pod \"redhat-marketplace-n9sxf\" (UID: \"dd5eda0b-bcf3-4550-9336-f903bd386db9\") " pod="openshift-marketplace/redhat-marketplace-n9sxf" Feb 02 22:15:40 crc kubenswrapper[4789]: I0202 22:15:40.062097 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd5eda0b-bcf3-4550-9336-f903bd386db9-catalog-content\") pod \"redhat-marketplace-n9sxf\" (UID: \"dd5eda0b-bcf3-4550-9336-f903bd386db9\") " pod="openshift-marketplace/redhat-marketplace-n9sxf" Feb 02 22:15:40 crc kubenswrapper[4789]: I0202 22:15:40.135941 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bjpps"] Feb 02 22:15:40 crc kubenswrapper[4789]: I0202 22:15:40.138458 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bjpps" Feb 02 22:15:40 crc kubenswrapper[4789]: I0202 22:15:40.163812 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bjpps"] Feb 02 22:15:40 crc kubenswrapper[4789]: I0202 22:15:40.165042 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd5eda0b-bcf3-4550-9336-f903bd386db9-utilities\") pod \"redhat-marketplace-n9sxf\" (UID: \"dd5eda0b-bcf3-4550-9336-f903bd386db9\") " pod="openshift-marketplace/redhat-marketplace-n9sxf" Feb 02 22:15:40 crc kubenswrapper[4789]: I0202 22:15:40.165396 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fwz4\" (UniqueName: \"kubernetes.io/projected/dd5eda0b-bcf3-4550-9336-f903bd386db9-kube-api-access-5fwz4\") pod \"redhat-marketplace-n9sxf\" (UID: \"dd5eda0b-bcf3-4550-9336-f903bd386db9\") " pod="openshift-marketplace/redhat-marketplace-n9sxf" Feb 02 22:15:40 crc kubenswrapper[4789]: I0202 22:15:40.165786 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd5eda0b-bcf3-4550-9336-f903bd386db9-utilities\") pod \"redhat-marketplace-n9sxf\" (UID: \"dd5eda0b-bcf3-4550-9336-f903bd386db9\") " pod="openshift-marketplace/redhat-marketplace-n9sxf" Feb 02 22:15:40 crc kubenswrapper[4789]: I0202 22:15:40.166024 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd5eda0b-bcf3-4550-9336-f903bd386db9-catalog-content\") pod \"redhat-marketplace-n9sxf\" (UID: \"dd5eda0b-bcf3-4550-9336-f903bd386db9\") " 
pod="openshift-marketplace/redhat-marketplace-n9sxf" Feb 02 22:15:40 crc kubenswrapper[4789]: I0202 22:15:40.166372 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd5eda0b-bcf3-4550-9336-f903bd386db9-catalog-content\") pod \"redhat-marketplace-n9sxf\" (UID: \"dd5eda0b-bcf3-4550-9336-f903bd386db9\") " pod="openshift-marketplace/redhat-marketplace-n9sxf" Feb 02 22:15:40 crc kubenswrapper[4789]: I0202 22:15:40.199810 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fwz4\" (UniqueName: \"kubernetes.io/projected/dd5eda0b-bcf3-4550-9336-f903bd386db9-kube-api-access-5fwz4\") pod \"redhat-marketplace-n9sxf\" (UID: \"dd5eda0b-bcf3-4550-9336-f903bd386db9\") " pod="openshift-marketplace/redhat-marketplace-n9sxf" Feb 02 22:15:40 crc kubenswrapper[4789]: I0202 22:15:40.267525 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f71e3aa4-b720-4c55-8516-ff9c1d8062e1-catalog-content\") pod \"redhat-operators-bjpps\" (UID: \"f71e3aa4-b720-4c55-8516-ff9c1d8062e1\") " pod="openshift-marketplace/redhat-operators-bjpps" Feb 02 22:15:40 crc kubenswrapper[4789]: I0202 22:15:40.267598 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f71e3aa4-b720-4c55-8516-ff9c1d8062e1-utilities\") pod \"redhat-operators-bjpps\" (UID: \"f71e3aa4-b720-4c55-8516-ff9c1d8062e1\") " pod="openshift-marketplace/redhat-operators-bjpps" Feb 02 22:15:40 crc kubenswrapper[4789]: I0202 22:15:40.267640 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdsqs\" (UniqueName: \"kubernetes.io/projected/f71e3aa4-b720-4c55-8516-ff9c1d8062e1-kube-api-access-pdsqs\") pod \"redhat-operators-bjpps\" (UID: \"f71e3aa4-b720-4c55-8516-ff9c1d8062e1\") " pod="openshift-marketplace/redhat-operators-bjpps" Feb 02 22:15:40 crc kubenswrapper[4789]: I0202 22:15:40.316259 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n9sxf" Feb 02 22:15:40 crc kubenswrapper[4789]: I0202 22:15:40.368638 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f71e3aa4-b720-4c55-8516-ff9c1d8062e1-utilities\") pod \"redhat-operators-bjpps\" (UID: \"f71e3aa4-b720-4c55-8516-ff9c1d8062e1\") " pod="openshift-marketplace/redhat-operators-bjpps" Feb 02 22:15:40 crc kubenswrapper[4789]: I0202 22:15:40.368733 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdsqs\" (UniqueName: \"kubernetes.io/projected/f71e3aa4-b720-4c55-8516-ff9c1d8062e1-kube-api-access-pdsqs\") pod \"redhat-operators-bjpps\" (UID: \"f71e3aa4-b720-4c55-8516-ff9c1d8062e1\") " pod="openshift-marketplace/redhat-operators-bjpps" Feb 02 22:15:40 crc kubenswrapper[4789]: I0202 22:15:40.368800 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f71e3aa4-b720-4c55-8516-ff9c1d8062e1-catalog-content\") pod \"redhat-operators-bjpps\" (UID: \"f71e3aa4-b720-4c55-8516-ff9c1d8062e1\") " pod="openshift-marketplace/redhat-operators-bjpps" Feb 02 22:15:40 crc kubenswrapper[4789]: I0202 22:15:40.369251 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f71e3aa4-b720-4c55-8516-ff9c1d8062e1-catalog-content\") pod \"redhat-operators-bjpps\" (UID: \"f71e3aa4-b720-4c55-8516-ff9c1d8062e1\") " pod="openshift-marketplace/redhat-operators-bjpps" Feb 02 22:15:40 crc kubenswrapper[4789]: I0202 22:15:40.369734 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f71e3aa4-b720-4c55-8516-ff9c1d8062e1-utilities\") pod \"redhat-operators-bjpps\" (UID: \"f71e3aa4-b720-4c55-8516-ff9c1d8062e1\") " pod="openshift-marketplace/redhat-operators-bjpps" Feb 02 22:15:40 crc kubenswrapper[4789]: I0202 22:15:40.389474 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdsqs\" (UniqueName: \"kubernetes.io/projected/f71e3aa4-b720-4c55-8516-ff9c1d8062e1-kube-api-access-pdsqs\") pod \"redhat-operators-bjpps\" (UID: \"f71e3aa4-b720-4c55-8516-ff9c1d8062e1\") " pod="openshift-marketplace/redhat-operators-bjpps" Feb 02 22:15:40 crc kubenswrapper[4789]: I0202 22:15:40.465367 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bjpps" Feb 02 22:15:40 crc kubenswrapper[4789]: I0202 22:15:40.772976 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n9sxf"] Feb 02 22:15:40 crc kubenswrapper[4789]: I0202 22:15:40.909549 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bjpps"] Feb 02 22:15:41 crc kubenswrapper[4789]: I0202 22:15:41.154991 4789 generic.go:334] "Generic (PLEG): container finished" podID="dd5eda0b-bcf3-4550-9336-f903bd386db9" containerID="c7e6600bdfa10bb52737542135f05b448a52a2e9c53460a822548858a0853806" exitCode=0 Feb 02 22:15:41 crc kubenswrapper[4789]: I0202 22:15:41.155060 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9sxf" event={"ID":"dd5eda0b-bcf3-4550-9336-f903bd386db9","Type":"ContainerDied","Data":"c7e6600bdfa10bb52737542135f05b448a52a2e9c53460a822548858a0853806"} Feb 02 22:15:41 crc kubenswrapper[4789]: I0202 22:15:41.155085 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9sxf" event={"ID":"dd5eda0b-bcf3-4550-9336-f903bd386db9","Type":"ContainerStarted","Data":"54e99c2556c3ccb00af3c329b54dec93710af5769595441bcf26c0b9fc353fa0"} Feb 02 22:15:41 crc kubenswrapper[4789]: I0202 22:15:41.157181 4789 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 22:15:41 crc kubenswrapper[4789]: I0202 22:15:41.158022 4789 generic.go:334] "Generic (PLEG): container finished" podID="f71e3aa4-b720-4c55-8516-ff9c1d8062e1" containerID="7f533b73aee92e21daaeeb790946613406b8fb9996784b84190949447408f8d5" exitCode=0 Feb 02 22:15:41 crc kubenswrapper[4789]: I0202 22:15:41.158069 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bjpps" event={"ID":"f71e3aa4-b720-4c55-8516-ff9c1d8062e1","Type":"ContainerDied","Data":"7f533b73aee92e21daaeeb790946613406b8fb9996784b84190949447408f8d5"} Feb 02 22:15:41 crc kubenswrapper[4789]: I0202 22:15:41.158100 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bjpps" event={"ID":"f71e3aa4-b720-4c55-8516-ff9c1d8062e1","Type":"ContainerStarted","Data":"3e6bdb3b72f85368da1f14a24a4fc996f4f813f7decf4747dfae050e0b78f8e2"} Feb 02 22:15:42 crc kubenswrapper[4789]: I0202 22:15:42.193075 4789 generic.go:334] "Generic (PLEG): container finished" podID="dd5eda0b-bcf3-4550-9336-f903bd386db9" containerID="58f8aa5f3caadcfc70497de8dcdeced0bfd190ea36f1f1dea3466445aed4892f" exitCode=0 Feb 02 22:15:42 crc kubenswrapper[4789]: I0202 22:15:42.193395 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9sxf" event={"ID":"dd5eda0b-bcf3-4550-9336-f903bd386db9","Type":"ContainerDied","Data":"58f8aa5f3caadcfc70497de8dcdeced0bfd190ea36f1f1dea3466445aed4892f"} Feb 02 22:15:42 crc kubenswrapper[4789]: I0202 22:15:42.424798 4789 scope.go:117] "RemoveContainer" containerID="f751ab6e20416c55ada057bcc369a80021ff56d1c6434e7eaf32a7fd2e325226" Feb 02 22:15:42 crc kubenswrapper[4789]: E0202 22:15:42.424963 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:15:43 crc kubenswrapper[4789]: I0202 22:15:43.210075 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9sxf" event={"ID":"dd5eda0b-bcf3-4550-9336-f903bd386db9","Type":"ContainerStarted","Data":"3b8500ceefb5caa0ffffe83fcedbe196aeff4f31bdeb7c303dcca297db9fdfd0"} Feb 02 22:15:43 crc kubenswrapper[4789]: I0202 22:15:43.215168 4789 generic.go:334] "Generic (PLEG): container finished" podID="f71e3aa4-b720-4c55-8516-ff9c1d8062e1" containerID="be7deaec09317d3ebb64a65ff0f8822733a3d3816395b2ea80ed0d8b67edecba" exitCode=0 Feb 02 22:15:43 crc kubenswrapper[4789]: I0202 22:15:43.215220 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bjpps" event={"ID":"f71e3aa4-b720-4c55-8516-ff9c1d8062e1","Type":"ContainerDied","Data":"be7deaec09317d3ebb64a65ff0f8822733a3d3816395b2ea80ed0d8b67edecba"} Feb 02 22:15:43 crc kubenswrapper[4789]: I0202 22:15:43.234704 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n9sxf" podStartSLOduration=2.754631008 podStartE2EDuration="4.234683655s" podCreationTimestamp="2026-02-02 22:15:39 +0000 UTC" firstStartedPulling="2026-02-02 22:15:41.156926901 +0000 UTC m=+3361.451951930" lastFinishedPulling="2026-02-02 22:15:42.636979528 +0000 UTC m=+3362.932004577" observedRunningTime="2026-02-02 22:15:43.23133534 +0000 UTC m=+3363.526360389" watchObservedRunningTime="2026-02-02 22:15:43.234683655 +0000 UTC m=+3363.529708684" Feb 02 22:15:44 crc kubenswrapper[4789]: I0202 22:15:44.226647 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bjpps" event={"ID":"f71e3aa4-b720-4c55-8516-ff9c1d8062e1","Type":"ContainerStarted","Data":"aa6c94de685afec27b07fce71f61297c8fb08d9b017cb29b5842fb12d9340aa5"} Feb 02 22:15:44 crc kubenswrapper[4789]: I0202 22:15:44.256865 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bjpps" podStartSLOduration=1.796187883 podStartE2EDuration="4.256840565s" podCreationTimestamp="2026-02-02 22:15:40 +0000 UTC" firstStartedPulling="2026-02-02 22:15:41.161311455 +0000 UTC m=+3361.456336474" lastFinishedPulling="2026-02-02 22:15:43.621964097 +0000 UTC m=+3363.916989156" observedRunningTime="2026-02-02 22:15:44.250359082 +0000 UTC m=+3364.545384171" watchObservedRunningTime="2026-02-02 22:15:44.256840565 +0000 UTC m=+3364.551865624" Feb 02 22:15:46 crc kubenswrapper[4789]: I0202 22:15:46.701029 4789 scope.go:117] "RemoveContainer" containerID="73ccf7635eba9ed44e9de542e269c9d0af37310053e1fba884076dbda477de85" Feb 02 22:15:50 crc kubenswrapper[4789]: I0202 22:15:50.316961 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n9sxf" Feb 02 22:15:50 crc kubenswrapper[4789]: I0202 22:15:50.317420 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n9sxf" Feb 02 22:15:50 crc kubenswrapper[4789]: I0202 22:15:50.392245 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n9sxf" Feb 02 22:15:50 crc kubenswrapper[4789]: I0202 22:15:50.465951 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bjpps" Feb 02 22:15:50 crc 
kubenswrapper[4789]: I0202 22:15:50.466497 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bjpps" Feb 02 22:15:51 crc kubenswrapper[4789]: I0202 22:15:51.364841 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n9sxf" Feb 02 22:15:51 crc kubenswrapper[4789]: I0202 22:15:51.509888 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bjpps" podUID="f71e3aa4-b720-4c55-8516-ff9c1d8062e1" containerName="registry-server" probeResult="failure" output=< Feb 02 22:15:51 crc kubenswrapper[4789]: timeout: failed to connect service ":50051" within 1s Feb 02 22:15:51 crc kubenswrapper[4789]: > Feb 02 22:15:53 crc kubenswrapper[4789]: I0202 22:15:53.419805 4789 scope.go:117] "RemoveContainer" containerID="f751ab6e20416c55ada057bcc369a80021ff56d1c6434e7eaf32a7fd2e325226" Feb 02 22:15:53 crc kubenswrapper[4789]: E0202 22:15:53.420242 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:15:53 crc kubenswrapper[4789]: I0202 22:15:53.883763 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n9sxf"] Feb 02 22:15:54 crc kubenswrapper[4789]: I0202 22:15:54.319834 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n9sxf" podUID="dd5eda0b-bcf3-4550-9336-f903bd386db9" containerName="registry-server" containerID="cri-o://3b8500ceefb5caa0ffffe83fcedbe196aeff4f31bdeb7c303dcca297db9fdfd0" gracePeriod=2 Feb 02 22:15:54 crc kubenswrapper[4789]: I0202 22:15:54.818979 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n9sxf" Feb 02 22:15:54 crc kubenswrapper[4789]: I0202 22:15:54.994113 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fwz4\" (UniqueName: \"kubernetes.io/projected/dd5eda0b-bcf3-4550-9336-f903bd386db9-kube-api-access-5fwz4\") pod \"dd5eda0b-bcf3-4550-9336-f903bd386db9\" (UID: \"dd5eda0b-bcf3-4550-9336-f903bd386db9\") " Feb 02 22:15:54 crc kubenswrapper[4789]: I0202 22:15:54.994241 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd5eda0b-bcf3-4550-9336-f903bd386db9-catalog-content\") pod \"dd5eda0b-bcf3-4550-9336-f903bd386db9\" (UID: \"dd5eda0b-bcf3-4550-9336-f903bd386db9\") " Feb 02 22:15:54 crc kubenswrapper[4789]: I0202 22:15:54.994340 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd5eda0b-bcf3-4550-9336-f903bd386db9-utilities\") pod \"dd5eda0b-bcf3-4550-9336-f903bd386db9\" (UID: \"dd5eda0b-bcf3-4550-9336-f903bd386db9\") " Feb 02 22:15:54 crc kubenswrapper[4789]: I0202 22:15:54.996643 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd5eda0b-bcf3-4550-9336-f903bd386db9-utilities" (OuterVolumeSpecName: "utilities") pod "dd5eda0b-bcf3-4550-9336-f903bd386db9" (UID: "dd5eda0b-bcf3-4550-9336-f903bd386db9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 22:15:55 crc kubenswrapper[4789]: I0202 22:15:55.003794 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd5eda0b-bcf3-4550-9336-f903bd386db9-kube-api-access-5fwz4" (OuterVolumeSpecName: "kube-api-access-5fwz4") pod "dd5eda0b-bcf3-4550-9336-f903bd386db9" (UID: "dd5eda0b-bcf3-4550-9336-f903bd386db9"). InnerVolumeSpecName "kube-api-access-5fwz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 22:15:55 crc kubenswrapper[4789]: I0202 22:15:55.051856 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd5eda0b-bcf3-4550-9336-f903bd386db9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd5eda0b-bcf3-4550-9336-f903bd386db9" (UID: "dd5eda0b-bcf3-4550-9336-f903bd386db9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 22:15:55 crc kubenswrapper[4789]: I0202 22:15:55.096060 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fwz4\" (UniqueName: \"kubernetes.io/projected/dd5eda0b-bcf3-4550-9336-f903bd386db9-kube-api-access-5fwz4\") on node \"crc\" DevicePath \"\"" Feb 02 22:15:55 crc kubenswrapper[4789]: I0202 22:15:55.096120 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd5eda0b-bcf3-4550-9336-f903bd386db9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 22:15:55 crc kubenswrapper[4789]: I0202 22:15:55.096134 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd5eda0b-bcf3-4550-9336-f903bd386db9-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 22:15:55 crc kubenswrapper[4789]: I0202 22:15:55.329884 4789 generic.go:334] "Generic (PLEG): container finished" podID="dd5eda0b-bcf3-4550-9336-f903bd386db9" containerID="3b8500ceefb5caa0ffffe83fcedbe196aeff4f31bdeb7c303dcca297db9fdfd0" exitCode=0 Feb 02 22:15:55 crc kubenswrapper[4789]: I0202 22:15:55.329927 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n9sxf" Feb 02 22:15:55 crc kubenswrapper[4789]: I0202 22:15:55.329969 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9sxf" event={"ID":"dd5eda0b-bcf3-4550-9336-f903bd386db9","Type":"ContainerDied","Data":"3b8500ceefb5caa0ffffe83fcedbe196aeff4f31bdeb7c303dcca297db9fdfd0"} Feb 02 22:15:55 crc kubenswrapper[4789]: I0202 22:15:55.330696 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9sxf" event={"ID":"dd5eda0b-bcf3-4550-9336-f903bd386db9","Type":"ContainerDied","Data":"54e99c2556c3ccb00af3c329b54dec93710af5769595441bcf26c0b9fc353fa0"} Feb 02 22:15:55 crc kubenswrapper[4789]: I0202 22:15:55.330724 4789 scope.go:117] "RemoveContainer" containerID="3b8500ceefb5caa0ffffe83fcedbe196aeff4f31bdeb7c303dcca297db9fdfd0" Feb 02 22:15:55 crc kubenswrapper[4789]: I0202 22:15:55.349687 4789 scope.go:117] "RemoveContainer" containerID="58f8aa5f3caadcfc70497de8dcdeced0bfd190ea36f1f1dea3466445aed4892f" Feb 02 22:15:55 crc kubenswrapper[4789]: I0202 22:15:55.369558 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n9sxf"] Feb 02 22:15:55 crc kubenswrapper[4789]: I0202 22:15:55.369633 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n9sxf"] Feb 02 22:15:55 crc kubenswrapper[4789]: I0202 22:15:55.382923 4789 scope.go:117] "RemoveContainer" containerID="c7e6600bdfa10bb52737542135f05b448a52a2e9c53460a822548858a0853806" Feb 02 22:15:55 crc kubenswrapper[4789]: I0202 22:15:55.404216 4789 scope.go:117] "RemoveContainer" containerID="3b8500ceefb5caa0ffffe83fcedbe196aeff4f31bdeb7c303dcca297db9fdfd0" Feb 02 22:15:55 crc kubenswrapper[4789]: E0202 22:15:55.404850 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b8500ceefb5caa0ffffe83fcedbe196aeff4f31bdeb7c303dcca297db9fdfd0\": container with ID starting with 3b8500ceefb5caa0ffffe83fcedbe196aeff4f31bdeb7c303dcca297db9fdfd0 not found: ID does not exist" containerID="3b8500ceefb5caa0ffffe83fcedbe196aeff4f31bdeb7c303dcca297db9fdfd0" Feb 02 22:15:55 crc kubenswrapper[4789]: I0202 22:15:55.404882 4789 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b8500ceefb5caa0ffffe83fcedbe196aeff4f31bdeb7c303dcca297db9fdfd0"} err="failed to get container status \"3b8500ceefb5caa0ffffe83fcedbe196aeff4f31bdeb7c303dcca297db9fdfd0\": rpc error: code = NotFound desc = could not find container \"3b8500ceefb5caa0ffffe83fcedbe196aeff4f31bdeb7c303dcca297db9fdfd0\": container with ID starting with 3b8500ceefb5caa0ffffe83fcedbe196aeff4f31bdeb7c303dcca297db9fdfd0 not found: ID does not exist" Feb 02 22:15:55 crc kubenswrapper[4789]: I0202 22:15:55.404905 4789 scope.go:117] "RemoveContainer" containerID="58f8aa5f3caadcfc70497de8dcdeced0bfd190ea36f1f1dea3466445aed4892f" Feb 02 22:15:55 crc kubenswrapper[4789]: E0202 22:15:55.406102 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58f8aa5f3caadcfc70497de8dcdeced0bfd190ea36f1f1dea3466445aed4892f\": container with ID starting with 58f8aa5f3caadcfc70497de8dcdeced0bfd190ea36f1f1dea3466445aed4892f not found: ID does not exist" containerID="58f8aa5f3caadcfc70497de8dcdeced0bfd190ea36f1f1dea3466445aed4892f" Feb 02 22:15:55 crc kubenswrapper[4789]: I0202 22:15:55.406123 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58f8aa5f3caadcfc70497de8dcdeced0bfd190ea36f1f1dea3466445aed4892f"} err="failed to get container status \"58f8aa5f3caadcfc70497de8dcdeced0bfd190ea36f1f1dea3466445aed4892f\": rpc error: code = NotFound desc = could not find container \"58f8aa5f3caadcfc70497de8dcdeced0bfd190ea36f1f1dea3466445aed4892f\": container with ID starting with 58f8aa5f3caadcfc70497de8dcdeced0bfd190ea36f1f1dea3466445aed4892f not found: ID does not exist" Feb 02 22:15:55 crc kubenswrapper[4789]: I0202 22:15:55.406136 4789 scope.go:117] "RemoveContainer" containerID="c7e6600bdfa10bb52737542135f05b448a52a2e9c53460a822548858a0853806" Feb 02 22:15:55 crc kubenswrapper[4789]: E0202 22:15:55.406455 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7e6600bdfa10bb52737542135f05b448a52a2e9c53460a822548858a0853806\": container with ID starting with c7e6600bdfa10bb52737542135f05b448a52a2e9c53460a822548858a0853806 not found: ID does not exist" containerID="c7e6600bdfa10bb52737542135f05b448a52a2e9c53460a822548858a0853806" Feb 02 22:15:55 crc kubenswrapper[4789]: I0202 22:15:55.406477 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7e6600bdfa10bb52737542135f05b448a52a2e9c53460a822548858a0853806"} err="failed to get container status \"c7e6600bdfa10bb52737542135f05b448a52a2e9c53460a822548858a0853806\": rpc error: code = NotFound desc = could not find container \"c7e6600bdfa10bb52737542135f05b448a52a2e9c53460a822548858a0853806\": container with ID starting with c7e6600bdfa10bb52737542135f05b448a52a2e9c53460a822548858a0853806 not found: ID does not exist" Feb 02 22:15:56 crc kubenswrapper[4789]: I0202 22:15:56.435220 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd5eda0b-bcf3-4550-9336-f903bd386db9" path="/var/lib/kubelet/pods/dd5eda0b-bcf3-4550-9336-f903bd386db9/volumes" Feb 02 22:16:00 crc kubenswrapper[4789]: I0202 22:16:00.568909 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bjpps" Feb 02 22:16:00 crc kubenswrapper[4789]: I0202 22:16:00.623417 4789 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bjpps" Feb 02 22:16:00 crc kubenswrapper[4789]: I0202 22:16:00.807105 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bjpps"] Feb 02 22:16:02 crc kubenswrapper[4789]: I0202 22:16:02.426071 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bjpps" podUID="f71e3aa4-b720-4c55-8516-ff9c1d8062e1" containerName="registry-server" containerID="cri-o://aa6c94de685afec27b07fce71f61297c8fb08d9b017cb29b5842fb12d9340aa5" gracePeriod=2 Feb 02 22:16:02 crc kubenswrapper[4789]: I0202 22:16:02.913601 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bjpps" Feb 02 22:16:03 crc kubenswrapper[4789]: I0202 22:16:03.012927 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f71e3aa4-b720-4c55-8516-ff9c1d8062e1-utilities\") pod \"f71e3aa4-b720-4c55-8516-ff9c1d8062e1\" (UID: \"f71e3aa4-b720-4c55-8516-ff9c1d8062e1\") " Feb 02 22:16:03 crc kubenswrapper[4789]: I0202 22:16:03.013018 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdsqs\" (UniqueName: \"kubernetes.io/projected/f71e3aa4-b720-4c55-8516-ff9c1d8062e1-kube-api-access-pdsqs\") pod \"f71e3aa4-b720-4c55-8516-ff9c1d8062e1\" (UID: \"f71e3aa4-b720-4c55-8516-ff9c1d8062e1\") " Feb 02 22:16:03 crc kubenswrapper[4789]: I0202 22:16:03.013073 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f71e3aa4-b720-4c55-8516-ff9c1d8062e1-catalog-content\") pod \"f71e3aa4-b720-4c55-8516-ff9c1d8062e1\" (UID: \"f71e3aa4-b720-4c55-8516-ff9c1d8062e1\") " Feb 02 22:16:03 crc kubenswrapper[4789]: I0202 22:16:03.014414 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f71e3aa4-b720-4c55-8516-ff9c1d8062e1-utilities" (OuterVolumeSpecName: "utilities") pod "f71e3aa4-b720-4c55-8516-ff9c1d8062e1" (UID: "f71e3aa4-b720-4c55-8516-ff9c1d8062e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 22:16:03 crc kubenswrapper[4789]: I0202 22:16:03.030294 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f71e3aa4-b720-4c55-8516-ff9c1d8062e1-kube-api-access-pdsqs" (OuterVolumeSpecName: "kube-api-access-pdsqs") pod "f71e3aa4-b720-4c55-8516-ff9c1d8062e1" (UID: "f71e3aa4-b720-4c55-8516-ff9c1d8062e1"). InnerVolumeSpecName "kube-api-access-pdsqs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 22:16:03 crc kubenswrapper[4789]: I0202 22:16:03.115255 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f71e3aa4-b720-4c55-8516-ff9c1d8062e1-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 22:16:03 crc kubenswrapper[4789]: I0202 22:16:03.115305 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdsqs\" (UniqueName: \"kubernetes.io/projected/f71e3aa4-b720-4c55-8516-ff9c1d8062e1-kube-api-access-pdsqs\") on node \"crc\" DevicePath \"\"" Feb 02 22:16:03 crc kubenswrapper[4789]: I0202 22:16:03.170425 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f71e3aa4-b720-4c55-8516-ff9c1d8062e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f71e3aa4-b720-4c55-8516-ff9c1d8062e1" (UID: "f71e3aa4-b720-4c55-8516-ff9c1d8062e1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 22:16:03 crc kubenswrapper[4789]: I0202 22:16:03.216152 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f71e3aa4-b720-4c55-8516-ff9c1d8062e1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 22:16:03 crc kubenswrapper[4789]: I0202 22:16:03.438001 4789 generic.go:334] "Generic (PLEG): container finished" podID="f71e3aa4-b720-4c55-8516-ff9c1d8062e1" containerID="aa6c94de685afec27b07fce71f61297c8fb08d9b017cb29b5842fb12d9340aa5" exitCode=0 Feb 02 22:16:03 crc kubenswrapper[4789]: I0202 22:16:03.438062 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bjpps" event={"ID":"f71e3aa4-b720-4c55-8516-ff9c1d8062e1","Type":"ContainerDied","Data":"aa6c94de685afec27b07fce71f61297c8fb08d9b017cb29b5842fb12d9340aa5"} Feb 02 22:16:03 crc kubenswrapper[4789]: I0202 22:16:03.438087 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bjpps"
Feb 02 22:16:03 crc kubenswrapper[4789]: I0202 22:16:03.438114 4789 scope.go:117] "RemoveContainer" containerID="aa6c94de685afec27b07fce71f61297c8fb08d9b017cb29b5842fb12d9340aa5"
Feb 02 22:16:03 crc kubenswrapper[4789]: I0202 22:16:03.438096 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bjpps" event={"ID":"f71e3aa4-b720-4c55-8516-ff9c1d8062e1","Type":"ContainerDied","Data":"3e6bdb3b72f85368da1f14a24a4fc996f4f813f7decf4747dfae050e0b78f8e2"}
Feb 02 22:16:03 crc kubenswrapper[4789]: I0202 22:16:03.469379 4789 scope.go:117] "RemoveContainer" containerID="be7deaec09317d3ebb64a65ff0f8822733a3d3816395b2ea80ed0d8b67edecba"
Feb 02 22:16:03 crc kubenswrapper[4789]: I0202 22:16:03.504920 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bjpps"]
Feb 02 22:16:03 crc kubenswrapper[4789]: I0202 22:16:03.518189 4789 scope.go:117] "RemoveContainer" containerID="7f533b73aee92e21daaeeb790946613406b8fb9996784b84190949447408f8d5"
Feb 02 22:16:03 crc kubenswrapper[4789]: I0202 22:16:03.520915 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bjpps"]
Feb 02 22:16:03 crc kubenswrapper[4789]: I0202 22:16:03.544806 4789 scope.go:117] "RemoveContainer" containerID="aa6c94de685afec27b07fce71f61297c8fb08d9b017cb29b5842fb12d9340aa5"
Feb 02 22:16:03 crc kubenswrapper[4789]: E0202 22:16:03.545319 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa6c94de685afec27b07fce71f61297c8fb08d9b017cb29b5842fb12d9340aa5\": container with ID starting with aa6c94de685afec27b07fce71f61297c8fb08d9b017cb29b5842fb12d9340aa5 not found: ID does not exist" containerID="aa6c94de685afec27b07fce71f61297c8fb08d9b017cb29b5842fb12d9340aa5"
Feb 02 22:16:03 crc kubenswrapper[4789]: I0202 22:16:03.545385 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa6c94de685afec27b07fce71f61297c8fb08d9b017cb29b5842fb12d9340aa5"} err="failed to get container status \"aa6c94de685afec27b07fce71f61297c8fb08d9b017cb29b5842fb12d9340aa5\": rpc error: code = NotFound desc = could not find container \"aa6c94de685afec27b07fce71f61297c8fb08d9b017cb29b5842fb12d9340aa5\": container with ID starting with aa6c94de685afec27b07fce71f61297c8fb08d9b017cb29b5842fb12d9340aa5 not found: ID does not exist"
Feb 02 22:16:03 crc kubenswrapper[4789]: I0202 22:16:03.545430 4789 scope.go:117] "RemoveContainer" containerID="be7deaec09317d3ebb64a65ff0f8822733a3d3816395b2ea80ed0d8b67edecba"
Feb 02 22:16:03 crc kubenswrapper[4789]: E0202 22:16:03.546037 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be7deaec09317d3ebb64a65ff0f8822733a3d3816395b2ea80ed0d8b67edecba\": container with ID starting with be7deaec09317d3ebb64a65ff0f8822733a3d3816395b2ea80ed0d8b67edecba not found: ID does not exist" containerID="be7deaec09317d3ebb64a65ff0f8822733a3d3816395b2ea80ed0d8b67edecba"
Feb 02 22:16:03 crc kubenswrapper[4789]: I0202 22:16:03.546084 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be7deaec09317d3ebb64a65ff0f8822733a3d3816395b2ea80ed0d8b67edecba"} err="failed to get container status \"be7deaec09317d3ebb64a65ff0f8822733a3d3816395b2ea80ed0d8b67edecba\": rpc error: code = NotFound desc = could not find container \"be7deaec09317d3ebb64a65ff0f8822733a3d3816395b2ea80ed0d8b67edecba\": container with ID starting with be7deaec09317d3ebb64a65ff0f8822733a3d3816395b2ea80ed0d8b67edecba not found: ID does not exist"
Feb 02 22:16:03 crc kubenswrapper[4789]: I0202 22:16:03.546118 4789 scope.go:117] "RemoveContainer" containerID="7f533b73aee92e21daaeeb790946613406b8fb9996784b84190949447408f8d5"
Feb 02 22:16:03 crc kubenswrapper[4789]: E0202 22:16:03.546680 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f533b73aee92e21daaeeb790946613406b8fb9996784b84190949447408f8d5\": container with ID starting with 7f533b73aee92e21daaeeb790946613406b8fb9996784b84190949447408f8d5 not found: ID does not exist" containerID="7f533b73aee92e21daaeeb790946613406b8fb9996784b84190949447408f8d5"
Feb 02 22:16:03 crc kubenswrapper[4789]: I0202 22:16:03.546713 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f533b73aee92e21daaeeb790946613406b8fb9996784b84190949447408f8d5"} err="failed to get container status \"7f533b73aee92e21daaeeb790946613406b8fb9996784b84190949447408f8d5\": rpc error: code = NotFound desc = could not find container \"7f533b73aee92e21daaeeb790946613406b8fb9996784b84190949447408f8d5\": container with ID starting with 7f533b73aee92e21daaeeb790946613406b8fb9996784b84190949447408f8d5 not found: ID does not exist"
Feb 02 22:16:04 crc kubenswrapper[4789]: I0202 22:16:04.420123 4789 scope.go:117] "RemoveContainer" containerID="f751ab6e20416c55ada057bcc369a80021ff56d1c6434e7eaf32a7fd2e325226"
Feb 02 22:16:04 crc kubenswrapper[4789]: E0202 22:16:04.421104 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e"
Feb 02 22:16:04 crc kubenswrapper[4789]: I0202 22:16:04.432885 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f71e3aa4-b720-4c55-8516-ff9c1d8062e1" path="/var/lib/kubelet/pods/f71e3aa4-b720-4c55-8516-ff9c1d8062e1/volumes"
Feb 02 22:16:16 crc kubenswrapper[4789]: I0202 22:16:16.419568 4789 scope.go:117] "RemoveContainer" containerID="f751ab6e20416c55ada057bcc369a80021ff56d1c6434e7eaf32a7fd2e325226"
Feb 02 22:16:16 crc kubenswrapper[4789]: E0202 22:16:16.420973 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e"
Feb 02 22:16:27 crc kubenswrapper[4789]: I0202 22:16:27.419139 4789 scope.go:117] "RemoveContainer" containerID="f751ab6e20416c55ada057bcc369a80021ff56d1c6434e7eaf32a7fd2e325226"
Feb 02 22:16:27 crc kubenswrapper[4789]: E0202 22:16:27.420101 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e"
Feb 02 22:16:38 crc kubenswrapper[4789]: I0202 22:16:38.419882 4789 scope.go:117] "RemoveContainer" containerID="f751ab6e20416c55ada057bcc369a80021ff56d1c6434e7eaf32a7fd2e325226"
Feb 02 22:16:38 crc kubenswrapper[4789]: E0202 22:16:38.420634 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e"
Feb 02 22:16:53 crc kubenswrapper[4789]: I0202 22:16:53.421038 4789 scope.go:117] "RemoveContainer" containerID="f751ab6e20416c55ada057bcc369a80021ff56d1c6434e7eaf32a7fd2e325226"
Feb 02 22:16:53 crc kubenswrapper[4789]: I0202 22:16:53.901765 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" event={"ID":"bdf018b4-1451-4d37-be6e-05802b67c73e","Type":"ContainerStarted","Data":"96c3640b08a2d30b80dcfd7b1113d61c8f3861a4e1e91de5042c11e5f8470460"}
Feb 02 22:18:25 crc kubenswrapper[4789]: I0202 22:18:25.979362 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t6mzh"]
Feb 02 22:18:25 crc kubenswrapper[4789]: E0202 22:18:25.980362 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd5eda0b-bcf3-4550-9336-f903bd386db9" containerName="extract-utilities"
Feb 02 22:18:25 crc kubenswrapper[4789]: I0202 22:18:25.980384 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd5eda0b-bcf3-4550-9336-f903bd386db9" containerName="extract-utilities"
Feb 02 22:18:25 crc kubenswrapper[4789]: E0202 22:18:25.980419 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f71e3aa4-b720-4c55-8516-ff9c1d8062e1" containerName="registry-server"
Feb 02 22:18:25 crc kubenswrapper[4789]: I0202 22:18:25.980432 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="f71e3aa4-b720-4c55-8516-ff9c1d8062e1" containerName="registry-server"
Feb 02 22:18:25 crc kubenswrapper[4789]: E0202 22:18:25.980457 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd5eda0b-bcf3-4550-9336-f903bd386db9" containerName="extract-content"
Feb 02 22:18:25 crc kubenswrapper[4789]: I0202 22:18:25.980471 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd5eda0b-bcf3-4550-9336-f903bd386db9" containerName="extract-content"
Feb 02 22:18:25 crc kubenswrapper[4789]: E0202 22:18:25.980496 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd5eda0b-bcf3-4550-9336-f903bd386db9" containerName="registry-server"
Feb 02 22:18:25 crc kubenswrapper[4789]: I0202 22:18:25.980508 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd5eda0b-bcf3-4550-9336-f903bd386db9" containerName="registry-server"
Feb 02 22:18:25 crc kubenswrapper[4789]: E0202 22:18:25.980532 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f71e3aa4-b720-4c55-8516-ff9c1d8062e1" containerName="extract-content"
Feb 02 22:18:25 crc kubenswrapper[4789]: I0202 22:18:25.980544 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="f71e3aa4-b720-4c55-8516-ff9c1d8062e1" containerName="extract-content"
Feb 02 22:18:25 crc kubenswrapper[4789]: E0202 22:18:25.980565 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f71e3aa4-b720-4c55-8516-ff9c1d8062e1" containerName="extract-utilities"
Feb 02 22:18:25 crc kubenswrapper[4789]: I0202 22:18:25.980576 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="f71e3aa4-b720-4c55-8516-ff9c1d8062e1" containerName="extract-utilities"
Feb 02 22:18:25 crc kubenswrapper[4789]: I0202 22:18:25.980837 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd5eda0b-bcf3-4550-9336-f903bd386db9" containerName="registry-server"
Feb 02 22:18:25 crc kubenswrapper[4789]: I0202 22:18:25.980869 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="f71e3aa4-b720-4c55-8516-ff9c1d8062e1" containerName="registry-server"
Feb 02 22:18:25 crc kubenswrapper[4789]: I0202 22:18:25.982941 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t6mzh"
Feb 02 22:18:25 crc kubenswrapper[4789]: I0202 22:18:25.992697 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e9c5f6a-c458-4e2f-9950-70f91788ae1e-utilities\") pod \"certified-operators-t6mzh\" (UID: \"6e9c5f6a-c458-4e2f-9950-70f91788ae1e\") " pod="openshift-marketplace/certified-operators-t6mzh"
Feb 02 22:18:25 crc kubenswrapper[4789]: I0202 22:18:25.993068 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e9c5f6a-c458-4e2f-9950-70f91788ae1e-catalog-content\") pod \"certified-operators-t6mzh\" (UID: \"6e9c5f6a-c458-4e2f-9950-70f91788ae1e\") " pod="openshift-marketplace/certified-operators-t6mzh"
Feb 02 22:18:25 crc kubenswrapper[4789]: I0202 22:18:25.993291 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsbsw\" (UniqueName: \"kubernetes.io/projected/6e9c5f6a-c458-4e2f-9950-70f91788ae1e-kube-api-access-lsbsw\") pod \"certified-operators-t6mzh\" (UID: \"6e9c5f6a-c458-4e2f-9950-70f91788ae1e\") " pod="openshift-marketplace/certified-operators-t6mzh"
Feb 02 22:18:26 crc kubenswrapper[4789]: I0202 22:18:26.009373 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t6mzh"]
Feb 02 22:18:26 crc kubenswrapper[4789]: I0202 22:18:26.095151 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e9c5f6a-c458-4e2f-9950-70f91788ae1e-utilities\") pod \"certified-operators-t6mzh\" (UID: \"6e9c5f6a-c458-4e2f-9950-70f91788ae1e\") " pod="openshift-marketplace/certified-operators-t6mzh"
Feb 02 22:18:26 crc kubenswrapper[4789]: I0202 22:18:26.095319 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e9c5f6a-c458-4e2f-9950-70f91788ae1e-catalog-content\") pod \"certified-operators-t6mzh\" (UID: \"6e9c5f6a-c458-4e2f-9950-70f91788ae1e\") " pod="openshift-marketplace/certified-operators-t6mzh"
Feb 02 22:18:26 crc kubenswrapper[4789]: I0202 22:18:26.095433 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsbsw\" (UniqueName: \"kubernetes.io/projected/6e9c5f6a-c458-4e2f-9950-70f91788ae1e-kube-api-access-lsbsw\") pod \"certified-operators-t6mzh\" (UID: \"6e9c5f6a-c458-4e2f-9950-70f91788ae1e\") " pod="openshift-marketplace/certified-operators-t6mzh"
Feb 02 22:18:26 crc kubenswrapper[4789]: I0202 22:18:26.095755 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e9c5f6a-c458-4e2f-9950-70f91788ae1e-utilities\") pod \"certified-operators-t6mzh\" (UID: \"6e9c5f6a-c458-4e2f-9950-70f91788ae1e\") " pod="openshift-marketplace/certified-operators-t6mzh"
Feb 02 22:18:26 crc kubenswrapper[4789]: I0202 22:18:26.095962 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e9c5f6a-c458-4e2f-9950-70f91788ae1e-catalog-content\") pod \"certified-operators-t6mzh\" (UID: \"6e9c5f6a-c458-4e2f-9950-70f91788ae1e\") " pod="openshift-marketplace/certified-operators-t6mzh"
Feb 02 22:18:26 crc kubenswrapper[4789]: I0202 22:18:26.127003 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsbsw\" (UniqueName: \"kubernetes.io/projected/6e9c5f6a-c458-4e2f-9950-70f91788ae1e-kube-api-access-lsbsw\") pod \"certified-operators-t6mzh\" (UID: \"6e9c5f6a-c458-4e2f-9950-70f91788ae1e\") " pod="openshift-marketplace/certified-operators-t6mzh"
Feb 02 22:18:26 crc kubenswrapper[4789]: I0202 22:18:26.317134 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t6mzh"
Feb 02 22:18:26 crc kubenswrapper[4789]: I0202 22:18:26.828632 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t6mzh"]
Feb 02 22:18:27 crc kubenswrapper[4789]: I0202 22:18:27.797799 4789 generic.go:334] "Generic (PLEG): container finished" podID="6e9c5f6a-c458-4e2f-9950-70f91788ae1e" containerID="f4b22071d807355e3b1664b352fbcae56d3043f40f348eaf59caf9feed76db52" exitCode=0
Feb 02 22:18:27 crc kubenswrapper[4789]: I0202 22:18:27.797887 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6mzh" event={"ID":"6e9c5f6a-c458-4e2f-9950-70f91788ae1e","Type":"ContainerDied","Data":"f4b22071d807355e3b1664b352fbcae56d3043f40f348eaf59caf9feed76db52"}
Feb 02 22:18:27 crc kubenswrapper[4789]: I0202 22:18:27.798111 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6mzh" event={"ID":"6e9c5f6a-c458-4e2f-9950-70f91788ae1e","Type":"ContainerStarted","Data":"db4db2256e653f3bff73c3fc49f0438e5132c1a96e58da71f39231e517d8e894"}
Feb 02 22:18:28 crc kubenswrapper[4789]: I0202 22:18:28.807244 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6mzh" event={"ID":"6e9c5f6a-c458-4e2f-9950-70f91788ae1e","Type":"ContainerStarted","Data":"c465efb1924fd770f084a00725f8d1c5983fe1a3eb04e653fcf9c753e73273e4"}
Feb 02 22:18:29 crc kubenswrapper[4789]: I0202 22:18:29.821231 4789 generic.go:334] "Generic (PLEG): container finished" podID="6e9c5f6a-c458-4e2f-9950-70f91788ae1e" containerID="c465efb1924fd770f084a00725f8d1c5983fe1a3eb04e653fcf9c753e73273e4" exitCode=0
Feb 02 22:18:29 crc kubenswrapper[4789]: I0202 22:18:29.821316 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6mzh" event={"ID":"6e9c5f6a-c458-4e2f-9950-70f91788ae1e","Type":"ContainerDied","Data":"c465efb1924fd770f084a00725f8d1c5983fe1a3eb04e653fcf9c753e73273e4"}
Feb 02 22:18:30 crc kubenswrapper[4789]: I0202 22:18:30.835745 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6mzh" event={"ID":"6e9c5f6a-c458-4e2f-9950-70f91788ae1e","Type":"ContainerStarted","Data":"3e02ba66b734efdf76fd553cc704cce18621abc6b20a57abaff702bf7cf70017"}
Feb 02 22:18:30 crc kubenswrapper[4789]: I0202 22:18:30.869557 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t6mzh" podStartSLOduration=3.386796306 podStartE2EDuration="5.869536234s" podCreationTimestamp="2026-02-02 22:18:25 +0000 UTC" firstStartedPulling="2026-02-02 22:18:27.799717679 +0000 UTC m=+3528.094742738" lastFinishedPulling="2026-02-02 22:18:30.282457617 +0000 UTC m=+3530.577482666" observedRunningTime="2026-02-02 22:18:30.860057546 +0000 UTC m=+3531.155082575" watchObservedRunningTime="2026-02-02 22:18:30.869536234 +0000 UTC m=+3531.164561263"
Feb 02 22:18:36 crc kubenswrapper[4789]: I0202 22:18:36.317607 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t6mzh"
Feb 02 22:18:36 crc kubenswrapper[4789]: I0202 22:18:36.317966 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t6mzh"
Feb 02 22:18:36 crc kubenswrapper[4789]: I0202 22:18:36.428807 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t6mzh"
Feb 02 22:18:36 crc kubenswrapper[4789]: I0202 22:18:36.962975 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t6mzh"
Feb 02 22:18:37 crc kubenswrapper[4789]: I0202 22:18:37.030345 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t6mzh"]
Feb 02 22:18:38 crc kubenswrapper[4789]: I0202 22:18:38.907257 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t6mzh" podUID="6e9c5f6a-c458-4e2f-9950-70f91788ae1e" containerName="registry-server" containerID="cri-o://3e02ba66b734efdf76fd553cc704cce18621abc6b20a57abaff702bf7cf70017" gracePeriod=2
Feb 02 22:18:39 crc kubenswrapper[4789]: I0202 22:18:39.407140 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t6mzh"
Feb 02 22:18:39 crc kubenswrapper[4789]: I0202 22:18:39.514970 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e9c5f6a-c458-4e2f-9950-70f91788ae1e-utilities\") pod \"6e9c5f6a-c458-4e2f-9950-70f91788ae1e\" (UID: \"6e9c5f6a-c458-4e2f-9950-70f91788ae1e\") "
Feb 02 22:18:39 crc kubenswrapper[4789]: I0202 22:18:39.515973 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e9c5f6a-c458-4e2f-9950-70f91788ae1e-catalog-content\") pod \"6e9c5f6a-c458-4e2f-9950-70f91788ae1e\" (UID: \"6e9c5f6a-c458-4e2f-9950-70f91788ae1e\") "
Feb 02 22:18:39 crc kubenswrapper[4789]: I0202 22:18:39.516206 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsbsw\" (UniqueName: \"kubernetes.io/projected/6e9c5f6a-c458-4e2f-9950-70f91788ae1e-kube-api-access-lsbsw\") pod \"6e9c5f6a-c458-4e2f-9950-70f91788ae1e\" (UID: \"6e9c5f6a-c458-4e2f-9950-70f91788ae1e\") "
Feb 02 22:18:39 crc kubenswrapper[4789]: I0202 22:18:39.516233 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e9c5f6a-c458-4e2f-9950-70f91788ae1e-utilities" (OuterVolumeSpecName: "utilities") pod "6e9c5f6a-c458-4e2f-9950-70f91788ae1e" (UID: "6e9c5f6a-c458-4e2f-9950-70f91788ae1e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 22:18:39 crc kubenswrapper[4789]: I0202 22:18:39.516930 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e9c5f6a-c458-4e2f-9950-70f91788ae1e-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 22:18:39 crc kubenswrapper[4789]: I0202 22:18:39.525891 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e9c5f6a-c458-4e2f-9950-70f91788ae1e-kube-api-access-lsbsw" (OuterVolumeSpecName: "kube-api-access-lsbsw") pod "6e9c5f6a-c458-4e2f-9950-70f91788ae1e" (UID: "6e9c5f6a-c458-4e2f-9950-70f91788ae1e"). InnerVolumeSpecName "kube-api-access-lsbsw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 22:18:39 crc kubenswrapper[4789]: I0202 22:18:39.596079 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e9c5f6a-c458-4e2f-9950-70f91788ae1e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6e9c5f6a-c458-4e2f-9950-70f91788ae1e" (UID: "6e9c5f6a-c458-4e2f-9950-70f91788ae1e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 22:18:39 crc kubenswrapper[4789]: I0202 22:18:39.618013 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsbsw\" (UniqueName: \"kubernetes.io/projected/6e9c5f6a-c458-4e2f-9950-70f91788ae1e-kube-api-access-lsbsw\") on node \"crc\" DevicePath \"\""
Feb 02 22:18:39 crc kubenswrapper[4789]: I0202 22:18:39.618052 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e9c5f6a-c458-4e2f-9950-70f91788ae1e-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 22:18:39 crc kubenswrapper[4789]: I0202 22:18:39.922654 4789 generic.go:334] "Generic (PLEG): container finished" podID="6e9c5f6a-c458-4e2f-9950-70f91788ae1e" containerID="3e02ba66b734efdf76fd553cc704cce18621abc6b20a57abaff702bf7cf70017" exitCode=0
Feb 02 22:18:39 crc kubenswrapper[4789]: I0202 22:18:39.922721 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6mzh" event={"ID":"6e9c5f6a-c458-4e2f-9950-70f91788ae1e","Type":"ContainerDied","Data":"3e02ba66b734efdf76fd553cc704cce18621abc6b20a57abaff702bf7cf70017"}
Feb 02 22:18:39 crc kubenswrapper[4789]: I0202 22:18:39.922746 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t6mzh"
Feb 02 22:18:39 crc kubenswrapper[4789]: I0202 22:18:39.922766 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6mzh" event={"ID":"6e9c5f6a-c458-4e2f-9950-70f91788ae1e","Type":"ContainerDied","Data":"db4db2256e653f3bff73c3fc49f0438e5132c1a96e58da71f39231e517d8e894"}
Feb 02 22:18:39 crc kubenswrapper[4789]: I0202 22:18:39.922798 4789 scope.go:117] "RemoveContainer" containerID="3e02ba66b734efdf76fd553cc704cce18621abc6b20a57abaff702bf7cf70017"
Feb 02 22:18:39 crc kubenswrapper[4789]: I0202 22:18:39.955838 4789 scope.go:117] "RemoveContainer" containerID="c465efb1924fd770f084a00725f8d1c5983fe1a3eb04e653fcf9c753e73273e4"
Feb 02 22:18:39 crc kubenswrapper[4789]: I0202 22:18:39.984833 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t6mzh"]
Feb 02 22:18:39 crc kubenswrapper[4789]: I0202 22:18:39.994364 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t6mzh"]
Feb 02 22:18:39 crc kubenswrapper[4789]: I0202 22:18:39.997860 4789 scope.go:117] "RemoveContainer" containerID="f4b22071d807355e3b1664b352fbcae56d3043f40f348eaf59caf9feed76db52"
Feb 02 22:18:40 crc kubenswrapper[4789]: I0202 22:18:40.029254 4789 scope.go:117] "RemoveContainer" containerID="3e02ba66b734efdf76fd553cc704cce18621abc6b20a57abaff702bf7cf70017"
Feb 02 22:18:40 crc kubenswrapper[4789]: E0202 22:18:40.030053 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e02ba66b734efdf76fd553cc704cce18621abc6b20a57abaff702bf7cf70017\": container with ID starting with 3e02ba66b734efdf76fd553cc704cce18621abc6b20a57abaff702bf7cf70017 not found: ID does not exist" containerID="3e02ba66b734efdf76fd553cc704cce18621abc6b20a57abaff702bf7cf70017"
Feb 02 22:18:40 crc kubenswrapper[4789]: I0202 22:18:40.030289 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e02ba66b734efdf76fd553cc704cce18621abc6b20a57abaff702bf7cf70017"} err="failed to get container status \"3e02ba66b734efdf76fd553cc704cce18621abc6b20a57abaff702bf7cf70017\": rpc error: code = NotFound desc = could not find container \"3e02ba66b734efdf76fd553cc704cce18621abc6b20a57abaff702bf7cf70017\": container with ID starting with 3e02ba66b734efdf76fd553cc704cce18621abc6b20a57abaff702bf7cf70017 not found: ID does not exist"
Feb 02 22:18:40 crc kubenswrapper[4789]: I0202 22:18:40.030472 4789 scope.go:117] "RemoveContainer" containerID="c465efb1924fd770f084a00725f8d1c5983fe1a3eb04e653fcf9c753e73273e4"
Feb 02 22:18:40 crc kubenswrapper[4789]: E0202 22:18:40.031263 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c465efb1924fd770f084a00725f8d1c5983fe1a3eb04e653fcf9c753e73273e4\": container with ID starting with c465efb1924fd770f084a00725f8d1c5983fe1a3eb04e653fcf9c753e73273e4 not found: ID does not exist" containerID="c465efb1924fd770f084a00725f8d1c5983fe1a3eb04e653fcf9c753e73273e4"
Feb 02 22:18:40 crc kubenswrapper[4789]: I0202 22:18:40.031328 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c465efb1924fd770f084a00725f8d1c5983fe1a3eb04e653fcf9c753e73273e4"} err="failed to get container status \"c465efb1924fd770f084a00725f8d1c5983fe1a3eb04e653fcf9c753e73273e4\": rpc error: code = NotFound desc = could not find container \"c465efb1924fd770f084a00725f8d1c5983fe1a3eb04e653fcf9c753e73273e4\": container with ID starting with c465efb1924fd770f084a00725f8d1c5983fe1a3eb04e653fcf9c753e73273e4 not found: ID does not exist"
Feb 02 22:18:40 crc kubenswrapper[4789]: I0202 22:18:40.031359 4789 scope.go:117] "RemoveContainer" containerID="f4b22071d807355e3b1664b352fbcae56d3043f40f348eaf59caf9feed76db52"
Feb 02 22:18:40 crc kubenswrapper[4789]: E0202 22:18:40.032003 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4b22071d807355e3b1664b352fbcae56d3043f40f348eaf59caf9feed76db52\": container with ID starting with f4b22071d807355e3b1664b352fbcae56d3043f40f348eaf59caf9feed76db52 not found: ID does not exist" containerID="f4b22071d807355e3b1664b352fbcae56d3043f40f348eaf59caf9feed76db52"
Feb 02 22:18:40 crc kubenswrapper[4789]: I0202 22:18:40.032104 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4b22071d807355e3b1664b352fbcae56d3043f40f348eaf59caf9feed76db52"} err="failed to get container status \"f4b22071d807355e3b1664b352fbcae56d3043f40f348eaf59caf9feed76db52\": rpc error: code = NotFound desc = could not find container \"f4b22071d807355e3b1664b352fbcae56d3043f40f348eaf59caf9feed76db52\": container with ID starting with f4b22071d807355e3b1664b352fbcae56d3043f40f348eaf59caf9feed76db52 not found: ID does not exist"
Feb 02 22:18:40 crc kubenswrapper[4789]: I0202 22:18:40.437459 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e9c5f6a-c458-4e2f-9950-70f91788ae1e" path="/var/lib/kubelet/pods/6e9c5f6a-c458-4e2f-9950-70f91788ae1e/volumes"
Feb 02 22:19:22 crc kubenswrapper[4789]: I0202 22:19:22.841494 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 22:19:22 crc kubenswrapper[4789]: I0202 22:19:22.842324 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 22:19:52 crc kubenswrapper[4789]: I0202 22:19:52.841348 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 22:19:52 crc kubenswrapper[4789]: I0202 22:19:52.841752 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 22:20:22 crc kubenswrapper[4789]: I0202 22:20:22.841339 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 22:20:22 crc kubenswrapper[4789]: I0202 22:20:22.841957 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 22:20:22 crc kubenswrapper[4789]: I0202 22:20:22.842002 4789 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn"
Feb 02 22:20:22 crc kubenswrapper[4789]: I0202 22:20:22.842708 4789 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"96c3640b08a2d30b80dcfd7b1113d61c8f3861a4e1e91de5042c11e5f8470460"} pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 22:20:22 crc kubenswrapper[4789]: I0202 22:20:22.842782 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" containerID="cri-o://96c3640b08a2d30b80dcfd7b1113d61c8f3861a4e1e91de5042c11e5f8470460" gracePeriod=600
Feb 02 22:20:23 crc kubenswrapper[4789]: I0202 22:20:23.850383 4789 generic.go:334] "Generic (PLEG): container finished" podID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerID="96c3640b08a2d30b80dcfd7b1113d61c8f3861a4e1e91de5042c11e5f8470460" exitCode=0
Feb 02 22:20:23 crc kubenswrapper[4789]: I0202 22:20:23.850469 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" event={"ID":"bdf018b4-1451-4d37-be6e-05802b67c73e","Type":"ContainerDied","Data":"96c3640b08a2d30b80dcfd7b1113d61c8f3861a4e1e91de5042c11e5f8470460"}
Feb 02 22:20:23 crc kubenswrapper[4789]: I0202 22:20:23.850830 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" event={"ID":"bdf018b4-1451-4d37-be6e-05802b67c73e","Type":"ContainerStarted","Data":"1ebe55b80a46ca02419a86f340c3d531948d8c1a4ab746ab97382212887e9c19"}
Feb 02 22:20:23 crc kubenswrapper[4789]: I0202 22:20:23.850864 4789 scope.go:117] "RemoveContainer" containerID="f751ab6e20416c55ada057bcc369a80021ff56d1c6434e7eaf32a7fd2e325226"
Feb 02 22:21:27 crc kubenswrapper[4789]: I0202 22:21:27.908315 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lzxz2"]
Feb 02 22:21:27 crc kubenswrapper[4789]: E0202 22:21:27.911358 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e9c5f6a-c458-4e2f-9950-70f91788ae1e" containerName="extract-content"
Feb 02 22:21:27 crc kubenswrapper[4789]: I0202 22:21:27.911532 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e9c5f6a-c458-4e2f-9950-70f91788ae1e" containerName="extract-content"
Feb 02 22:21:27 crc kubenswrapper[4789]: E0202 22:21:27.911742 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e9c5f6a-c458-4e2f-9950-70f91788ae1e" containerName="extract-utilities"
Feb 02 22:21:27 crc kubenswrapper[4789]: I0202 22:21:27.911880 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e9c5f6a-c458-4e2f-9950-70f91788ae1e" containerName="extract-utilities"
Feb 02 22:21:27 crc kubenswrapper[4789]: E0202 22:21:27.912054 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e9c5f6a-c458-4e2f-9950-70f91788ae1e" containerName="registry-server"
Feb 02 22:21:27 crc kubenswrapper[4789]: I0202 22:21:27.912229 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e9c5f6a-c458-4e2f-9950-70f91788ae1e" containerName="registry-server"
Feb 02 22:21:27 crc kubenswrapper[4789]: I0202 22:21:27.912708 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e9c5f6a-c458-4e2f-9950-70f91788ae1e" containerName="registry-server"
Feb 02 22:21:27 crc kubenswrapper[4789]: I0202 22:21:27.914456 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lzxz2"
Feb 02 22:21:27 crc kubenswrapper[4789]: I0202 22:21:27.927598 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lzxz2"]
Feb 02 22:21:28 crc kubenswrapper[4789]: I0202 22:21:28.014857 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n9b9\" (UniqueName: \"kubernetes.io/projected/bf5e09b2-7688-43b0-b09a-29055f2e39a2-kube-api-access-8n9b9\") pod \"community-operators-lzxz2\" (UID: \"bf5e09b2-7688-43b0-b09a-29055f2e39a2\") " pod="openshift-marketplace/community-operators-lzxz2"
Feb 02 22:21:28 crc kubenswrapper[4789]: I0202 22:21:28.014901 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf5e09b2-7688-43b0-b09a-29055f2e39a2-utilities\") pod \"community-operators-lzxz2\" (UID: \"bf5e09b2-7688-43b0-b09a-29055f2e39a2\") " pod="openshift-marketplace/community-operators-lzxz2"
Feb 02 22:21:28 crc kubenswrapper[4789]: I0202 22:21:28.014951 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf5e09b2-7688-43b0-b09a-29055f2e39a2-catalog-content\") pod \"community-operators-lzxz2\" (UID: \"bf5e09b2-7688-43b0-b09a-29055f2e39a2\") " pod="openshift-marketplace/community-operators-lzxz2"
Feb 02 22:21:28 crc kubenswrapper[4789]: I0202 22:21:28.116510 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n9b9\" (UniqueName: \"kubernetes.io/projected/bf5e09b2-7688-43b0-b09a-29055f2e39a2-kube-api-access-8n9b9\") pod \"community-operators-lzxz2\" (UID: \"bf5e09b2-7688-43b0-b09a-29055f2e39a2\") " pod="openshift-marketplace/community-operators-lzxz2"
Feb 02 22:21:28 crc kubenswrapper[4789]: I0202 22:21:28.116796 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf5e09b2-7688-43b0-b09a-29055f2e39a2-utilities\") pod \"community-operators-lzxz2\" (UID: \"bf5e09b2-7688-43b0-b09a-29055f2e39a2\") " pod="openshift-marketplace/community-operators-lzxz2"
Feb 02 22:21:28 crc kubenswrapper[4789]: I0202 22:21:28.116933 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf5e09b2-7688-43b0-b09a-29055f2e39a2-catalog-content\") pod \"community-operators-lzxz2\" (UID: \"bf5e09b2-7688-43b0-b09a-29055f2e39a2\") " pod="openshift-marketplace/community-operators-lzxz2"
Feb 02 22:21:28 crc kubenswrapper[4789]: I0202 22:21:28.117400 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf5e09b2-7688-43b0-b09a-29055f2e39a2-utilities\") pod \"community-operators-lzxz2\" (UID: \"bf5e09b2-7688-43b0-b09a-29055f2e39a2\") " pod="openshift-marketplace/community-operators-lzxz2"
Feb 02 22:21:28 crc kubenswrapper[4789]: I0202 22:21:28.117437 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf5e09b2-7688-43b0-b09a-29055f2e39a2-catalog-content\") pod \"community-operators-lzxz2\" (UID: \"bf5e09b2-7688-43b0-b09a-29055f2e39a2\") " pod="openshift-marketplace/community-operators-lzxz2"
Feb 02 22:21:28 crc kubenswrapper[4789]: I0202 22:21:28.135266 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n9b9\" (UniqueName: \"kubernetes.io/projected/bf5e09b2-7688-43b0-b09a-29055f2e39a2-kube-api-access-8n9b9\") pod \"community-operators-lzxz2\" (UID: \"bf5e09b2-7688-43b0-b09a-29055f2e39a2\") " pod="openshift-marketplace/community-operators-lzxz2"
Feb 02 22:21:28 crc kubenswrapper[4789]: I0202 22:21:28.291693 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lzxz2"
Feb 02 22:21:28 crc kubenswrapper[4789]: I0202 22:21:28.784569 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lzxz2"]
Feb 02 22:21:28 crc kubenswrapper[4789]: W0202 22:21:28.791172 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf5e09b2_7688_43b0_b09a_29055f2e39a2.slice/crio-d2a37b03ba11c49a4ae6fa8f85c1262db94adc0c1f80a558aedf21ede3835587 WatchSource:0}: Error finding container d2a37b03ba11c49a4ae6fa8f85c1262db94adc0c1f80a558aedf21ede3835587: Status 404 returned error can't find the container with id d2a37b03ba11c49a4ae6fa8f85c1262db94adc0c1f80a558aedf21ede3835587
Feb 02 22:21:29 crc kubenswrapper[4789]: I0202 22:21:29.471440 4789 generic.go:334] "Generic (PLEG): container finished" podID="bf5e09b2-7688-43b0-b09a-29055f2e39a2" containerID="a8ca67c7c63bf894e64cd0f5bba30d44b9121f81c4232d8ded5a7d96793b445b" exitCode=0
Feb 02 22:21:29 crc kubenswrapper[4789]: I0202 22:21:29.471534 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lzxz2" event={"ID":"bf5e09b2-7688-43b0-b09a-29055f2e39a2","Type":"ContainerDied","Data":"a8ca67c7c63bf894e64cd0f5bba30d44b9121f81c4232d8ded5a7d96793b445b"}
Feb 02 22:21:29 crc kubenswrapper[4789]: I0202 22:21:29.471949 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lzxz2" event={"ID":"bf5e09b2-7688-43b0-b09a-29055f2e39a2","Type":"ContainerStarted","Data":"d2a37b03ba11c49a4ae6fa8f85c1262db94adc0c1f80a558aedf21ede3835587"}
Feb 02 22:21:29 crc kubenswrapper[4789]: I0202 22:21:29.473679 4789 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 02 22:21:30 crc kubenswrapper[4789]: I0202 22:21:30.485633 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lzxz2" event={"ID":"bf5e09b2-7688-43b0-b09a-29055f2e39a2","Type":"ContainerStarted","Data":"775cb5a560e63f1264476339a5fd7439557b97aff169794b43556b8935ac2971"}
Feb 02 22:21:31 crc kubenswrapper[4789]: I0202 22:21:31.497982 4789 generic.go:334] "Generic (PLEG): container finished" podID="bf5e09b2-7688-43b0-b09a-29055f2e39a2" containerID="775cb5a560e63f1264476339a5fd7439557b97aff169794b43556b8935ac2971" exitCode=0
Feb 02 22:21:31 crc kubenswrapper[4789]: I0202 22:21:31.498054 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lzxz2" event={"ID":"bf5e09b2-7688-43b0-b09a-29055f2e39a2","Type":"ContainerDied","Data":"775cb5a560e63f1264476339a5fd7439557b97aff169794b43556b8935ac2971"}
Feb 02 22:21:32 crc kubenswrapper[4789]: I0202 22:21:32.511191 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lzxz2" event={"ID":"bf5e09b2-7688-43b0-b09a-29055f2e39a2","Type":"ContainerStarted","Data":"572115d9df1b3945790ebbfe9beb1c40aa32a1dd2fed922acfb1586045bf95db"}
Feb 02 22:21:32 crc kubenswrapper[4789]: I0202 22:21:32.543336 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lzxz2" podStartSLOduration=3.128810709 podStartE2EDuration="5.543315604s" podCreationTimestamp="2026-02-02 22:21:27 +0000 UTC" firstStartedPulling="2026-02-02 22:21:29.473457345 +0000 UTC m=+3709.768482364" lastFinishedPulling="2026-02-02 22:21:31.8879622 +0000 UTC m=+3712.182987259" observedRunningTime="2026-02-02 22:21:32.543064747 +0000 UTC m=+3712.838089816" watchObservedRunningTime="2026-02-02 22:21:32.543315604 +0000 UTC m=+3712.838340633"
Feb 02 22:21:38 crc kubenswrapper[4789]: I0202 22:21:38.292543 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lzxz2"
Feb 02 22:21:38 crc kubenswrapper[4789]: I0202 22:21:38.293216 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lzxz2"
Feb 02 22:21:38 crc kubenswrapper[4789]: I0202 22:21:38.367277 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lzxz2"
Feb 02 22:21:38 crc kubenswrapper[4789]: I0202 22:21:38.642927 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lzxz2"
Feb 02 22:21:38 crc kubenswrapper[4789]: I0202 22:21:38.710152 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lzxz2"]
Feb 02 22:21:40 crc kubenswrapper[4789]: I0202 22:21:40.588094 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lzxz2" podUID="bf5e09b2-7688-43b0-b09a-29055f2e39a2" containerName="registry-server" containerID="cri-o://572115d9df1b3945790ebbfe9beb1c40aa32a1dd2fed922acfb1586045bf95db" gracePeriod=2
Feb 02 22:21:41 crc kubenswrapper[4789]: I0202 22:21:41.568334 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lzxz2"
Feb 02 22:21:41 crc kubenswrapper[4789]: I0202 22:21:41.615571 4789 generic.go:334] "Generic (PLEG): container finished" podID="bf5e09b2-7688-43b0-b09a-29055f2e39a2" containerID="572115d9df1b3945790ebbfe9beb1c40aa32a1dd2fed922acfb1586045bf95db" exitCode=0
Feb 02 22:21:41 crc kubenswrapper[4789]: I0202 22:21:41.615632 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lzxz2" event={"ID":"bf5e09b2-7688-43b0-b09a-29055f2e39a2","Type":"ContainerDied","Data":"572115d9df1b3945790ebbfe9beb1c40aa32a1dd2fed922acfb1586045bf95db"}
Feb 02 22:21:41 crc kubenswrapper[4789]: I0202 22:21:41.615660 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lzxz2" event={"ID":"bf5e09b2-7688-43b0-b09a-29055f2e39a2","Type":"ContainerDied","Data":"d2a37b03ba11c49a4ae6fa8f85c1262db94adc0c1f80a558aedf21ede3835587"}
Feb 02 22:21:41 crc kubenswrapper[4789]: I0202 22:21:41.615693 4789 scope.go:117] "RemoveContainer" containerID="572115d9df1b3945790ebbfe9beb1c40aa32a1dd2fed922acfb1586045bf95db"
Feb 02 22:21:41 crc kubenswrapper[4789]: I0202 22:21:41.615724 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lzxz2"
Feb 02 22:21:41 crc kubenswrapper[4789]: I0202 22:21:41.645752 4789 scope.go:117] "RemoveContainer" containerID="775cb5a560e63f1264476339a5fd7439557b97aff169794b43556b8935ac2971"
Feb 02 22:21:41 crc kubenswrapper[4789]: I0202 22:21:41.669258 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf5e09b2-7688-43b0-b09a-29055f2e39a2-utilities\") pod \"bf5e09b2-7688-43b0-b09a-29055f2e39a2\" (UID: \"bf5e09b2-7688-43b0-b09a-29055f2e39a2\") "
Feb 02 22:21:41 crc kubenswrapper[4789]: I0202 22:21:41.669311 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf5e09b2-7688-43b0-b09a-29055f2e39a2-catalog-content\") pod \"bf5e09b2-7688-43b0-b09a-29055f2e39a2\" (UID: \"bf5e09b2-7688-43b0-b09a-29055f2e39a2\") "
Feb 02 22:21:41 crc kubenswrapper[4789]: I0202 22:21:41.669354 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8n9b9\" (UniqueName: \"kubernetes.io/projected/bf5e09b2-7688-43b0-b09a-29055f2e39a2-kube-api-access-8n9b9\") pod \"bf5e09b2-7688-43b0-b09a-29055f2e39a2\" (UID: \"bf5e09b2-7688-43b0-b09a-29055f2e39a2\") "
Feb 02 22:21:41 crc kubenswrapper[4789]: I0202 22:21:41.670289 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf5e09b2-7688-43b0-b09a-29055f2e39a2-utilities" (OuterVolumeSpecName: "utilities") pod "bf5e09b2-7688-43b0-b09a-29055f2e39a2" (UID: "bf5e09b2-7688-43b0-b09a-29055f2e39a2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 22:21:41 crc kubenswrapper[4789]: I0202 22:21:41.675475 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf5e09b2-7688-43b0-b09a-29055f2e39a2-kube-api-access-8n9b9" (OuterVolumeSpecName: "kube-api-access-8n9b9") pod "bf5e09b2-7688-43b0-b09a-29055f2e39a2" (UID: "bf5e09b2-7688-43b0-b09a-29055f2e39a2"). InnerVolumeSpecName "kube-api-access-8n9b9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 22:21:41 crc kubenswrapper[4789]: I0202 22:21:41.675742 4789 scope.go:117] "RemoveContainer" containerID="a8ca67c7c63bf894e64cd0f5bba30d44b9121f81c4232d8ded5a7d96793b445b"
Feb 02 22:21:41 crc kubenswrapper[4789]: I0202 22:21:41.714186 4789 scope.go:117] "RemoveContainer" containerID="572115d9df1b3945790ebbfe9beb1c40aa32a1dd2fed922acfb1586045bf95db"
Feb 02 22:21:41 crc kubenswrapper[4789]: E0202 22:21:41.714707 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"572115d9df1b3945790ebbfe9beb1c40aa32a1dd2fed922acfb1586045bf95db\": container with ID starting with 572115d9df1b3945790ebbfe9beb1c40aa32a1dd2fed922acfb1586045bf95db not found: ID does not exist" containerID="572115d9df1b3945790ebbfe9beb1c40aa32a1dd2fed922acfb1586045bf95db"
Feb 02 22:21:41 crc kubenswrapper[4789]: I0202 22:21:41.714736 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"572115d9df1b3945790ebbfe9beb1c40aa32a1dd2fed922acfb1586045bf95db"} err="failed to get container status \"572115d9df1b3945790ebbfe9beb1c40aa32a1dd2fed922acfb1586045bf95db\": rpc error: code = NotFound desc = could not find container \"572115d9df1b3945790ebbfe9beb1c40aa32a1dd2fed922acfb1586045bf95db\": container with ID starting with 572115d9df1b3945790ebbfe9beb1c40aa32a1dd2fed922acfb1586045bf95db not found: ID does not exist"
Feb 02 22:21:41 crc kubenswrapper[4789]: I0202 22:21:41.714755 4789 scope.go:117] "RemoveContainer" containerID="775cb5a560e63f1264476339a5fd7439557b97aff169794b43556b8935ac2971"
Feb 02 22:21:41 crc kubenswrapper[4789]: E0202 22:21:41.715081 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"775cb5a560e63f1264476339a5fd7439557b97aff169794b43556b8935ac2971\": container with ID starting with 775cb5a560e63f1264476339a5fd7439557b97aff169794b43556b8935ac2971 not found: ID does not exist" containerID="775cb5a560e63f1264476339a5fd7439557b97aff169794b43556b8935ac2971"
Feb 02 22:21:41 crc kubenswrapper[4789]: I0202 22:21:41.715104 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"775cb5a560e63f1264476339a5fd7439557b97aff169794b43556b8935ac2971"} err="failed to get container status \"775cb5a560e63f1264476339a5fd7439557b97aff169794b43556b8935ac2971\": rpc error: code = NotFound desc = could not find container \"775cb5a560e63f1264476339a5fd7439557b97aff169794b43556b8935ac2971\": container with ID starting with 775cb5a560e63f1264476339a5fd7439557b97aff169794b43556b8935ac2971 not found: ID does not exist"
Feb 02 22:21:41 crc kubenswrapper[4789]: I0202 22:21:41.715119 4789 scope.go:117] "RemoveContainer" containerID="a8ca67c7c63bf894e64cd0f5bba30d44b9121f81c4232d8ded5a7d96793b445b"
Feb 02 22:21:41 crc kubenswrapper[4789]: E0202 22:21:41.715401 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8ca67c7c63bf894e64cd0f5bba30d44b9121f81c4232d8ded5a7d96793b445b\": container with ID starting with a8ca67c7c63bf894e64cd0f5bba30d44b9121f81c4232d8ded5a7d96793b445b not found: ID does not exist" containerID="a8ca67c7c63bf894e64cd0f5bba30d44b9121f81c4232d8ded5a7d96793b445b"
Feb 02 22:21:41 crc kubenswrapper[4789]: I0202 22:21:41.715424 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8ca67c7c63bf894e64cd0f5bba30d44b9121f81c4232d8ded5a7d96793b445b"} err="failed to get container status \"a8ca67c7c63bf894e64cd0f5bba30d44b9121f81c4232d8ded5a7d96793b445b\": rpc error: code = NotFound desc = could not find container \"a8ca67c7c63bf894e64cd0f5bba30d44b9121f81c4232d8ded5a7d96793b445b\": container with ID starting with a8ca67c7c63bf894e64cd0f5bba30d44b9121f81c4232d8ded5a7d96793b445b not found: ID does not exist"
Feb 02 22:21:41 crc kubenswrapper[4789]: I0202 22:21:41.741942 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf5e09b2-7688-43b0-b09a-29055f2e39a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bf5e09b2-7688-43b0-b09a-29055f2e39a2" (UID: "bf5e09b2-7688-43b0-b09a-29055f2e39a2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 22:21:41 crc kubenswrapper[4789]: I0202 22:21:41.771044 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf5e09b2-7688-43b0-b09a-29055f2e39a2-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 22:21:41 crc kubenswrapper[4789]: I0202 22:21:41.771086 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf5e09b2-7688-43b0-b09a-29055f2e39a2-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 22:21:41 crc kubenswrapper[4789]: I0202 22:21:41.771110 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8n9b9\" (UniqueName: \"kubernetes.io/projected/bf5e09b2-7688-43b0-b09a-29055f2e39a2-kube-api-access-8n9b9\") on node \"crc\" DevicePath \"\""
Feb 02 22:21:41 crc kubenswrapper[4789]: I0202 22:21:41.978264 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lzxz2"]
Feb 02 22:21:42 crc kubenswrapper[4789]: I0202 22:21:42.000737 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lzxz2"]
Feb 02 22:21:42 crc kubenswrapper[4789]: I0202 22:21:42.431774 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf5e09b2-7688-43b0-b09a-29055f2e39a2" path="/var/lib/kubelet/pods/bf5e09b2-7688-43b0-b09a-29055f2e39a2/volumes"
Feb 02 22:22:52 crc kubenswrapper[4789]: I0202 22:22:52.841937 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 22:22:52 crc kubenswrapper[4789]: I0202 22:22:52.842995 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 22:23:22 crc kubenswrapper[4789]: I0202 22:23:22.841309 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 22:23:22 crc kubenswrapper[4789]: I0202 22:23:22.841983 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 22:23:52 crc kubenswrapper[4789]: I0202 22:23:52.841845 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 22:23:52 crc kubenswrapper[4789]: I0202 22:23:52.842463 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 22:23:52 crc kubenswrapper[4789]: I0202 22:23:52.842519 4789 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn"
Feb 02 22:23:52 crc kubenswrapper[4789]: I0202 22:23:52.843219 4789 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1ebe55b80a46ca02419a86f340c3d531948d8c1a4ab746ab97382212887e9c19"} pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 22:23:52 crc kubenswrapper[4789]: I0202 22:23:52.843291 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" containerID="cri-o://1ebe55b80a46ca02419a86f340c3d531948d8c1a4ab746ab97382212887e9c19" gracePeriod=600
Feb 02 22:23:52 crc kubenswrapper[4789]: E0202 22:23:52.959533 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e"
Feb 02 22:23:53 crc kubenswrapper[4789]: I0202 22:23:53.899718 4789 generic.go:334] "Generic (PLEG): container finished" podID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerID="1ebe55b80a46ca02419a86f340c3d531948d8c1a4ab746ab97382212887e9c19" exitCode=0
Feb 02 22:23:53 crc kubenswrapper[4789]: I0202 22:23:53.899791 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" event={"ID":"bdf018b4-1451-4d37-be6e-05802b67c73e","Type":"ContainerDied","Data":"1ebe55b80a46ca02419a86f340c3d531948d8c1a4ab746ab97382212887e9c19"}
Feb 02 22:23:53 crc kubenswrapper[4789]: I0202 22:23:53.899868 4789 scope.go:117] "RemoveContainer" containerID="96c3640b08a2d30b80dcfd7b1113d61c8f3861a4e1e91de5042c11e5f8470460"
Feb 02 22:23:53 crc kubenswrapper[4789]: I0202 22:23:53.900760 4789 scope.go:117] "RemoveContainer" containerID="1ebe55b80a46ca02419a86f340c3d531948d8c1a4ab746ab97382212887e9c19"
Feb 02 22:23:53 crc kubenswrapper[4789]: E0202 22:23:53.901801 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e"
Feb 02 22:24:08 crc kubenswrapper[4789]: I0202 22:24:08.420285 4789 scope.go:117] "RemoveContainer" containerID="1ebe55b80a46ca02419a86f340c3d531948d8c1a4ab746ab97382212887e9c19"
Feb 02 22:24:08 crc kubenswrapper[4789]: E0202 22:24:08.420988 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e"
Feb 02 22:24:20 crc kubenswrapper[4789]: I0202 22:24:20.424982 4789 scope.go:117] "RemoveContainer" containerID="1ebe55b80a46ca02419a86f340c3d531948d8c1a4ab746ab97382212887e9c19"
Feb 02 22:24:20 crc kubenswrapper[4789]: E0202 22:24:20.425991 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e"
Feb 02 22:24:33 crc kubenswrapper[4789]: I0202 22:24:33.420785 4789 scope.go:117] "RemoveContainer" containerID="1ebe55b80a46ca02419a86f340c3d531948d8c1a4ab746ab97382212887e9c19"
Feb 02 22:24:33 crc kubenswrapper[4789]: E0202 22:24:33.422099 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e"
Feb 02 22:24:46 crc kubenswrapper[4789]: I0202 22:24:46.419170 4789 scope.go:117] "RemoveContainer" containerID="1ebe55b80a46ca02419a86f340c3d531948d8c1a4ab746ab97382212887e9c19"
Feb 02 22:24:46 crc kubenswrapper[4789]: E0202 22:24:46.419765 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e"
Feb 02 22:24:57 crc kubenswrapper[4789]: I0202 22:24:57.418956 4789 scope.go:117] "RemoveContainer" containerID="1ebe55b80a46ca02419a86f340c3d531948d8c1a4ab746ab97382212887e9c19"
Feb 02 22:24:57 crc kubenswrapper[4789]: E0202 22:24:57.419618 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e"
Feb 02 22:25:12 crc kubenswrapper[4789]: I0202 22:25:12.420128 4789 scope.go:117] "RemoveContainer" containerID="1ebe55b80a46ca02419a86f340c3d531948d8c1a4ab746ab97382212887e9c19"
Feb 02 22:25:12 crc kubenswrapper[4789]: E0202 22:25:12.421328 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e"
Feb 02 22:25:24 crc kubenswrapper[4789]: I0202 22:25:24.419991 4789 scope.go:117] "RemoveContainer" containerID="1ebe55b80a46ca02419a86f340c3d531948d8c1a4ab746ab97382212887e9c19"
Feb 02 22:25:24 crc kubenswrapper[4789]: E0202 22:25:24.421106 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e"
Feb 02 22:25:39 crc kubenswrapper[4789]: I0202 22:25:39.420698 4789 scope.go:117] "RemoveContainer" containerID="1ebe55b80a46ca02419a86f340c3d531948d8c1a4ab746ab97382212887e9c19"
Feb 02 22:25:39 crc kubenswrapper[4789]: E0202 22:25:39.421808 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e"
Feb 02 22:25:51 crc kubenswrapper[4789]: I0202 22:25:51.420282 4789 scope.go:117] "RemoveContainer" containerID="1ebe55b80a46ca02419a86f340c3d531948d8c1a4ab746ab97382212887e9c19"
Feb 02 22:25:51 crc kubenswrapper[4789]: E0202 22:25:51.421310 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e"
Feb 02 22:25:53 crc kubenswrapper[4789]: I0202 22:25:53.830202 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dtbg9"]
Feb 02 22:25:53 crc kubenswrapper[4789]: E0202 22:25:53.831909 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf5e09b2-7688-43b0-b09a-29055f2e39a2" containerName="extract-utilities"
Feb 02 22:25:53 crc kubenswrapper[4789]: I0202 22:25:53.832185 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf5e09b2-7688-43b0-b09a-29055f2e39a2" containerName="extract-utilities"
Feb 02 22:25:53 crc kubenswrapper[4789]: E0202 22:25:53.832373 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf5e09b2-7688-43b0-b09a-29055f2e39a2" containerName="registry-server"
Feb 02 22:25:53 crc kubenswrapper[4789]: I0202 22:25:53.832490 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf5e09b2-7688-43b0-b09a-29055f2e39a2" containerName="registry-server"
Feb 02 22:25:53 crc kubenswrapper[4789]: E0202 22:25:53.832651 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf5e09b2-7688-43b0-b09a-29055f2e39a2" containerName="extract-content"
Feb 02 22:25:53 crc kubenswrapper[4789]: I0202 22:25:53.832783 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf5e09b2-7688-43b0-b09a-29055f2e39a2" containerName="extract-content"
Feb 02 22:25:53 crc kubenswrapper[4789]: I0202 22:25:53.833145 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf5e09b2-7688-43b0-b09a-29055f2e39a2" containerName="registry-server"
Feb 02 22:25:53 crc kubenswrapper[4789]: I0202 22:25:53.834914 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dtbg9"
Feb 02 22:25:53 crc kubenswrapper[4789]: I0202 22:25:53.844901 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dtbg9"]
Feb 02 22:25:53 crc kubenswrapper[4789]: I0202 22:25:53.913736 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85c1c7-40a6-4c10-aee2-a89846f85506-utilities\") pod \"redhat-operators-dtbg9\" (UID: \"cc85c1c7-40a6-4c10-aee2-a89846f85506\") " pod="openshift-marketplace/redhat-operators-dtbg9"
Feb 02 22:25:53 crc kubenswrapper[4789]: I0202 22:25:53.913942 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nddnm\" (UniqueName: \"kubernetes.io/projected/cc85c1c7-40a6-4c10-aee2-a89846f85506-kube-api-access-nddnm\") pod \"redhat-operators-dtbg9\" (UID: \"cc85c1c7-40a6-4c10-aee2-a89846f85506\") " pod="openshift-marketplace/redhat-operators-dtbg9"
Feb 02 22:25:53 crc kubenswrapper[4789]: I0202 22:25:53.914014 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85c1c7-40a6-4c10-aee2-a89846f85506-catalog-content\") pod \"redhat-operators-dtbg9\" (UID: \"cc85c1c7-40a6-4c10-aee2-a89846f85506\") " pod="openshift-marketplace/redhat-operators-dtbg9"
Feb 02 22:25:54 crc kubenswrapper[4789]: I0202 22:25:54.015254 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85c1c7-40a6-4c10-aee2-a89846f85506-catalog-content\") pod \"redhat-operators-dtbg9\" (UID: \"cc85c1c7-40a6-4c10-aee2-a89846f85506\") " pod="openshift-marketplace/redhat-operators-dtbg9"
Feb 02 22:25:54 crc kubenswrapper[4789]: I0202 22:25:54.015413 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85c1c7-40a6-4c10-aee2-a89846f85506-utilities\") pod \"redhat-operators-dtbg9\" (UID: \"cc85c1c7-40a6-4c10-aee2-a89846f85506\") " pod="openshift-marketplace/redhat-operators-dtbg9"
Feb 02 22:25:54 crc kubenswrapper[4789]: I0202 22:25:54.015564 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nddnm\" (UniqueName: \"kubernetes.io/projected/cc85c1c7-40a6-4c10-aee2-a89846f85506-kube-api-access-nddnm\") pod \"redhat-operators-dtbg9\" (UID: \"cc85c1c7-40a6-4c10-aee2-a89846f85506\") " pod="openshift-marketplace/redhat-operators-dtbg9"
Feb 02 22:25:54 crc kubenswrapper[4789]: I0202 22:25:54.016020 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85c1c7-40a6-4c10-aee2-a89846f85506-catalog-content\") pod \"redhat-operators-dtbg9\" (UID: \"cc85c1c7-40a6-4c10-aee2-a89846f85506\") " pod="openshift-marketplace/redhat-operators-dtbg9"
Feb 02 22:25:54 crc kubenswrapper[4789]: I0202 22:25:54.016053 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85c1c7-40a6-4c10-aee2-a89846f85506-utilities\") pod \"redhat-operators-dtbg9\" (UID: \"cc85c1c7-40a6-4c10-aee2-a89846f85506\") " pod="openshift-marketplace/redhat-operators-dtbg9"
Feb 02 22:25:54 crc kubenswrapper[4789]: I0202 22:25:54.043740 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nddnm\" (UniqueName: \"kubernetes.io/projected/cc85c1c7-40a6-4c10-aee2-a89846f85506-kube-api-access-nddnm\") pod \"redhat-operators-dtbg9\" (UID: \"cc85c1c7-40a6-4c10-aee2-a89846f85506\") " pod="openshift-marketplace/redhat-operators-dtbg9"
Feb 02 22:25:54 crc kubenswrapper[4789]: I0202 22:25:54.172910 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dtbg9"
Feb 02 22:25:54 crc kubenswrapper[4789]: I0202 22:25:54.615177 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dtbg9"]
Feb 02 22:25:55 crc kubenswrapper[4789]: I0202 22:25:55.438916 4789 generic.go:334] "Generic (PLEG): container finished" podID="cc85c1c7-40a6-4c10-aee2-a89846f85506" containerID="d192259385742dae44cb2c67332956d67efd7339f56b4a756087d3b89e23e5d7" exitCode=0
Feb 02 22:25:55 crc kubenswrapper[4789]: I0202 22:25:55.438985 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dtbg9" event={"ID":"cc85c1c7-40a6-4c10-aee2-a89846f85506","Type":"ContainerDied","Data":"d192259385742dae44cb2c67332956d67efd7339f56b4a756087d3b89e23e5d7"}
Feb 02 22:25:55 crc kubenswrapper[4789]: I0202 22:25:55.439015 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dtbg9" event={"ID":"cc85c1c7-40a6-4c10-aee2-a89846f85506","Type":"ContainerStarted","Data":"e54435419de0f3430027406e11574b13affdfdcac1f2ea0843fa049e65f31a8a"}
Feb 02 22:25:56 crc kubenswrapper[4789]: I0202 22:25:56.450229 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dtbg9" event={"ID":"cc85c1c7-40a6-4c10-aee2-a89846f85506","Type":"ContainerStarted","Data":"5311f3376a456d260ac973b54a755a48da7a719d7baacb5aa9c9cffe1fd89358"}
Feb 02 22:25:57 crc kubenswrapper[4789]: I0202 22:25:57.461512 4789 generic.go:334] "Generic (PLEG): container finished" podID="cc85c1c7-40a6-4c10-aee2-a89846f85506" containerID="5311f3376a456d260ac973b54a755a48da7a719d7baacb5aa9c9cffe1fd89358" exitCode=0
Feb 02 22:25:57 crc kubenswrapper[4789]: I0202 22:25:57.461574 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dtbg9" event={"ID":"cc85c1c7-40a6-4c10-aee2-a89846f85506","Type":"ContainerDied","Data":"5311f3376a456d260ac973b54a755a48da7a719d7baacb5aa9c9cffe1fd89358"}
Feb 02 22:25:58 crc
kubenswrapper[4789]: I0202 22:25:58.479083 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dtbg9" event={"ID":"cc85c1c7-40a6-4c10-aee2-a89846f85506","Type":"ContainerStarted","Data":"fc4fbf762d96d2d2d0c31f4bfbeb11170aedd8beb8bae906d777c5095d7241b3"} Feb 02 22:25:58 crc kubenswrapper[4789]: I0202 22:25:58.515164 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dtbg9" podStartSLOduration=3.05612521 podStartE2EDuration="5.515138752s" podCreationTimestamp="2026-02-02 22:25:53 +0000 UTC" firstStartedPulling="2026-02-02 22:25:55.441813415 +0000 UTC m=+3975.736838434" lastFinishedPulling="2026-02-02 22:25:57.900826927 +0000 UTC m=+3978.195851976" observedRunningTime="2026-02-02 22:25:58.508060262 +0000 UTC m=+3978.803085371" watchObservedRunningTime="2026-02-02 22:25:58.515138752 +0000 UTC m=+3978.810163811" Feb 02 22:26:04 crc kubenswrapper[4789]: I0202 22:26:04.173510 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dtbg9" Feb 02 22:26:04 crc kubenswrapper[4789]: I0202 22:26:04.174165 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dtbg9" Feb 02 22:26:05 crc kubenswrapper[4789]: I0202 22:26:05.229728 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dtbg9" podUID="cc85c1c7-40a6-4c10-aee2-a89846f85506" containerName="registry-server" probeResult="failure" output=< Feb 02 22:26:05 crc kubenswrapper[4789]: timeout: failed to connect service ":50051" within 1s Feb 02 22:26:05 crc kubenswrapper[4789]: > Feb 02 22:26:05 crc kubenswrapper[4789]: I0202 22:26:05.420847 4789 scope.go:117] "RemoveContainer" containerID="1ebe55b80a46ca02419a86f340c3d531948d8c1a4ab746ab97382212887e9c19" Feb 02 22:26:05 crc kubenswrapper[4789]: E0202 22:26:05.421809 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:26:14 crc kubenswrapper[4789]: I0202 22:26:14.262185 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dtbg9" Feb 02 22:26:14 crc kubenswrapper[4789]: I0202 22:26:14.328263 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dtbg9" Feb 02 22:26:14 crc kubenswrapper[4789]: I0202 22:26:14.516788 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dtbg9"] Feb 02 22:26:15 crc kubenswrapper[4789]: I0202 22:26:15.638012 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dtbg9" podUID="cc85c1c7-40a6-4c10-aee2-a89846f85506" containerName="registry-server" containerID="cri-o://fc4fbf762d96d2d2d0c31f4bfbeb11170aedd8beb8bae906d777c5095d7241b3" gracePeriod=2 Feb 02 22:26:16 crc kubenswrapper[4789]: I0202 22:26:16.161637 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dtbg9" Feb 02 22:26:16 crc kubenswrapper[4789]: I0202 22:26:16.285169 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85c1c7-40a6-4c10-aee2-a89846f85506-catalog-content\") pod \"cc85c1c7-40a6-4c10-aee2-a89846f85506\" (UID: \"cc85c1c7-40a6-4c10-aee2-a89846f85506\") " Feb 02 22:26:16 crc kubenswrapper[4789]: I0202 22:26:16.292965 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nddnm\" (UniqueName: \"kubernetes.io/projected/cc85c1c7-40a6-4c10-aee2-a89846f85506-kube-api-access-nddnm\") pod \"cc85c1c7-40a6-4c10-aee2-a89846f85506\" (UID: \"cc85c1c7-40a6-4c10-aee2-a89846f85506\") " Feb 02 22:26:16 crc kubenswrapper[4789]: I0202 22:26:16.293030 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85c1c7-40a6-4c10-aee2-a89846f85506-utilities\") pod \"cc85c1c7-40a6-4c10-aee2-a89846f85506\" (UID: \"cc85c1c7-40a6-4c10-aee2-a89846f85506\") " Feb 02 22:26:16 crc kubenswrapper[4789]: I0202 22:26:16.293994 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85c1c7-40a6-4c10-aee2-a89846f85506-utilities" (OuterVolumeSpecName: "utilities") pod "cc85c1c7-40a6-4c10-aee2-a89846f85506" (UID: "cc85c1c7-40a6-4c10-aee2-a89846f85506"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 22:26:16 crc kubenswrapper[4789]: I0202 22:26:16.302827 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85c1c7-40a6-4c10-aee2-a89846f85506-kube-api-access-nddnm" (OuterVolumeSpecName: "kube-api-access-nddnm") pod "cc85c1c7-40a6-4c10-aee2-a89846f85506" (UID: "cc85c1c7-40a6-4c10-aee2-a89846f85506"). InnerVolumeSpecName "kube-api-access-nddnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 22:26:16 crc kubenswrapper[4789]: I0202 22:26:16.395225 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nddnm\" (UniqueName: \"kubernetes.io/projected/cc85c1c7-40a6-4c10-aee2-a89846f85506-kube-api-access-nddnm\") on node \"crc\" DevicePath \"\"" Feb 02 22:26:16 crc kubenswrapper[4789]: I0202 22:26:16.395258 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85c1c7-40a6-4c10-aee2-a89846f85506-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 22:26:16 crc kubenswrapper[4789]: I0202 22:26:16.433104 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85c1c7-40a6-4c10-aee2-a89846f85506-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85c1c7-40a6-4c10-aee2-a89846f85506" (UID: "cc85c1c7-40a6-4c10-aee2-a89846f85506"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 22:26:16 crc kubenswrapper[4789]: I0202 22:26:16.496176 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85c1c7-40a6-4c10-aee2-a89846f85506-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 22:26:16 crc kubenswrapper[4789]: I0202 22:26:16.650028 4789 generic.go:334] "Generic (PLEG): container finished" podID="cc85c1c7-40a6-4c10-aee2-a89846f85506" containerID="fc4fbf762d96d2d2d0c31f4bfbeb11170aedd8beb8bae906d777c5095d7241b3" exitCode=0 Feb 02 22:26:16 crc kubenswrapper[4789]: I0202 22:26:16.650082 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dtbg9" event={"ID":"cc85c1c7-40a6-4c10-aee2-a89846f85506","Type":"ContainerDied","Data":"fc4fbf762d96d2d2d0c31f4bfbeb11170aedd8beb8bae906d777c5095d7241b3"} Feb 02 22:26:16 crc kubenswrapper[4789]: I0202 22:26:16.650126 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dtbg9" event={"ID":"cc85c1c7-40a6-4c10-aee2-a89846f85506","Type":"ContainerDied","Data":"e54435419de0f3430027406e11574b13affdfdcac1f2ea0843fa049e65f31a8a"} Feb 02 22:26:16 crc kubenswrapper[4789]: I0202 22:26:16.650155 4789 scope.go:117] "RemoveContainer" containerID="fc4fbf762d96d2d2d0c31f4bfbeb11170aedd8beb8bae906d777c5095d7241b3" Feb 02 22:26:16 crc kubenswrapper[4789]: I0202 22:26:16.650155 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dtbg9" Feb 02 22:26:16 crc kubenswrapper[4789]: I0202 22:26:16.679306 4789 scope.go:117] "RemoveContainer" containerID="5311f3376a456d260ac973b54a755a48da7a719d7baacb5aa9c9cffe1fd89358" Feb 02 22:26:16 crc kubenswrapper[4789]: I0202 22:26:16.720566 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dtbg9"] Feb 02 22:26:16 crc kubenswrapper[4789]: I0202 22:26:16.721948 4789 scope.go:117] "RemoveContainer" containerID="d192259385742dae44cb2c67332956d67efd7339f56b4a756087d3b89e23e5d7" Feb 02 22:26:16 crc kubenswrapper[4789]: I0202 22:26:16.735545 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dtbg9"] Feb 02 22:26:16 crc kubenswrapper[4789]: I0202 22:26:16.757407 4789 scope.go:117] "RemoveContainer" containerID="fc4fbf762d96d2d2d0c31f4bfbeb11170aedd8beb8bae906d777c5095d7241b3" Feb 02 22:26:16 crc kubenswrapper[4789]: E0202 22:26:16.757988 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc4fbf762d96d2d2d0c31f4bfbeb11170aedd8beb8bae906d777c5095d7241b3\": container with ID starting with fc4fbf762d96d2d2d0c31f4bfbeb11170aedd8beb8bae906d777c5095d7241b3 not found: ID does not exist" containerID="fc4fbf762d96d2d2d0c31f4bfbeb11170aedd8beb8bae906d777c5095d7241b3" Feb 02 22:26:16 crc kubenswrapper[4789]: I0202 22:26:16.758055 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc4fbf762d96d2d2d0c31f4bfbeb11170aedd8beb8bae906d777c5095d7241b3"} err="failed to get container status \"fc4fbf762d96d2d2d0c31f4bfbeb11170aedd8beb8bae906d777c5095d7241b3\": rpc error: code = NotFound desc = could not find container \"fc4fbf762d96d2d2d0c31f4bfbeb11170aedd8beb8bae906d777c5095d7241b3\": container with ID starting with fc4fbf762d96d2d2d0c31f4bfbeb11170aedd8beb8bae906d777c5095d7241b3 not found: ID does not exist" Feb 02 22:26:16 crc 
kubenswrapper[4789]: I0202 22:26:16.758096 4789 scope.go:117] "RemoveContainer" containerID="5311f3376a456d260ac973b54a755a48da7a719d7baacb5aa9c9cffe1fd89358" Feb 02 22:26:16 crc kubenswrapper[4789]: E0202 22:26:16.758798 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5311f3376a456d260ac973b54a755a48da7a719d7baacb5aa9c9cffe1fd89358\": container with ID starting with 5311f3376a456d260ac973b54a755a48da7a719d7baacb5aa9c9cffe1fd89358 not found: ID does not exist" containerID="5311f3376a456d260ac973b54a755a48da7a719d7baacb5aa9c9cffe1fd89358" Feb 02 22:26:16 crc kubenswrapper[4789]: I0202 22:26:16.758860 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5311f3376a456d260ac973b54a755a48da7a719d7baacb5aa9c9cffe1fd89358"} err="failed to get container status \"5311f3376a456d260ac973b54a755a48da7a719d7baacb5aa9c9cffe1fd89358\": rpc error: code = NotFound desc = could not find container \"5311f3376a456d260ac973b54a755a48da7a719d7baacb5aa9c9cffe1fd89358\": container with ID starting with 5311f3376a456d260ac973b54a755a48da7a719d7baacb5aa9c9cffe1fd89358 not found: ID does not exist" Feb 02 22:26:16 crc kubenswrapper[4789]: I0202 22:26:16.758894 4789 scope.go:117] "RemoveContainer" containerID="d192259385742dae44cb2c67332956d67efd7339f56b4a756087d3b89e23e5d7" Feb 02 22:26:16 crc kubenswrapper[4789]: E0202 22:26:16.759477 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d192259385742dae44cb2c67332956d67efd7339f56b4a756087d3b89e23e5d7\": container with ID starting with d192259385742dae44cb2c67332956d67efd7339f56b4a756087d3b89e23e5d7 not found: ID does not exist" containerID="d192259385742dae44cb2c67332956d67efd7339f56b4a756087d3b89e23e5d7" Feb 02 22:26:16 crc kubenswrapper[4789]: I0202 22:26:16.759538 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d192259385742dae44cb2c67332956d67efd7339f56b4a756087d3b89e23e5d7"} err="failed to get container status \"d192259385742dae44cb2c67332956d67efd7339f56b4a756087d3b89e23e5d7\": rpc error: code = NotFound desc = could not find container \"d192259385742dae44cb2c67332956d67efd7339f56b4a756087d3b89e23e5d7\": container with ID starting with d192259385742dae44cb2c67332956d67efd7339f56b4a756087d3b89e23e5d7 not found: ID does not exist" Feb 02 22:26:18 crc kubenswrapper[4789]: I0202 22:26:18.419900 4789 scope.go:117] "RemoveContainer" containerID="1ebe55b80a46ca02419a86f340c3d531948d8c1a4ab746ab97382212887e9c19" Feb 02 22:26:18 crc kubenswrapper[4789]: E0202 22:26:18.420645 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:26:18 crc kubenswrapper[4789]: I0202 22:26:18.428686 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85c1c7-40a6-4c10-aee2-a89846f85506" path="/var/lib/kubelet/pods/cc85c1c7-40a6-4c10-aee2-a89846f85506/volumes" Feb 02 22:26:33 crc kubenswrapper[4789]: I0202 22:26:33.419915 4789 scope.go:117] "RemoveContainer" containerID="1ebe55b80a46ca02419a86f340c3d531948d8c1a4ab746ab97382212887e9c19" 
Feb 02 22:26:33 crc kubenswrapper[4789]: E0202 22:26:33.421258 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:26:46 crc kubenswrapper[4789]: I0202 22:26:46.292802 4789 scope.go:117] "RemoveContainer" containerID="1ebe55b80a46ca02419a86f340c3d531948d8c1a4ab746ab97382212887e9c19" Feb 02 22:26:46 crc kubenswrapper[4789]: E0202 22:26:46.293437 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:26:58 crc kubenswrapper[4789]: I0202 22:26:58.419736 4789 scope.go:117] "RemoveContainer" containerID="1ebe55b80a46ca02419a86f340c3d531948d8c1a4ab746ab97382212887e9c19" Feb 02 22:26:58 crc kubenswrapper[4789]: E0202 22:26:58.420603 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:27:09 crc kubenswrapper[4789]: I0202 22:27:09.419495 4789 scope.go:117] "RemoveContainer" containerID="1ebe55b80a46ca02419a86f340c3d531948d8c1a4ab746ab97382212887e9c19" Feb 02 22:27:09 crc kubenswrapper[4789]: E0202 22:27:09.420652 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:27:21 crc kubenswrapper[4789]: I0202 22:27:21.419721 4789 scope.go:117] "RemoveContainer" containerID="1ebe55b80a46ca02419a86f340c3d531948d8c1a4ab746ab97382212887e9c19" Feb 02 22:27:21 crc kubenswrapper[4789]: E0202 22:27:21.420836 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:27:34 crc kubenswrapper[4789]: I0202 22:27:34.420136 4789 scope.go:117] "RemoveContainer" containerID="1ebe55b80a46ca02419a86f340c3d531948d8c1a4ab746ab97382212887e9c19" Feb 02 22:27:34 crc kubenswrapper[4789]: E0202 22:27:34.421213 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:27:45 crc kubenswrapper[4789]: I0202 22:27:45.419967 4789 scope.go:117] "RemoveContainer" containerID="1ebe55b80a46ca02419a86f340c3d531948d8c1a4ab746ab97382212887e9c19" Feb 02 22:27:45 crc kubenswrapper[4789]: E0202 22:27:45.421352 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:27:59 crc kubenswrapper[4789]: I0202 22:27:59.420574 4789 scope.go:117] "RemoveContainer" containerID="1ebe55b80a46ca02419a86f340c3d531948d8c1a4ab746ab97382212887e9c19" Feb 02 22:27:59 crc kubenswrapper[4789]: E0202 22:27:59.424385 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:28:11 crc kubenswrapper[4789]: I0202 22:28:11.420991 4789 scope.go:117] "RemoveContainer" containerID="1ebe55b80a46ca02419a86f340c3d531948d8c1a4ab746ab97382212887e9c19" Feb 02 22:28:11 crc kubenswrapper[4789]: E0202 22:28:11.421999 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:28:14 crc kubenswrapper[4789]: I0202 22:28:14.323189 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8v7cr"] Feb 02 22:28:14 crc kubenswrapper[4789]: E0202 22:28:14.323920 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc85c1c7-40a6-4c10-aee2-a89846f85506" containerName="extract-utilities" Feb 02 22:28:14 crc kubenswrapper[4789]: I0202 22:28:14.323937 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc85c1c7-40a6-4c10-aee2-a89846f85506" containerName="extract-utilities" Feb 02 22:28:14 crc kubenswrapper[4789]: E0202 22:28:14.323957 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc85c1c7-40a6-4c10-aee2-a89846f85506" containerName="extract-content" Feb 02 22:28:14 crc kubenswrapper[4789]: I0202 22:28:14.323965 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc85c1c7-40a6-4c10-aee2-a89846f85506" containerName="extract-content" Feb 02 22:28:14 crc kubenswrapper[4789]: E0202 22:28:14.323982 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc85c1c7-40a6-4c10-aee2-a89846f85506" containerName="registry-server" Feb 02 22:28:14 crc kubenswrapper[4789]: I0202 
22:28:14.323993 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc85c1c7-40a6-4c10-aee2-a89846f85506" containerName="registry-server" Feb 02 22:28:14 crc kubenswrapper[4789]: I0202 22:28:14.324169 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc85c1c7-40a6-4c10-aee2-a89846f85506" containerName="registry-server" Feb 02 22:28:14 crc kubenswrapper[4789]: I0202 22:28:14.325315 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8v7cr" Feb 02 22:28:14 crc kubenswrapper[4789]: I0202 22:28:14.336975 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfc071fd-9cdf-4411-a4bc-9158241c9c95-catalog-content\") pod \"redhat-marketplace-8v7cr\" (UID: \"dfc071fd-9cdf-4411-a4bc-9158241c9c95\") " pod="openshift-marketplace/redhat-marketplace-8v7cr" Feb 02 22:28:14 crc kubenswrapper[4789]: I0202 22:28:14.337128 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfc071fd-9cdf-4411-a4bc-9158241c9c95-utilities\") pod \"redhat-marketplace-8v7cr\" (UID: \"dfc071fd-9cdf-4411-a4bc-9158241c9c95\") " pod="openshift-marketplace/redhat-marketplace-8v7cr" Feb 02 22:28:14 crc kubenswrapper[4789]: I0202 22:28:14.337341 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr4pl\" (UniqueName: \"kubernetes.io/projected/dfc071fd-9cdf-4411-a4bc-9158241c9c95-kube-api-access-fr4pl\") pod \"redhat-marketplace-8v7cr\" (UID: \"dfc071fd-9cdf-4411-a4bc-9158241c9c95\") " pod="openshift-marketplace/redhat-marketplace-8v7cr" Feb 02 22:28:14 crc kubenswrapper[4789]: I0202 22:28:14.366830 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8v7cr"] Feb 02 22:28:14 crc kubenswrapper[4789]: I0202 22:28:14.438904 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfc071fd-9cdf-4411-a4bc-9158241c9c95-catalog-content\") pod \"redhat-marketplace-8v7cr\" (UID: \"dfc071fd-9cdf-4411-a4bc-9158241c9c95\") " pod="openshift-marketplace/redhat-marketplace-8v7cr" Feb 02 22:28:14 crc kubenswrapper[4789]: I0202 22:28:14.439082 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfc071fd-9cdf-4411-a4bc-9158241c9c95-utilities\") pod \"redhat-marketplace-8v7cr\" (UID: \"dfc071fd-9cdf-4411-a4bc-9158241c9c95\") " pod="openshift-marketplace/redhat-marketplace-8v7cr" Feb 02 22:28:14 crc kubenswrapper[4789]: I0202 22:28:14.439295 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr4pl\" (UniqueName: \"kubernetes.io/projected/dfc071fd-9cdf-4411-a4bc-9158241c9c95-kube-api-access-fr4pl\") pod \"redhat-marketplace-8v7cr\" (UID: \"dfc071fd-9cdf-4411-a4bc-9158241c9c95\") " pod="openshift-marketplace/redhat-marketplace-8v7cr" Feb 02 22:28:14 crc kubenswrapper[4789]: I0202 22:28:14.439795 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfc071fd-9cdf-4411-a4bc-9158241c9c95-catalog-content\") pod \"redhat-marketplace-8v7cr\" (UID: \"dfc071fd-9cdf-4411-a4bc-9158241c9c95\") " pod="openshift-marketplace/redhat-marketplace-8v7cr" Feb 02 22:28:14 crc 
kubenswrapper[4789]: I0202 22:28:14.441070 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfc071fd-9cdf-4411-a4bc-9158241c9c95-utilities\") pod \"redhat-marketplace-8v7cr\" (UID: \"dfc071fd-9cdf-4411-a4bc-9158241c9c95\") " pod="openshift-marketplace/redhat-marketplace-8v7cr" Feb 02 22:28:14 crc kubenswrapper[4789]: I0202 22:28:14.466182 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr4pl\" (UniqueName: \"kubernetes.io/projected/dfc071fd-9cdf-4411-a4bc-9158241c9c95-kube-api-access-fr4pl\") pod \"redhat-marketplace-8v7cr\" (UID: \"dfc071fd-9cdf-4411-a4bc-9158241c9c95\") " pod="openshift-marketplace/redhat-marketplace-8v7cr" Feb 02 22:28:14 crc kubenswrapper[4789]: I0202 22:28:14.665638 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8v7cr" Feb 02 22:28:15 crc kubenswrapper[4789]: I0202 22:28:15.131250 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8v7cr"] Feb 02 22:28:16 crc kubenswrapper[4789]: I0202 22:28:16.134715 4789 generic.go:334] "Generic (PLEG): container finished" podID="dfc071fd-9cdf-4411-a4bc-9158241c9c95" containerID="f41a9016a5a907fff2a415b4e714958d1cd20d17a73dcb63724ad81c12a54f5d" exitCode=0 Feb 02 22:28:16 crc kubenswrapper[4789]: I0202 22:28:16.134797 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8v7cr" event={"ID":"dfc071fd-9cdf-4411-a4bc-9158241c9c95","Type":"ContainerDied","Data":"f41a9016a5a907fff2a415b4e714958d1cd20d17a73dcb63724ad81c12a54f5d"} Feb 02 22:28:16 crc kubenswrapper[4789]: I0202 22:28:16.134883 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8v7cr" event={"ID":"dfc071fd-9cdf-4411-a4bc-9158241c9c95","Type":"ContainerStarted","Data":"c5325e64c75395049b196f93b5cb880516a94dfe328db1cb0647f8449c80d76f"} Feb 02 22:28:16 crc kubenswrapper[4789]: I0202 22:28:16.138299 4789 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 22:28:18 crc kubenswrapper[4789]: I0202 22:28:18.162452 4789 generic.go:334] "Generic (PLEG): container finished" podID="dfc071fd-9cdf-4411-a4bc-9158241c9c95" containerID="24102f02fcc8f9e199f16326c2b81d06d5d5f898b47772aae4c106c139f3ea6d" exitCode=0 Feb 02 22:28:18 crc kubenswrapper[4789]: I0202 22:28:18.162564 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8v7cr" event={"ID":"dfc071fd-9cdf-4411-a4bc-9158241c9c95","Type":"ContainerDied","Data":"24102f02fcc8f9e199f16326c2b81d06d5d5f898b47772aae4c106c139f3ea6d"} Feb 02 22:28:19 crc kubenswrapper[4789]: I0202 22:28:19.175497 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8v7cr" event={"ID":"dfc071fd-9cdf-4411-a4bc-9158241c9c95","Type":"ContainerStarted","Data":"3b6b947c25def018355b52124c4e7bfb6292382069f0b5c95dfb46ddbae8ddfa"} Feb 02 22:28:19 crc kubenswrapper[4789]: I0202 22:28:19.204432 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8v7cr" podStartSLOduration=2.559761273 podStartE2EDuration="5.204415433s" podCreationTimestamp="2026-02-02 22:28:14 +0000 UTC" firstStartedPulling="2026-02-02 22:28:16.137735234 +0000 UTC m=+4116.432760293" lastFinishedPulling="2026-02-02 22:28:18.782389404 +0000 UTC m=+4119.077414453" 
observedRunningTime="2026-02-02 22:28:19.202138739 +0000 UTC m=+4119.497163808" watchObservedRunningTime="2026-02-02 22:28:19.204415433 +0000 UTC m=+4119.499440462" Feb 02 22:28:24 crc kubenswrapper[4789]: I0202 22:28:24.666718 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8v7cr" Feb 02 22:28:24 crc kubenswrapper[4789]: I0202 22:28:24.667504 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8v7cr" Feb 02 22:28:24 crc kubenswrapper[4789]: I0202 22:28:24.739174 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8v7cr" Feb 02 22:28:25 crc kubenswrapper[4789]: I0202 22:28:25.315979 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8v7cr" Feb 02 22:28:25 crc kubenswrapper[4789]: I0202 22:28:25.390484 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8v7cr"] Feb 02 22:28:25 crc kubenswrapper[4789]: I0202 22:28:25.419387 4789 scope.go:117] "RemoveContainer" containerID="1ebe55b80a46ca02419a86f340c3d531948d8c1a4ab746ab97382212887e9c19" Feb 02 22:28:25 crc kubenswrapper[4789]: E0202 22:28:25.419690 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:28:27 crc kubenswrapper[4789]: I0202 22:28:27.256442 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8v7cr" podUID="dfc071fd-9cdf-4411-a4bc-9158241c9c95" containerName="registry-server" containerID="cri-o://3b6b947c25def018355b52124c4e7bfb6292382069f0b5c95dfb46ddbae8ddfa" gracePeriod=2 Feb 02 22:28:27 crc kubenswrapper[4789]: I0202 22:28:27.748885 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8v7cr" Feb 02 22:28:27 crc kubenswrapper[4789]: I0202 22:28:27.757476 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fr4pl\" (UniqueName: \"kubernetes.io/projected/dfc071fd-9cdf-4411-a4bc-9158241c9c95-kube-api-access-fr4pl\") pod \"dfc071fd-9cdf-4411-a4bc-9158241c9c95\" (UID: \"dfc071fd-9cdf-4411-a4bc-9158241c9c95\") " Feb 02 22:28:27 crc kubenswrapper[4789]: I0202 22:28:27.757610 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfc071fd-9cdf-4411-a4bc-9158241c9c95-utilities\") pod \"dfc071fd-9cdf-4411-a4bc-9158241c9c95\" (UID: \"dfc071fd-9cdf-4411-a4bc-9158241c9c95\") " Feb 02 22:28:27 crc kubenswrapper[4789]: I0202 22:28:27.757659 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfc071fd-9cdf-4411-a4bc-9158241c9c95-catalog-content\") pod \"dfc071fd-9cdf-4411-a4bc-9158241c9c95\" (UID: \"dfc071fd-9cdf-4411-a4bc-9158241c9c95\") " Feb 02 22:28:27 crc kubenswrapper[4789]: I0202 22:28:27.758853 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfc071fd-9cdf-4411-a4bc-9158241c9c95-utilities" (OuterVolumeSpecName: "utilities") pod "dfc071fd-9cdf-4411-a4bc-9158241c9c95" (UID: "dfc071fd-9cdf-4411-a4bc-9158241c9c95"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 22:28:27 crc kubenswrapper[4789]: I0202 22:28:27.764912 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfc071fd-9cdf-4411-a4bc-9158241c9c95-kube-api-access-fr4pl" (OuterVolumeSpecName: "kube-api-access-fr4pl") pod "dfc071fd-9cdf-4411-a4bc-9158241c9c95" (UID: "dfc071fd-9cdf-4411-a4bc-9158241c9c95"). InnerVolumeSpecName "kube-api-access-fr4pl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 22:28:27 crc kubenswrapper[4789]: I0202 22:28:27.791300 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfc071fd-9cdf-4411-a4bc-9158241c9c95-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dfc071fd-9cdf-4411-a4bc-9158241c9c95" (UID: "dfc071fd-9cdf-4411-a4bc-9158241c9c95"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 22:28:27 crc kubenswrapper[4789]: I0202 22:28:27.859503 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfc071fd-9cdf-4411-a4bc-9158241c9c95-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 22:28:27 crc kubenswrapper[4789]: I0202 22:28:27.859549 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfc071fd-9cdf-4411-a4bc-9158241c9c95-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 22:28:27 crc kubenswrapper[4789]: I0202 22:28:27.859566 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fr4pl\" (UniqueName: \"kubernetes.io/projected/dfc071fd-9cdf-4411-a4bc-9158241c9c95-kube-api-access-fr4pl\") on node \"crc\" DevicePath \"\"" Feb 02 22:28:28 crc kubenswrapper[4789]: I0202 22:28:28.273484 4789 generic.go:334] "Generic (PLEG): container finished" podID="dfc071fd-9cdf-4411-a4bc-9158241c9c95" containerID="3b6b947c25def018355b52124c4e7bfb6292382069f0b5c95dfb46ddbae8ddfa" exitCode=0 Feb 02 22:28:28 crc kubenswrapper[4789]: I0202 22:28:28.273534 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8v7cr" event={"ID":"dfc071fd-9cdf-4411-a4bc-9158241c9c95","Type":"ContainerDied","Data":"3b6b947c25def018355b52124c4e7bfb6292382069f0b5c95dfb46ddbae8ddfa"} Feb 02 22:28:28 crc kubenswrapper[4789]: I0202 22:28:28.273568 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8v7cr" event={"ID":"dfc071fd-9cdf-4411-a4bc-9158241c9c95","Type":"ContainerDied","Data":"c5325e64c75395049b196f93b5cb880516a94dfe328db1cb0647f8449c80d76f"} Feb 02 22:28:28 crc kubenswrapper[4789]: I0202 22:28:28.273609 4789 scope.go:117] "RemoveContainer" containerID="3b6b947c25def018355b52124c4e7bfb6292382069f0b5c95dfb46ddbae8ddfa" Feb 02 22:28:28 crc kubenswrapper[4789]: I0202 22:28:28.273651 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8v7cr" Feb 02 22:28:28 crc kubenswrapper[4789]: I0202 22:28:28.305994 4789 scope.go:117] "RemoveContainer" containerID="24102f02fcc8f9e199f16326c2b81d06d5d5f898b47772aae4c106c139f3ea6d" Feb 02 22:28:28 crc kubenswrapper[4789]: I0202 22:28:28.326785 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8v7cr"] Feb 02 22:28:28 crc kubenswrapper[4789]: I0202 22:28:28.336971 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8v7cr"] Feb 02 22:28:28 crc kubenswrapper[4789]: I0202 22:28:28.342428 4789 scope.go:117] "RemoveContainer" containerID="f41a9016a5a907fff2a415b4e714958d1cd20d17a73dcb63724ad81c12a54f5d" Feb 02 22:28:28 crc kubenswrapper[4789]: I0202 22:28:28.375263 4789 scope.go:117] "RemoveContainer" containerID="3b6b947c25def018355b52124c4e7bfb6292382069f0b5c95dfb46ddbae8ddfa" Feb 02 22:28:28 crc kubenswrapper[4789]: E0202 22:28:28.377715 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b6b947c25def018355b52124c4e7bfb6292382069f0b5c95dfb46ddbae8ddfa\": container with ID starting with 3b6b947c25def018355b52124c4e7bfb6292382069f0b5c95dfb46ddbae8ddfa not found: ID does not exist" containerID="3b6b947c25def018355b52124c4e7bfb6292382069f0b5c95dfb46ddbae8ddfa" Feb 02 22:28:28 crc kubenswrapper[4789]: I0202 22:28:28.377766 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b6b947c25def018355b52124c4e7bfb6292382069f0b5c95dfb46ddbae8ddfa"} err="failed to get container status \"3b6b947c25def018355b52124c4e7bfb6292382069f0b5c95dfb46ddbae8ddfa\": rpc error: code = NotFound desc = could not find container \"3b6b947c25def018355b52124c4e7bfb6292382069f0b5c95dfb46ddbae8ddfa\": container with ID starting with 3b6b947c25def018355b52124c4e7bfb6292382069f0b5c95dfb46ddbae8ddfa not found: ID does not exist" Feb 02 22:28:28 crc kubenswrapper[4789]: I0202 22:28:28.377798 4789 scope.go:117] "RemoveContainer" containerID="24102f02fcc8f9e199f16326c2b81d06d5d5f898b47772aae4c106c139f3ea6d" Feb 02 22:28:28 crc kubenswrapper[4789]: E0202 22:28:28.378224 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24102f02fcc8f9e199f16326c2b81d06d5d5f898b47772aae4c106c139f3ea6d\": container with ID starting with 24102f02fcc8f9e199f16326c2b81d06d5d5f898b47772aae4c106c139f3ea6d not found: ID does not exist" containerID="24102f02fcc8f9e199f16326c2b81d06d5d5f898b47772aae4c106c139f3ea6d" Feb 02 22:28:28 crc kubenswrapper[4789]: I0202 22:28:28.378312 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24102f02fcc8f9e199f16326c2b81d06d5d5f898b47772aae4c106c139f3ea6d"} err="failed to get container status \"24102f02fcc8f9e199f16326c2b81d06d5d5f898b47772aae4c106c139f3ea6d\": rpc error: code = NotFound desc = could not find container \"24102f02fcc8f9e199f16326c2b81d06d5d5f898b47772aae4c106c139f3ea6d\": container with ID starting with 24102f02fcc8f9e199f16326c2b81d06d5d5f898b47772aae4c106c139f3ea6d not found: ID does not exist" Feb 02 22:28:28 crc kubenswrapper[4789]: I0202 22:28:28.378343 4789 scope.go:117] "RemoveContainer" containerID="f41a9016a5a907fff2a415b4e714958d1cd20d17a73dcb63724ad81c12a54f5d" Feb 02 22:28:28 crc kubenswrapper[4789]: E0202 22:28:28.379019 4789 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f41a9016a5a907fff2a415b4e714958d1cd20d17a73dcb63724ad81c12a54f5d\": container with ID starting with f41a9016a5a907fff2a415b4e714958d1cd20d17a73dcb63724ad81c12a54f5d not found: ID does not exist" containerID="f41a9016a5a907fff2a415b4e714958d1cd20d17a73dcb63724ad81c12a54f5d" Feb 02 22:28:28 crc kubenswrapper[4789]: I0202 22:28:28.379080 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f41a9016a5a907fff2a415b4e714958d1cd20d17a73dcb63724ad81c12a54f5d"} err="failed to get container status \"f41a9016a5a907fff2a415b4e714958d1cd20d17a73dcb63724ad81c12a54f5d\": rpc error: code = NotFound desc = could not find container \"f41a9016a5a907fff2a415b4e714958d1cd20d17a73dcb63724ad81c12a54f5d\": container with ID starting with f41a9016a5a907fff2a415b4e714958d1cd20d17a73dcb63724ad81c12a54f5d not found: ID does not exist" Feb 02 22:28:28 crc kubenswrapper[4789]: I0202 22:28:28.438078 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfc071fd-9cdf-4411-a4bc-9158241c9c95" path="/var/lib/kubelet/pods/dfc071fd-9cdf-4411-a4bc-9158241c9c95/volumes" Feb 02 22:28:37 crc kubenswrapper[4789]: I0202 22:28:37.419862 4789 scope.go:117] "RemoveContainer" containerID="1ebe55b80a46ca02419a86f340c3d531948d8c1a4ab746ab97382212887e9c19" Feb 02 22:28:37 crc kubenswrapper[4789]: E0202 22:28:37.420737 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:28:49 crc kubenswrapper[4789]: I0202 22:28:49.420091 4789 scope.go:117] "RemoveContainer" containerID="1ebe55b80a46ca02419a86f340c3d531948d8c1a4ab746ab97382212887e9c19" Feb 02 22:28:49 crc kubenswrapper[4789]: E0202 22:28:49.421684 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:29:02 crc kubenswrapper[4789]: I0202 22:29:02.419660 4789 scope.go:117] "RemoveContainer" containerID="1ebe55b80a46ca02419a86f340c3d531948d8c1a4ab746ab97382212887e9c19" Feb 02 22:29:03 crc kubenswrapper[4789]: I0202 22:29:03.582924 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" event={"ID":"bdf018b4-1451-4d37-be6e-05802b67c73e","Type":"ContainerStarted","Data":"3cbc0891eb0dc9eeb16bb56bec643de77ebfd7d6061e907d3c7af4e6dc51d56b"} Feb 02 22:29:26 crc kubenswrapper[4789]: I0202 22:29:26.941995 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xsxxv"] Feb 02 22:29:26 crc kubenswrapper[4789]: E0202 22:29:26.951322 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfc071fd-9cdf-4411-a4bc-9158241c9c95" containerName="extract-utilities" Feb 02 22:29:26 crc kubenswrapper[4789]: I0202 22:29:26.951362 4789 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="dfc071fd-9cdf-4411-a4bc-9158241c9c95" containerName="extract-utilities" Feb 02 22:29:26 crc kubenswrapper[4789]: E0202 22:29:26.951399 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfc071fd-9cdf-4411-a4bc-9158241c9c95" containerName="registry-server" Feb 02 22:29:26 crc kubenswrapper[4789]: I0202 22:29:26.951416 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfc071fd-9cdf-4411-a4bc-9158241c9c95" containerName="registry-server" Feb 02 22:29:26 crc kubenswrapper[4789]: E0202 22:29:26.951465 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfc071fd-9cdf-4411-a4bc-9158241c9c95" containerName="extract-content" Feb 02 22:29:26 crc kubenswrapper[4789]: I0202 22:29:26.951483 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfc071fd-9cdf-4411-a4bc-9158241c9c95" containerName="extract-content" Feb 02 22:29:26 crc kubenswrapper[4789]: I0202 22:29:26.951854 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfc071fd-9cdf-4411-a4bc-9158241c9c95" containerName="registry-server" Feb 02 22:29:26 crc kubenswrapper[4789]: I0202 22:29:26.954014 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xsxxv" Feb 02 22:29:26 crc kubenswrapper[4789]: I0202 22:29:26.987391 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xsxxv"] Feb 02 22:29:27 crc kubenswrapper[4789]: I0202 22:29:27.127750 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cf8d6d6-dda3-4a5e-ad05-1400e35be373-utilities\") pod \"certified-operators-xsxxv\" (UID: \"5cf8d6d6-dda3-4a5e-ad05-1400e35be373\") " pod="openshift-marketplace/certified-operators-xsxxv" Feb 02 22:29:27 crc kubenswrapper[4789]: I0202 22:29:27.127852 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cf8d6d6-dda3-4a5e-ad05-1400e35be373-catalog-content\") pod \"certified-operators-xsxxv\" (UID: \"5cf8d6d6-dda3-4a5e-ad05-1400e35be373\") " pod="openshift-marketplace/certified-operators-xsxxv" Feb 02 22:29:27 crc kubenswrapper[4789]: I0202 22:29:27.127894 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgqz4\" (UniqueName: \"kubernetes.io/projected/5cf8d6d6-dda3-4a5e-ad05-1400e35be373-kube-api-access-sgqz4\") pod \"certified-operators-xsxxv\" (UID: \"5cf8d6d6-dda3-4a5e-ad05-1400e35be373\") " pod="openshift-marketplace/certified-operators-xsxxv" Feb 02 22:29:27 crc kubenswrapper[4789]: I0202 22:29:27.229090 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cf8d6d6-dda3-4a5e-ad05-1400e35be373-catalog-content\") pod \"certified-operators-xsxxv\" (UID: \"5cf8d6d6-dda3-4a5e-ad05-1400e35be373\") " pod="openshift-marketplace/certified-operators-xsxxv" Feb 02 22:29:27 crc kubenswrapper[4789]: I0202 22:29:27.229161 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgqz4\" (UniqueName: \"kubernetes.io/projected/5cf8d6d6-dda3-4a5e-ad05-1400e35be373-kube-api-access-sgqz4\") pod \"certified-operators-xsxxv\" (UID: \"5cf8d6d6-dda3-4a5e-ad05-1400e35be373\") " pod="openshift-marketplace/certified-operators-xsxxv" Feb 02 22:29:27 crc kubenswrapper[4789]: I0202 
22:29:27.229273 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cf8d6d6-dda3-4a5e-ad05-1400e35be373-utilities\") pod \"certified-operators-xsxxv\" (UID: \"5cf8d6d6-dda3-4a5e-ad05-1400e35be373\") " pod="openshift-marketplace/certified-operators-xsxxv" Feb 02 22:29:27 crc kubenswrapper[4789]: I0202 22:29:27.229575 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cf8d6d6-dda3-4a5e-ad05-1400e35be373-catalog-content\") pod \"certified-operators-xsxxv\" (UID: \"5cf8d6d6-dda3-4a5e-ad05-1400e35be373\") " pod="openshift-marketplace/certified-operators-xsxxv" Feb 02 22:29:27 crc kubenswrapper[4789]: I0202 22:29:27.229675 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cf8d6d6-dda3-4a5e-ad05-1400e35be373-utilities\") pod \"certified-operators-xsxxv\" (UID: \"5cf8d6d6-dda3-4a5e-ad05-1400e35be373\") " pod="openshift-marketplace/certified-operators-xsxxv" Feb 02 22:29:27 crc kubenswrapper[4789]: I0202 22:29:27.251755 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgqz4\" (UniqueName: \"kubernetes.io/projected/5cf8d6d6-dda3-4a5e-ad05-1400e35be373-kube-api-access-sgqz4\") pod \"certified-operators-xsxxv\" (UID: \"5cf8d6d6-dda3-4a5e-ad05-1400e35be373\") " pod="openshift-marketplace/certified-operators-xsxxv" Feb 02 22:29:27 crc kubenswrapper[4789]: I0202 22:29:27.325270 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xsxxv" Feb 02 22:29:27 crc kubenswrapper[4789]: I0202 22:29:27.790907 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xsxxv"] Feb 02 22:29:28 crc kubenswrapper[4789]: I0202 22:29:28.813285 4789 generic.go:334] "Generic (PLEG): container finished" podID="5cf8d6d6-dda3-4a5e-ad05-1400e35be373" containerID="098170e7656ddc608d6737e7d752c2df9cce1945db09bbe5e7705db3260689d4" exitCode=0 Feb 02 22:29:28 crc kubenswrapper[4789]: I0202 22:29:28.813401 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xsxxv" event={"ID":"5cf8d6d6-dda3-4a5e-ad05-1400e35be373","Type":"ContainerDied","Data":"098170e7656ddc608d6737e7d752c2df9cce1945db09bbe5e7705db3260689d4"} Feb 02 22:29:28 crc kubenswrapper[4789]: I0202 22:29:28.813691 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xsxxv" event={"ID":"5cf8d6d6-dda3-4a5e-ad05-1400e35be373","Type":"ContainerStarted","Data":"e0079468c945c61ded9ac5d30d3b63e4fda50b5733666fccddd2a90b917883ff"} Feb 02 22:29:29 crc kubenswrapper[4789]: I0202 22:29:29.825565 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xsxxv" event={"ID":"5cf8d6d6-dda3-4a5e-ad05-1400e35be373","Type":"ContainerStarted","Data":"e74b7f0df57cdfb899dfc1a50913d0553361a4f8f9cc86d8b58aa3328ae51dd5"} Feb 02 22:29:30 crc kubenswrapper[4789]: I0202 22:29:30.837068 4789 generic.go:334] "Generic (PLEG): container finished" podID="5cf8d6d6-dda3-4a5e-ad05-1400e35be373" containerID="e74b7f0df57cdfb899dfc1a50913d0553361a4f8f9cc86d8b58aa3328ae51dd5" exitCode=0 Feb 02 22:29:30 crc kubenswrapper[4789]: I0202 22:29:30.837143 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xsxxv" 
event={"ID":"5cf8d6d6-dda3-4a5e-ad05-1400e35be373","Type":"ContainerDied","Data":"e74b7f0df57cdfb899dfc1a50913d0553361a4f8f9cc86d8b58aa3328ae51dd5"} Feb 02 22:29:31 crc kubenswrapper[4789]: I0202 22:29:31.849309 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xsxxv" event={"ID":"5cf8d6d6-dda3-4a5e-ad05-1400e35be373","Type":"ContainerStarted","Data":"2f88ccaec1189862560463e19b015dc56f76d8848a2cd986d86fe393bd21b87f"} Feb 02 22:29:31 crc kubenswrapper[4789]: I0202 22:29:31.889412 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xsxxv" podStartSLOduration=3.469647209 podStartE2EDuration="5.889371722s" podCreationTimestamp="2026-02-02 22:29:26 +0000 UTC" firstStartedPulling="2026-02-02 22:29:28.815385196 +0000 UTC m=+4189.110410245" lastFinishedPulling="2026-02-02 22:29:31.235109699 +0000 UTC m=+4191.530134758" observedRunningTime="2026-02-02 22:29:31.87829291 +0000 UTC m=+4192.173318089" watchObservedRunningTime="2026-02-02 22:29:31.889371722 +0000 UTC m=+4192.184396781" Feb 02 22:29:37 crc kubenswrapper[4789]: I0202 22:29:37.326703 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xsxxv" Feb 02 22:29:37 crc kubenswrapper[4789]: I0202 22:29:37.327391 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xsxxv" Feb 02 22:29:37 crc kubenswrapper[4789]: I0202 22:29:37.398030 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xsxxv" Feb 02 22:29:37 crc kubenswrapper[4789]: I0202 22:29:37.981183 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xsxxv" Feb 02 22:29:38 crc kubenswrapper[4789]: I0202 22:29:38.048265 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xsxxv"] Feb 02 22:29:39 crc kubenswrapper[4789]: I0202 22:29:39.921473 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xsxxv" podUID="5cf8d6d6-dda3-4a5e-ad05-1400e35be373" containerName="registry-server" containerID="cri-o://2f88ccaec1189862560463e19b015dc56f76d8848a2cd986d86fe393bd21b87f" gracePeriod=2 Feb 02 22:29:40 crc kubenswrapper[4789]: I0202 22:29:40.934635 4789 generic.go:334] "Generic (PLEG): container finished" podID="5cf8d6d6-dda3-4a5e-ad05-1400e35be373" containerID="2f88ccaec1189862560463e19b015dc56f76d8848a2cd986d86fe393bd21b87f" exitCode=0 Feb 02 22:29:40 crc kubenswrapper[4789]: I0202 22:29:40.934731 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xsxxv" event={"ID":"5cf8d6d6-dda3-4a5e-ad05-1400e35be373","Type":"ContainerDied","Data":"2f88ccaec1189862560463e19b015dc56f76d8848a2cd986d86fe393bd21b87f"} Feb 02 22:29:40 crc kubenswrapper[4789]: I0202 22:29:40.935018 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xsxxv" event={"ID":"5cf8d6d6-dda3-4a5e-ad05-1400e35be373","Type":"ContainerDied","Data":"e0079468c945c61ded9ac5d30d3b63e4fda50b5733666fccddd2a90b917883ff"} Feb 02 22:29:40 crc kubenswrapper[4789]: I0202 22:29:40.935047 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0079468c945c61ded9ac5d30d3b63e4fda50b5733666fccddd2a90b917883ff" Feb 02 22:29:40 crc 
Feb 02 22:29:40 crc kubenswrapper[4789]: I0202 22:29:40.940990 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xsxxv"
Feb 02 22:29:41 crc kubenswrapper[4789]: I0202 22:29:41.109471 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cf8d6d6-dda3-4a5e-ad05-1400e35be373-utilities\") pod \"5cf8d6d6-dda3-4a5e-ad05-1400e35be373\" (UID: \"5cf8d6d6-dda3-4a5e-ad05-1400e35be373\") "
Feb 02 22:29:41 crc kubenswrapper[4789]: I0202 22:29:41.109627 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgqz4\" (UniqueName: \"kubernetes.io/projected/5cf8d6d6-dda3-4a5e-ad05-1400e35be373-kube-api-access-sgqz4\") pod \"5cf8d6d6-dda3-4a5e-ad05-1400e35be373\" (UID: \"5cf8d6d6-dda3-4a5e-ad05-1400e35be373\") "
Feb 02 22:29:41 crc kubenswrapper[4789]: I0202 22:29:41.109693 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cf8d6d6-dda3-4a5e-ad05-1400e35be373-catalog-content\") pod \"5cf8d6d6-dda3-4a5e-ad05-1400e35be373\" (UID: \"5cf8d6d6-dda3-4a5e-ad05-1400e35be373\") "
Feb 02 22:29:41 crc kubenswrapper[4789]: I0202 22:29:41.110927 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cf8d6d6-dda3-4a5e-ad05-1400e35be373-utilities" (OuterVolumeSpecName: "utilities") pod "5cf8d6d6-dda3-4a5e-ad05-1400e35be373" (UID: "5cf8d6d6-dda3-4a5e-ad05-1400e35be373"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 22:29:41 crc kubenswrapper[4789]: I0202 22:29:41.123757 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cf8d6d6-dda3-4a5e-ad05-1400e35be373-kube-api-access-sgqz4" (OuterVolumeSpecName: "kube-api-access-sgqz4") pod "5cf8d6d6-dda3-4a5e-ad05-1400e35be373" (UID: "5cf8d6d6-dda3-4a5e-ad05-1400e35be373"). InnerVolumeSpecName "kube-api-access-sgqz4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 22:29:41 crc kubenswrapper[4789]: I0202 22:29:41.177569 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cf8d6d6-dda3-4a5e-ad05-1400e35be373-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5cf8d6d6-dda3-4a5e-ad05-1400e35be373" (UID: "5cf8d6d6-dda3-4a5e-ad05-1400e35be373"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 22:29:41 crc kubenswrapper[4789]: I0202 22:29:41.212425 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cf8d6d6-dda3-4a5e-ad05-1400e35be373-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 22:29:41 crc kubenswrapper[4789]: I0202 22:29:41.212483 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgqz4\" (UniqueName: \"kubernetes.io/projected/5cf8d6d6-dda3-4a5e-ad05-1400e35be373-kube-api-access-sgqz4\") on node \"crc\" DevicePath \"\""
Feb 02 22:29:41 crc kubenswrapper[4789]: I0202 22:29:41.212517 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cf8d6d6-dda3-4a5e-ad05-1400e35be373-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 22:29:41 crc kubenswrapper[4789]: I0202 22:29:41.944480 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xsxxv" Feb 02 22:29:42 crc kubenswrapper[4789]: I0202 22:29:42.005240 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xsxxv"] Feb 02 22:29:42 crc kubenswrapper[4789]: I0202 22:29:42.015636 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xsxxv"] Feb 02 22:29:42 crc kubenswrapper[4789]: I0202 22:29:42.435523 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cf8d6d6-dda3-4a5e-ad05-1400e35be373" path="/var/lib/kubelet/pods/5cf8d6d6-dda3-4a5e-ad05-1400e35be373/volumes" Feb 02 22:30:00 crc kubenswrapper[4789]: I0202 22:30:00.243744 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501190-6kpnw"] Feb 02 22:30:00 crc kubenswrapper[4789]: E0202 22:30:00.244472 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cf8d6d6-dda3-4a5e-ad05-1400e35be373" containerName="extract-utilities" Feb 02 22:30:00 crc kubenswrapper[4789]: I0202 22:30:00.244489 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cf8d6d6-dda3-4a5e-ad05-1400e35be373" containerName="extract-utilities" Feb 02 22:30:00 crc kubenswrapper[4789]: E0202 22:30:00.244513 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cf8d6d6-dda3-4a5e-ad05-1400e35be373" containerName="extract-content" Feb 02 22:30:00 crc kubenswrapper[4789]: I0202 22:30:00.244522 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cf8d6d6-dda3-4a5e-ad05-1400e35be373" containerName="extract-content" Feb 02 22:30:00 crc kubenswrapper[4789]: E0202 22:30:00.244548 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cf8d6d6-dda3-4a5e-ad05-1400e35be373" containerName="registry-server" Feb 02 22:30:00 crc kubenswrapper[4789]: I0202 22:30:00.244557 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cf8d6d6-dda3-4a5e-ad05-1400e35be373" containerName="registry-server" Feb 02 22:30:00 crc kubenswrapper[4789]: I0202 22:30:00.244744 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cf8d6d6-dda3-4a5e-ad05-1400e35be373" containerName="registry-server" Feb 02 22:30:00 crc kubenswrapper[4789]: I0202 22:30:00.245254 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501190-6kpnw" Feb 02 22:30:00 crc kubenswrapper[4789]: I0202 22:30:00.250241 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 22:30:00 crc kubenswrapper[4789]: I0202 22:30:00.250401 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 22:30:00 crc kubenswrapper[4789]: I0202 22:30:00.266760 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501190-6kpnw"] Feb 02 22:30:00 crc kubenswrapper[4789]: I0202 22:30:00.340300 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5272b0dc-9c79-439e-98f3-04cdeb5c1b84-config-volume\") pod \"collect-profiles-29501190-6kpnw\" (UID: \"5272b0dc-9c79-439e-98f3-04cdeb5c1b84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501190-6kpnw" Feb 02 22:30:00 crc kubenswrapper[4789]: I0202 22:30:00.340346 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxdcq\" (UniqueName: \"kubernetes.io/projected/5272b0dc-9c79-439e-98f3-04cdeb5c1b84-kube-api-access-rxdcq\") pod \"collect-profiles-29501190-6kpnw\" (UID: \"5272b0dc-9c79-439e-98f3-04cdeb5c1b84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501190-6kpnw" Feb 02 22:30:00 crc kubenswrapper[4789]: I0202 22:30:00.340363 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5272b0dc-9c79-439e-98f3-04cdeb5c1b84-secret-volume\") pod \"collect-profiles-29501190-6kpnw\" (UID: \"5272b0dc-9c79-439e-98f3-04cdeb5c1b84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501190-6kpnw" Feb 02 22:30:00 crc kubenswrapper[4789]: I0202 22:30:00.442019 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxdcq\" (UniqueName: \"kubernetes.io/projected/5272b0dc-9c79-439e-98f3-04cdeb5c1b84-kube-api-access-rxdcq\") pod \"collect-profiles-29501190-6kpnw\" (UID: \"5272b0dc-9c79-439e-98f3-04cdeb5c1b84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501190-6kpnw" Feb 02 22:30:00 crc kubenswrapper[4789]: I0202 22:30:00.442259 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5272b0dc-9c79-439e-98f3-04cdeb5c1b84-secret-volume\") pod \"collect-profiles-29501190-6kpnw\" (UID: \"5272b0dc-9c79-439e-98f3-04cdeb5c1b84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501190-6kpnw" Feb 02 22:30:00 crc kubenswrapper[4789]: I0202 22:30:00.442441 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5272b0dc-9c79-439e-98f3-04cdeb5c1b84-config-volume\") pod \"collect-profiles-29501190-6kpnw\" (UID: \"5272b0dc-9c79-439e-98f3-04cdeb5c1b84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501190-6kpnw" Feb 02 22:30:00 crc kubenswrapper[4789]: I0202 22:30:00.443444 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5272b0dc-9c79-439e-98f3-04cdeb5c1b84-config-volume\") pod 
\"collect-profiles-29501190-6kpnw\" (UID: \"5272b0dc-9c79-439e-98f3-04cdeb5c1b84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501190-6kpnw" Feb 02 22:30:00 crc kubenswrapper[4789]: I0202 22:30:00.449233 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5272b0dc-9c79-439e-98f3-04cdeb5c1b84-secret-volume\") pod \"collect-profiles-29501190-6kpnw\" (UID: \"5272b0dc-9c79-439e-98f3-04cdeb5c1b84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501190-6kpnw" Feb 02 22:30:00 crc kubenswrapper[4789]: I0202 22:30:00.464076 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxdcq\" (UniqueName: \"kubernetes.io/projected/5272b0dc-9c79-439e-98f3-04cdeb5c1b84-kube-api-access-rxdcq\") pod \"collect-profiles-29501190-6kpnw\" (UID: \"5272b0dc-9c79-439e-98f3-04cdeb5c1b84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501190-6kpnw" Feb 02 22:30:00 crc kubenswrapper[4789]: I0202 22:30:00.567886 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501190-6kpnw" Feb 02 22:30:00 crc kubenswrapper[4789]: I0202 22:30:00.852396 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501190-6kpnw"] Feb 02 22:30:01 crc kubenswrapper[4789]: I0202 22:30:01.138253 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501190-6kpnw" event={"ID":"5272b0dc-9c79-439e-98f3-04cdeb5c1b84","Type":"ContainerStarted","Data":"df885ac8b37ba6900317572b66d5c1dcc92a0c235d7ad9aea4f375b581e86519"} Feb 02 22:30:01 crc kubenswrapper[4789]: I0202 22:30:01.138961 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501190-6kpnw" event={"ID":"5272b0dc-9c79-439e-98f3-04cdeb5c1b84","Type":"ContainerStarted","Data":"df97336556db2548c6324ff9d7e83db4d4bf812f290936b16fc8578868350d47"} Feb 02 22:30:01 crc kubenswrapper[4789]: I0202 22:30:01.170168 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29501190-6kpnw" podStartSLOduration=1.170139102 podStartE2EDuration="1.170139102s" podCreationTimestamp="2026-02-02 22:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 22:30:01.160301094 +0000 UTC m=+4221.455326143" watchObservedRunningTime="2026-02-02 22:30:01.170139102 +0000 UTC m=+4221.465164161" Feb 02 22:30:02 crc kubenswrapper[4789]: I0202 22:30:02.156332 4789 generic.go:334] "Generic (PLEG): container finished" podID="5272b0dc-9c79-439e-98f3-04cdeb5c1b84" containerID="df885ac8b37ba6900317572b66d5c1dcc92a0c235d7ad9aea4f375b581e86519" exitCode=0 Feb 02 22:30:02 crc kubenswrapper[4789]: I0202 22:30:02.156478 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501190-6kpnw" event={"ID":"5272b0dc-9c79-439e-98f3-04cdeb5c1b84","Type":"ContainerDied","Data":"df885ac8b37ba6900317572b66d5c1dcc92a0c235d7ad9aea4f375b581e86519"} Feb 02 22:30:03 crc kubenswrapper[4789]: I0202 22:30:03.526914 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501190-6kpnw" Feb 02 22:30:03 crc kubenswrapper[4789]: I0202 22:30:03.696951 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxdcq\" (UniqueName: \"kubernetes.io/projected/5272b0dc-9c79-439e-98f3-04cdeb5c1b84-kube-api-access-rxdcq\") pod \"5272b0dc-9c79-439e-98f3-04cdeb5c1b84\" (UID: \"5272b0dc-9c79-439e-98f3-04cdeb5c1b84\") " Feb 02 22:30:03 crc kubenswrapper[4789]: I0202 22:30:03.697117 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5272b0dc-9c79-439e-98f3-04cdeb5c1b84-secret-volume\") pod \"5272b0dc-9c79-439e-98f3-04cdeb5c1b84\" (UID: \"5272b0dc-9c79-439e-98f3-04cdeb5c1b84\") " Feb 02 22:30:03 crc kubenswrapper[4789]: I0202 22:30:03.697250 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5272b0dc-9c79-439e-98f3-04cdeb5c1b84-config-volume\") pod \"5272b0dc-9c79-439e-98f3-04cdeb5c1b84\" (UID: \"5272b0dc-9c79-439e-98f3-04cdeb5c1b84\") " Feb 02 22:30:03 crc kubenswrapper[4789]: I0202 22:30:03.698011 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5272b0dc-9c79-439e-98f3-04cdeb5c1b84-config-volume" (OuterVolumeSpecName: "config-volume") pod "5272b0dc-9c79-439e-98f3-04cdeb5c1b84" (UID: "5272b0dc-9c79-439e-98f3-04cdeb5c1b84"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 22:30:03 crc kubenswrapper[4789]: I0202 22:30:03.699180 4789 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5272b0dc-9c79-439e-98f3-04cdeb5c1b84-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 22:30:03 crc kubenswrapper[4789]: I0202 22:30:03.707113 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5272b0dc-9c79-439e-98f3-04cdeb5c1b84-kube-api-access-rxdcq" (OuterVolumeSpecName: "kube-api-access-rxdcq") pod "5272b0dc-9c79-439e-98f3-04cdeb5c1b84" (UID: "5272b0dc-9c79-439e-98f3-04cdeb5c1b84"). InnerVolumeSpecName "kube-api-access-rxdcq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 22:30:03 crc kubenswrapper[4789]: I0202 22:30:03.707696 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5272b0dc-9c79-439e-98f3-04cdeb5c1b84-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5272b0dc-9c79-439e-98f3-04cdeb5c1b84" (UID: "5272b0dc-9c79-439e-98f3-04cdeb5c1b84"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 22:30:03 crc kubenswrapper[4789]: I0202 22:30:03.800106 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxdcq\" (UniqueName: \"kubernetes.io/projected/5272b0dc-9c79-439e-98f3-04cdeb5c1b84-kube-api-access-rxdcq\") on node \"crc\" DevicePath \"\"" Feb 02 22:30:03 crc kubenswrapper[4789]: I0202 22:30:03.800143 4789 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5272b0dc-9c79-439e-98f3-04cdeb5c1b84-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 22:30:04 crc kubenswrapper[4789]: I0202 22:30:04.178749 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501190-6kpnw" event={"ID":"5272b0dc-9c79-439e-98f3-04cdeb5c1b84","Type":"ContainerDied","Data":"df97336556db2548c6324ff9d7e83db4d4bf812f290936b16fc8578868350d47"} Feb 02 22:30:04 crc kubenswrapper[4789]: I0202 22:30:04.178806 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df97336556db2548c6324ff9d7e83db4d4bf812f290936b16fc8578868350d47" Feb 02 22:30:04 crc kubenswrapper[4789]: I0202 22:30:04.178848 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501190-6kpnw" Feb 02 22:30:04 crc kubenswrapper[4789]: I0202 22:30:04.630555 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501145-wht77"] Feb 02 22:30:04 crc kubenswrapper[4789]: I0202 22:30:04.640980 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501145-wht77"] Feb 02 22:30:06 crc kubenswrapper[4789]: I0202 22:30:06.434351 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="603e47a2-9b11-440a-92cb-4e869da257cf" path="/var/lib/kubelet/pods/603e47a2-9b11-440a-92cb-4e869da257cf/volumes" Feb 02 22:30:47 crc kubenswrapper[4789]: I0202 22:30:47.344547 4789 scope.go:117] "RemoveContainer" containerID="f170f83f360a287e662f5da2e77de61b5529a442188a7800b6a935241f283768" Feb 02 22:31:22 crc kubenswrapper[4789]: I0202 22:31:22.841434 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 22:31:22 crc kubenswrapper[4789]: I0202 22:31:22.842278 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 22:31:52 crc kubenswrapper[4789]: I0202 22:31:52.155224 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8x64g"] Feb 02 22:31:52 crc kubenswrapper[4789]: E0202 22:31:52.156515 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5272b0dc-9c79-439e-98f3-04cdeb5c1b84" containerName="collect-profiles" Feb 02 22:31:52 crc kubenswrapper[4789]: I0202 22:31:52.156539 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="5272b0dc-9c79-439e-98f3-04cdeb5c1b84" containerName="collect-profiles" Feb 02 22:31:52 crc kubenswrapper[4789]: 
I0202 22:31:52.156867 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="5272b0dc-9c79-439e-98f3-04cdeb5c1b84" containerName="collect-profiles" Feb 02 22:31:52 crc kubenswrapper[4789]: I0202 22:31:52.158646 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8x64g" Feb 02 22:31:52 crc kubenswrapper[4789]: I0202 22:31:52.167474 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8x64g"] Feb 02 22:31:52 crc kubenswrapper[4789]: I0202 22:31:52.306392 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e759e28-291e-43b4-b856-8c89fd7af5ae-utilities\") pod \"community-operators-8x64g\" (UID: \"2e759e28-291e-43b4-b856-8c89fd7af5ae\") " pod="openshift-marketplace/community-operators-8x64g" Feb 02 22:31:52 crc kubenswrapper[4789]: I0202 22:31:52.306438 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qznjr\" (UniqueName: \"kubernetes.io/projected/2e759e28-291e-43b4-b856-8c89fd7af5ae-kube-api-access-qznjr\") pod \"community-operators-8x64g\" (UID: \"2e759e28-291e-43b4-b856-8c89fd7af5ae\") " pod="openshift-marketplace/community-operators-8x64g" Feb 02 22:31:52 crc kubenswrapper[4789]: I0202 22:31:52.306498 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e759e28-291e-43b4-b856-8c89fd7af5ae-catalog-content\") pod \"community-operators-8x64g\" (UID: \"2e759e28-291e-43b4-b856-8c89fd7af5ae\") " pod="openshift-marketplace/community-operators-8x64g" Feb 02 22:31:52 crc kubenswrapper[4789]: I0202 22:31:52.407688 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e759e28-291e-43b4-b856-8c89fd7af5ae-catalog-content\") pod \"community-operators-8x64g\" (UID: \"2e759e28-291e-43b4-b856-8c89fd7af5ae\") " pod="openshift-marketplace/community-operators-8x64g" Feb 02 22:31:52 crc kubenswrapper[4789]: I0202 22:31:52.408085 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e759e28-291e-43b4-b856-8c89fd7af5ae-utilities\") pod \"community-operators-8x64g\" (UID: \"2e759e28-291e-43b4-b856-8c89fd7af5ae\") " pod="openshift-marketplace/community-operators-8x64g" Feb 02 22:31:52 crc kubenswrapper[4789]: I0202 22:31:52.408183 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qznjr\" (UniqueName: \"kubernetes.io/projected/2e759e28-291e-43b4-b856-8c89fd7af5ae-kube-api-access-qznjr\") pod \"community-operators-8x64g\" (UID: \"2e759e28-291e-43b4-b856-8c89fd7af5ae\") " pod="openshift-marketplace/community-operators-8x64g" Feb 02 22:31:52 crc kubenswrapper[4789]: I0202 22:31:52.408511 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e759e28-291e-43b4-b856-8c89fd7af5ae-catalog-content\") pod \"community-operators-8x64g\" (UID: \"2e759e28-291e-43b4-b856-8c89fd7af5ae\") " pod="openshift-marketplace/community-operators-8x64g" Feb 02 22:31:52 crc kubenswrapper[4789]: I0202 22:31:52.408550 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/2e759e28-291e-43b4-b856-8c89fd7af5ae-utilities\") pod \"community-operators-8x64g\" (UID: \"2e759e28-291e-43b4-b856-8c89fd7af5ae\") " pod="openshift-marketplace/community-operators-8x64g" Feb 02 22:31:52 crc kubenswrapper[4789]: I0202 22:31:52.433430 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qznjr\" (UniqueName: \"kubernetes.io/projected/2e759e28-291e-43b4-b856-8c89fd7af5ae-kube-api-access-qznjr\") pod \"community-operators-8x64g\" (UID: \"2e759e28-291e-43b4-b856-8c89fd7af5ae\") " pod="openshift-marketplace/community-operators-8x64g" Feb 02 22:31:52 crc kubenswrapper[4789]: I0202 22:31:52.493111 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8x64g" Feb 02 22:31:52 crc kubenswrapper[4789]: I0202 22:31:52.790835 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8x64g"] Feb 02 22:31:52 crc kubenswrapper[4789]: I0202 22:31:52.842110 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 22:31:52 crc kubenswrapper[4789]: I0202 22:31:52.842153 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 22:31:53 crc kubenswrapper[4789]: I0202 22:31:53.026835 4789 generic.go:334] "Generic (PLEG): container finished" podID="2e759e28-291e-43b4-b856-8c89fd7af5ae" containerID="3ae47a4f685e98cf95ca27a335225a0ec307032221f070a0192e2bf9c1650ce0" exitCode=0 Feb 02 22:31:53 crc kubenswrapper[4789]: I0202 22:31:53.026873 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8x64g" event={"ID":"2e759e28-291e-43b4-b856-8c89fd7af5ae","Type":"ContainerDied","Data":"3ae47a4f685e98cf95ca27a335225a0ec307032221f070a0192e2bf9c1650ce0"} Feb 02 22:31:53 crc kubenswrapper[4789]: I0202 22:31:53.026915 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8x64g" event={"ID":"2e759e28-291e-43b4-b856-8c89fd7af5ae","Type":"ContainerStarted","Data":"b239b10c5719b6580b4cd6d7406b7c74f951092f184bc54e8fb5e71f70c24b64"} Feb 02 22:31:58 crc kubenswrapper[4789]: I0202 22:31:58.061729 4789 generic.go:334] "Generic (PLEG): container finished" podID="2e759e28-291e-43b4-b856-8c89fd7af5ae" containerID="30fc03a36a79867a651043ffb1d645531ba62a36081f804e6e4a1660e07f7ce7" exitCode=0 Feb 02 22:31:58 crc kubenswrapper[4789]: I0202 22:31:58.061781 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8x64g" event={"ID":"2e759e28-291e-43b4-b856-8c89fd7af5ae","Type":"ContainerDied","Data":"30fc03a36a79867a651043ffb1d645531ba62a36081f804e6e4a1660e07f7ce7"} Feb 02 22:31:59 crc kubenswrapper[4789]: I0202 22:31:59.073450 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8x64g" event={"ID":"2e759e28-291e-43b4-b856-8c89fd7af5ae","Type":"ContainerStarted","Data":"9d5b2a32587e6c15d6320d455079d36cfb4c77232ee37d3ebb62b3e909372d59"} Feb 02 22:31:59 crc 
Feb 02 22:31:59 crc kubenswrapper[4789]: I0202 22:31:59.100635 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8x64g" podStartSLOduration=1.6525011219999999 podStartE2EDuration="7.100618065s" podCreationTimestamp="2026-02-02 22:31:52 +0000 UTC" firstStartedPulling="2026-02-02 22:31:53.028083501 +0000 UTC m=+4333.323108530" lastFinishedPulling="2026-02-02 22:31:58.476200424 +0000 UTC m=+4338.771225473" observedRunningTime="2026-02-02 22:31:59.096521679 +0000 UTC m=+4339.391546708" watchObservedRunningTime="2026-02-02 22:31:59.100618065 +0000 UTC m=+4339.395643094"
Feb 02 22:32:02 crc kubenswrapper[4789]: I0202 22:32:02.494230 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8x64g"
Feb 02 22:32:02 crc kubenswrapper[4789]: I0202 22:32:02.494781 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8x64g"
Feb 02 22:32:02 crc kubenswrapper[4789]: I0202 22:32:02.570376 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8x64g"
Feb 02 22:32:03 crc kubenswrapper[4789]: I0202 22:32:03.171319 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8x64g"
Feb 02 22:32:03 crc kubenswrapper[4789]: I0202 22:32:03.283078 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8x64g"]
Feb 02 22:32:03 crc kubenswrapper[4789]: I0202 22:32:03.328460 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jm6mv"]
Feb 02 22:32:03 crc kubenswrapper[4789]: I0202 22:32:03.328760 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jm6mv" podUID="6abf2c76-f615-4359-b413-545477a9a5c9" containerName="registry-server" containerID="cri-o://87b95b762c6f8fa9f68b02262fb26147d9e3bc99f8a9d2f30d80233406d519bf" gracePeriod=2
Feb 02 22:32:03 crc kubenswrapper[4789]: I0202 22:32:03.750872 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jm6mv" Feb 02 22:32:03 crc kubenswrapper[4789]: I0202 22:32:03.879758 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6abf2c76-f615-4359-b413-545477a9a5c9-catalog-content\") pod \"6abf2c76-f615-4359-b413-545477a9a5c9\" (UID: \"6abf2c76-f615-4359-b413-545477a9a5c9\") " Feb 02 22:32:03 crc kubenswrapper[4789]: I0202 22:32:03.879848 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6abf2c76-f615-4359-b413-545477a9a5c9-utilities\") pod \"6abf2c76-f615-4359-b413-545477a9a5c9\" (UID: \"6abf2c76-f615-4359-b413-545477a9a5c9\") " Feb 02 22:32:03 crc kubenswrapper[4789]: I0202 22:32:03.879870 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n85ph\" (UniqueName: \"kubernetes.io/projected/6abf2c76-f615-4359-b413-545477a9a5c9-kube-api-access-n85ph\") pod \"6abf2c76-f615-4359-b413-545477a9a5c9\" (UID: \"6abf2c76-f615-4359-b413-545477a9a5c9\") " Feb 02 22:32:03 crc kubenswrapper[4789]: I0202 22:32:03.881373 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6abf2c76-f615-4359-b413-545477a9a5c9-utilities" (OuterVolumeSpecName: "utilities") pod "6abf2c76-f615-4359-b413-545477a9a5c9" (UID: "6abf2c76-f615-4359-b413-545477a9a5c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 22:32:03 crc kubenswrapper[4789]: I0202 22:32:03.888439 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6abf2c76-f615-4359-b413-545477a9a5c9-kube-api-access-n85ph" (OuterVolumeSpecName: "kube-api-access-n85ph") pod "6abf2c76-f615-4359-b413-545477a9a5c9" (UID: "6abf2c76-f615-4359-b413-545477a9a5c9"). InnerVolumeSpecName "kube-api-access-n85ph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 22:32:03 crc kubenswrapper[4789]: I0202 22:32:03.931544 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6abf2c76-f615-4359-b413-545477a9a5c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6abf2c76-f615-4359-b413-545477a9a5c9" (UID: "6abf2c76-f615-4359-b413-545477a9a5c9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 22:32:03 crc kubenswrapper[4789]: I0202 22:32:03.981555 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6abf2c76-f615-4359-b413-545477a9a5c9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 22:32:03 crc kubenswrapper[4789]: I0202 22:32:03.981605 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6abf2c76-f615-4359-b413-545477a9a5c9-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 22:32:03 crc kubenswrapper[4789]: I0202 22:32:03.981615 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n85ph\" (UniqueName: \"kubernetes.io/projected/6abf2c76-f615-4359-b413-545477a9a5c9-kube-api-access-n85ph\") on node \"crc\" DevicePath \"\"" Feb 02 22:32:04 crc kubenswrapper[4789]: I0202 22:32:04.111211 4789 generic.go:334] "Generic (PLEG): container finished" podID="6abf2c76-f615-4359-b413-545477a9a5c9" containerID="87b95b762c6f8fa9f68b02262fb26147d9e3bc99f8a9d2f30d80233406d519bf" exitCode=0 Feb 02 22:32:04 crc kubenswrapper[4789]: I0202 22:32:04.111345 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jm6mv" Feb 02 22:32:04 crc kubenswrapper[4789]: I0202 22:32:04.111397 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jm6mv" event={"ID":"6abf2c76-f615-4359-b413-545477a9a5c9","Type":"ContainerDied","Data":"87b95b762c6f8fa9f68b02262fb26147d9e3bc99f8a9d2f30d80233406d519bf"} Feb 02 22:32:04 crc kubenswrapper[4789]: I0202 22:32:04.111436 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jm6mv" event={"ID":"6abf2c76-f615-4359-b413-545477a9a5c9","Type":"ContainerDied","Data":"f745b3f4cef54c01fdff2eb9a760889cf207f72eb1a7f6270e570650629964c9"} Feb 02 22:32:04 crc kubenswrapper[4789]: I0202 22:32:04.111456 4789 scope.go:117] "RemoveContainer" containerID="87b95b762c6f8fa9f68b02262fb26147d9e3bc99f8a9d2f30d80233406d519bf" Feb 02 22:32:04 crc kubenswrapper[4789]: I0202 22:32:04.130279 4789 scope.go:117] "RemoveContainer" containerID="5c2484f5063ecab6568c41fb39496fdcf797cdc7932c41ee4c1803f07da13201" Feb 02 22:32:04 crc kubenswrapper[4789]: I0202 22:32:04.153377 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jm6mv"] Feb 02 22:32:04 crc kubenswrapper[4789]: I0202 22:32:04.156431 4789 scope.go:117] "RemoveContainer" containerID="d0ffeec50a30a5582a7ba2c0bb5b150ffbc05aabccf30e8b895a7935710ac3cf" Feb 02 22:32:04 crc kubenswrapper[4789]: I0202 22:32:04.158117 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jm6mv"] Feb 02 22:32:04 crc kubenswrapper[4789]: I0202 22:32:04.186444 4789 scope.go:117] "RemoveContainer" containerID="87b95b762c6f8fa9f68b02262fb26147d9e3bc99f8a9d2f30d80233406d519bf" Feb 02 22:32:04 crc kubenswrapper[4789]: E0202 22:32:04.188206 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87b95b762c6f8fa9f68b02262fb26147d9e3bc99f8a9d2f30d80233406d519bf\": container with ID starting with 87b95b762c6f8fa9f68b02262fb26147d9e3bc99f8a9d2f30d80233406d519bf not found: ID does not exist" containerID="87b95b762c6f8fa9f68b02262fb26147d9e3bc99f8a9d2f30d80233406d519bf" Feb 02 22:32:04 crc kubenswrapper[4789]: I0202 22:32:04.188260 
4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87b95b762c6f8fa9f68b02262fb26147d9e3bc99f8a9d2f30d80233406d519bf"} err="failed to get container status \"87b95b762c6f8fa9f68b02262fb26147d9e3bc99f8a9d2f30d80233406d519bf\": rpc error: code = NotFound desc = could not find container \"87b95b762c6f8fa9f68b02262fb26147d9e3bc99f8a9d2f30d80233406d519bf\": container with ID starting with 87b95b762c6f8fa9f68b02262fb26147d9e3bc99f8a9d2f30d80233406d519bf not found: ID does not exist"
Feb 02 22:32:04 crc kubenswrapper[4789]: I0202 22:32:04.188295 4789 scope.go:117] "RemoveContainer" containerID="5c2484f5063ecab6568c41fb39496fdcf797cdc7932c41ee4c1803f07da13201"
Feb 02 22:32:04 crc kubenswrapper[4789]: E0202 22:32:04.218161 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c2484f5063ecab6568c41fb39496fdcf797cdc7932c41ee4c1803f07da13201\": container with ID starting with 5c2484f5063ecab6568c41fb39496fdcf797cdc7932c41ee4c1803f07da13201 not found: ID does not exist" containerID="5c2484f5063ecab6568c41fb39496fdcf797cdc7932c41ee4c1803f07da13201"
Feb 02 22:32:04 crc kubenswrapper[4789]: I0202 22:32:04.218242 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c2484f5063ecab6568c41fb39496fdcf797cdc7932c41ee4c1803f07da13201"} err="failed to get container status \"5c2484f5063ecab6568c41fb39496fdcf797cdc7932c41ee4c1803f07da13201\": rpc error: code = NotFound desc = could not find container \"5c2484f5063ecab6568c41fb39496fdcf797cdc7932c41ee4c1803f07da13201\": container with ID starting with 5c2484f5063ecab6568c41fb39496fdcf797cdc7932c41ee4c1803f07da13201 not found: ID does not exist"
Feb 02 22:32:04 crc kubenswrapper[4789]: I0202 22:32:04.218269 4789 scope.go:117] "RemoveContainer" containerID="d0ffeec50a30a5582a7ba2c0bb5b150ffbc05aabccf30e8b895a7935710ac3cf"
Feb 02 22:32:04 crc kubenswrapper[4789]: E0202 22:32:04.218696 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0ffeec50a30a5582a7ba2c0bb5b150ffbc05aabccf30e8b895a7935710ac3cf\": container with ID starting with d0ffeec50a30a5582a7ba2c0bb5b150ffbc05aabccf30e8b895a7935710ac3cf not found: ID does not exist" containerID="d0ffeec50a30a5582a7ba2c0bb5b150ffbc05aabccf30e8b895a7935710ac3cf"
Feb 02 22:32:04 crc kubenswrapper[4789]: I0202 22:32:04.218726 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0ffeec50a30a5582a7ba2c0bb5b150ffbc05aabccf30e8b895a7935710ac3cf"} err="failed to get container status \"d0ffeec50a30a5582a7ba2c0bb5b150ffbc05aabccf30e8b895a7935710ac3cf\": rpc error: code = NotFound desc = could not find container \"d0ffeec50a30a5582a7ba2c0bb5b150ffbc05aabccf30e8b895a7935710ac3cf\": container with ID starting with d0ffeec50a30a5582a7ba2c0bb5b150ffbc05aabccf30e8b895a7935710ac3cf not found: ID does not exist"
Feb 02 22:32:04 crc kubenswrapper[4789]: I0202 22:32:04.430073 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6abf2c76-f615-4359-b413-545477a9a5c9" path="/var/lib/kubelet/pods/6abf2c76-f615-4359-b413-545477a9a5c9/volumes"
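The entries that follow show the kubelet acting on the failing liveness probe: a GET against http://127.0.0.1:8798/health has been refused every 30 seconds, so machine-config-daemon is killed and restarted (the gracePeriod=600 in the kill entry reflects the pod's termination grace period, not a probe field). A hypothetical reconstruction of a probe spec that would drive this behavior, using the k8s.io/api types; the host, port, and path are read off the log, while the period and threshold are assumptions:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	probe := corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Host:   "127.0.0.1", // endpoint seen in the failure output above
				Path:   "/health",
				Port:   intstr.FromInt(8798),
				Scheme: corev1.URISchemeHTTP,
			},
		},
		PeriodSeconds:    30, // the logged failures arrive 30s apart
		FailureThreshold: 3,  // assumed; exceeded at 22:32:22, triggering the restart below
	}
	fmt.Printf("liveness: GET http://%s:%s%s every %ds\n",
		probe.HTTPGet.Host, probe.HTTPGet.Port.String(), probe.HTTPGet.Path, probe.PeriodSeconds)
}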
Feb 02 22:32:22 crc kubenswrapper[4789]: I0202 22:32:22.842001 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 22:32:22 crc kubenswrapper[4789]: I0202 22:32:22.843882 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 22:32:22 crc kubenswrapper[4789]: I0202 22:32:22.844142 4789 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn"
Feb 02 22:32:22 crc kubenswrapper[4789]: I0202 22:32:22.844977 4789 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3cbc0891eb0dc9eeb16bb56bec643de77ebfd7d6061e907d3c7af4e6dc51d56b"} pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 22:32:22 crc kubenswrapper[4789]: I0202 22:32:22.845156 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" containerID="cri-o://3cbc0891eb0dc9eeb16bb56bec643de77ebfd7d6061e907d3c7af4e6dc51d56b" gracePeriod=600
Feb 02 22:32:23 crc kubenswrapper[4789]: I0202 22:32:23.306918 4789 generic.go:334] "Generic (PLEG): container finished" podID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerID="3cbc0891eb0dc9eeb16bb56bec643de77ebfd7d6061e907d3c7af4e6dc51d56b" exitCode=0
Feb 02 22:32:23 crc kubenswrapper[4789]: I0202 22:32:23.306999 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" event={"ID":"bdf018b4-1451-4d37-be6e-05802b67c73e","Type":"ContainerDied","Data":"3cbc0891eb0dc9eeb16bb56bec643de77ebfd7d6061e907d3c7af4e6dc51d56b"}
Feb 02 22:32:23 crc kubenswrapper[4789]: I0202 22:32:23.307061 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" event={"ID":"bdf018b4-1451-4d37-be6e-05802b67c73e","Type":"ContainerStarted","Data":"e1882e0fff6a0a98d0a3daff7d66a9d268ab6f6c60c38af019ec759180dddf7d"}
Feb 02 22:32:23 crc kubenswrapper[4789]: I0202 22:32:23.307101 4789 scope.go:117] "RemoveContainer" containerID="1ebe55b80a46ca02419a86f340c3d531948d8c1a4ab746ab97382212887e9c19"
Feb 02 22:34:42 crc kubenswrapper[4789]: I0202 22:34:42.777707 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-ggq8b"]
Feb 02 22:34:42 crc kubenswrapper[4789]: I0202 22:34:42.783255 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-ggq8b"]
Feb 02 22:34:42 crc kubenswrapper[4789]: I0202 22:34:42.907767 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-4r9c8"]
Feb 02 22:34:42 crc kubenswrapper[4789]: E0202 22:34:42.908072 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6abf2c76-f615-4359-b413-545477a9a5c9" containerName="extract-content"
Feb 02 22:34:42 crc kubenswrapper[4789]: I0202 22:34:42.908089 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="6abf2c76-f615-4359-b413-545477a9a5c9" containerName="extract-content"
Feb 02 22:34:42 crc kubenswrapper[4789]: E0202 22:34:42.908109 4789 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="6abf2c76-f615-4359-b413-545477a9a5c9" containerName="extract-utilities" Feb 02 22:34:42 crc kubenswrapper[4789]: I0202 22:34:42.908117 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="6abf2c76-f615-4359-b413-545477a9a5c9" containerName="extract-utilities" Feb 02 22:34:42 crc kubenswrapper[4789]: E0202 22:34:42.908134 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6abf2c76-f615-4359-b413-545477a9a5c9" containerName="registry-server" Feb 02 22:34:42 crc kubenswrapper[4789]: I0202 22:34:42.908143 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="6abf2c76-f615-4359-b413-545477a9a5c9" containerName="registry-server" Feb 02 22:34:42 crc kubenswrapper[4789]: I0202 22:34:42.908337 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="6abf2c76-f615-4359-b413-545477a9a5c9" containerName="registry-server" Feb 02 22:34:42 crc kubenswrapper[4789]: I0202 22:34:42.908915 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-4r9c8" Feb 02 22:34:42 crc kubenswrapper[4789]: I0202 22:34:42.911651 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Feb 02 22:34:42 crc kubenswrapper[4789]: I0202 22:34:42.911805 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Feb 02 22:34:42 crc kubenswrapper[4789]: I0202 22:34:42.912911 4789 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-xrcmg" Feb 02 22:34:42 crc kubenswrapper[4789]: I0202 22:34:42.916177 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Feb 02 22:34:42 crc kubenswrapper[4789]: I0202 22:34:42.948532 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-4r9c8"] Feb 02 22:34:43 crc kubenswrapper[4789]: I0202 22:34:43.096612 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/aeb4cda7-c4c6-4199-8748-a91b8160adcd-crc-storage\") pod \"crc-storage-crc-4r9c8\" (UID: \"aeb4cda7-c4c6-4199-8748-a91b8160adcd\") " pod="crc-storage/crc-storage-crc-4r9c8" Feb 02 22:34:43 crc kubenswrapper[4789]: I0202 22:34:43.096749 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7z87\" (UniqueName: \"kubernetes.io/projected/aeb4cda7-c4c6-4199-8748-a91b8160adcd-kube-api-access-q7z87\") pod \"crc-storage-crc-4r9c8\" (UID: \"aeb4cda7-c4c6-4199-8748-a91b8160adcd\") " pod="crc-storage/crc-storage-crc-4r9c8" Feb 02 22:34:43 crc kubenswrapper[4789]: I0202 22:34:43.097167 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/aeb4cda7-c4c6-4199-8748-a91b8160adcd-node-mnt\") pod \"crc-storage-crc-4r9c8\" (UID: \"aeb4cda7-c4c6-4199-8748-a91b8160adcd\") " pod="crc-storage/crc-storage-crc-4r9c8" Feb 02 22:34:43 crc kubenswrapper[4789]: I0202 22:34:43.198209 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/aeb4cda7-c4c6-4199-8748-a91b8160adcd-crc-storage\") pod \"crc-storage-crc-4r9c8\" (UID: \"aeb4cda7-c4c6-4199-8748-a91b8160adcd\") " pod="crc-storage/crc-storage-crc-4r9c8" Feb 02 22:34:43 crc kubenswrapper[4789]: I0202 22:34:43.198283 4789 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7z87\" (UniqueName: \"kubernetes.io/projected/aeb4cda7-c4c6-4199-8748-a91b8160adcd-kube-api-access-q7z87\") pod \"crc-storage-crc-4r9c8\" (UID: \"aeb4cda7-c4c6-4199-8748-a91b8160adcd\") " pod="crc-storage/crc-storage-crc-4r9c8" Feb 02 22:34:43 crc kubenswrapper[4789]: I0202 22:34:43.198453 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/aeb4cda7-c4c6-4199-8748-a91b8160adcd-node-mnt\") pod \"crc-storage-crc-4r9c8\" (UID: \"aeb4cda7-c4c6-4199-8748-a91b8160adcd\") " pod="crc-storage/crc-storage-crc-4r9c8" Feb 02 22:34:43 crc kubenswrapper[4789]: I0202 22:34:43.198926 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/aeb4cda7-c4c6-4199-8748-a91b8160adcd-node-mnt\") pod \"crc-storage-crc-4r9c8\" (UID: \"aeb4cda7-c4c6-4199-8748-a91b8160adcd\") " pod="crc-storage/crc-storage-crc-4r9c8" Feb 02 22:34:43 crc kubenswrapper[4789]: I0202 22:34:43.199486 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/aeb4cda7-c4c6-4199-8748-a91b8160adcd-crc-storage\") pod \"crc-storage-crc-4r9c8\" (UID: \"aeb4cda7-c4c6-4199-8748-a91b8160adcd\") " pod="crc-storage/crc-storage-crc-4r9c8" Feb 02 22:34:43 crc kubenswrapper[4789]: I0202 22:34:43.229458 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7z87\" (UniqueName: \"kubernetes.io/projected/aeb4cda7-c4c6-4199-8748-a91b8160adcd-kube-api-access-q7z87\") pod \"crc-storage-crc-4r9c8\" (UID: \"aeb4cda7-c4c6-4199-8748-a91b8160adcd\") " pod="crc-storage/crc-storage-crc-4r9c8" Feb 02 22:34:43 crc kubenswrapper[4789]: I0202 22:34:43.244106 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-4r9c8" Feb 02 22:34:43 crc kubenswrapper[4789]: I0202 22:34:43.521219 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-4r9c8"] Feb 02 22:34:43 crc kubenswrapper[4789]: I0202 22:34:43.527287 4789 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 22:34:43 crc kubenswrapper[4789]: I0202 22:34:43.641915 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-4r9c8" event={"ID":"aeb4cda7-c4c6-4199-8748-a91b8160adcd","Type":"ContainerStarted","Data":"0b38a0ee9d34cb8a376a9b19eafd675df7f6acfa83bb9ff56d7bb878fe59400a"} Feb 02 22:34:44 crc kubenswrapper[4789]: I0202 22:34:44.435954 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52782065-ac4f-47c2-9c7e-fd883fba3d65" path="/var/lib/kubelet/pods/52782065-ac4f-47c2-9c7e-fd883fba3d65/volumes" Feb 02 22:34:44 crc kubenswrapper[4789]: I0202 22:34:44.652316 4789 generic.go:334] "Generic (PLEG): container finished" podID="aeb4cda7-c4c6-4199-8748-a91b8160adcd" containerID="197151257b67b3a9df95a4f148a75a76f2992f304c900f1cfc646fe766a581bc" exitCode=0 Feb 02 22:34:44 crc kubenswrapper[4789]: I0202 22:34:44.652390 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-4r9c8" event={"ID":"aeb4cda7-c4c6-4199-8748-a91b8160adcd","Type":"ContainerDied","Data":"197151257b67b3a9df95a4f148a75a76f2992f304c900f1cfc646fe766a581bc"} Feb 02 22:34:46 crc kubenswrapper[4789]: I0202 22:34:46.162855 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-4r9c8" Feb 02 22:34:46 crc kubenswrapper[4789]: I0202 22:34:46.269097 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/aeb4cda7-c4c6-4199-8748-a91b8160adcd-crc-storage\") pod \"aeb4cda7-c4c6-4199-8748-a91b8160adcd\" (UID: \"aeb4cda7-c4c6-4199-8748-a91b8160adcd\") " Feb 02 22:34:46 crc kubenswrapper[4789]: I0202 22:34:46.269170 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/aeb4cda7-c4c6-4199-8748-a91b8160adcd-node-mnt\") pod \"aeb4cda7-c4c6-4199-8748-a91b8160adcd\" (UID: \"aeb4cda7-c4c6-4199-8748-a91b8160adcd\") " Feb 02 22:34:46 crc kubenswrapper[4789]: I0202 22:34:46.269346 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aeb4cda7-c4c6-4199-8748-a91b8160adcd-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "aeb4cda7-c4c6-4199-8748-a91b8160adcd" (UID: "aeb4cda7-c4c6-4199-8748-a91b8160adcd"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 22:34:46 crc kubenswrapper[4789]: I0202 22:34:46.269463 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7z87\" (UniqueName: \"kubernetes.io/projected/aeb4cda7-c4c6-4199-8748-a91b8160adcd-kube-api-access-q7z87\") pod \"aeb4cda7-c4c6-4199-8748-a91b8160adcd\" (UID: \"aeb4cda7-c4c6-4199-8748-a91b8160adcd\") " Feb 02 22:34:46 crc kubenswrapper[4789]: I0202 22:34:46.271040 4789 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/aeb4cda7-c4c6-4199-8748-a91b8160adcd-node-mnt\") on node \"crc\" DevicePath \"\"" Feb 02 22:34:46 crc kubenswrapper[4789]: I0202 22:34:46.277081 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aeb4cda7-c4c6-4199-8748-a91b8160adcd-kube-api-access-q7z87" (OuterVolumeSpecName: "kube-api-access-q7z87") pod "aeb4cda7-c4c6-4199-8748-a91b8160adcd" (UID: "aeb4cda7-c4c6-4199-8748-a91b8160adcd"). InnerVolumeSpecName "kube-api-access-q7z87". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 22:34:46 crc kubenswrapper[4789]: I0202 22:34:46.300231 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aeb4cda7-c4c6-4199-8748-a91b8160adcd-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "aeb4cda7-c4c6-4199-8748-a91b8160adcd" (UID: "aeb4cda7-c4c6-4199-8748-a91b8160adcd"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 22:34:46 crc kubenswrapper[4789]: I0202 22:34:46.372326 4789 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/aeb4cda7-c4c6-4199-8748-a91b8160adcd-crc-storage\") on node \"crc\" DevicePath \"\"" Feb 02 22:34:46 crc kubenswrapper[4789]: I0202 22:34:46.372358 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7z87\" (UniqueName: \"kubernetes.io/projected/aeb4cda7-c4c6-4199-8748-a91b8160adcd-kube-api-access-q7z87\") on node \"crc\" DevicePath \"\"" Feb 02 22:34:46 crc kubenswrapper[4789]: I0202 22:34:46.670728 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-4r9c8" event={"ID":"aeb4cda7-c4c6-4199-8748-a91b8160adcd","Type":"ContainerDied","Data":"0b38a0ee9d34cb8a376a9b19eafd675df7f6acfa83bb9ff56d7bb878fe59400a"} Feb 02 22:34:46 crc kubenswrapper[4789]: I0202 22:34:46.670765 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b38a0ee9d34cb8a376a9b19eafd675df7f6acfa83bb9ff56d7bb878fe59400a" Feb 02 22:34:46 crc kubenswrapper[4789]: I0202 22:34:46.670836 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-4r9c8" Feb 02 22:34:47 crc kubenswrapper[4789]: I0202 22:34:47.472063 4789 scope.go:117] "RemoveContainer" containerID="b9abf8df79433fbb9c146f9fb3a16666d748aadc6550e5f4e2dae2aeacb72a0a" Feb 02 22:34:48 crc kubenswrapper[4789]: I0202 22:34:48.719948 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-4r9c8"] Feb 02 22:34:48 crc kubenswrapper[4789]: I0202 22:34:48.729457 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-4r9c8"] Feb 02 22:34:48 crc kubenswrapper[4789]: I0202 22:34:48.919232 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-lwsm7"] Feb 02 22:34:48 crc kubenswrapper[4789]: E0202 22:34:48.919891 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeb4cda7-c4c6-4199-8748-a91b8160adcd" containerName="storage" Feb 02 22:34:48 crc kubenswrapper[4789]: I0202 22:34:48.919923 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeb4cda7-c4c6-4199-8748-a91b8160adcd" containerName="storage" Feb 02 22:34:48 crc kubenswrapper[4789]: I0202 22:34:48.920271 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="aeb4cda7-c4c6-4199-8748-a91b8160adcd" containerName="storage" Feb 02 22:34:48 crc kubenswrapper[4789]: I0202 22:34:48.921382 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-lwsm7" Feb 02 22:34:48 crc kubenswrapper[4789]: I0202 22:34:48.924964 4789 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-xrcmg" Feb 02 22:34:48 crc kubenswrapper[4789]: I0202 22:34:48.925680 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Feb 02 22:34:48 crc kubenswrapper[4789]: I0202 22:34:48.926332 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Feb 02 22:34:48 crc kubenswrapper[4789]: I0202 22:34:48.931358 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-lwsm7"] Feb 02 22:34:48 crc kubenswrapper[4789]: I0202 22:34:48.931528 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Feb 02 22:34:49 crc kubenswrapper[4789]: I0202 22:34:49.015284 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3e4478c3-63ed-41ce-9f32-c249859e2235-crc-storage\") pod \"crc-storage-crc-lwsm7\" (UID: \"3e4478c3-63ed-41ce-9f32-c249859e2235\") " pod="crc-storage/crc-storage-crc-lwsm7" Feb 02 22:34:49 crc kubenswrapper[4789]: I0202 22:34:49.015572 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m7s8\" (UniqueName: \"kubernetes.io/projected/3e4478c3-63ed-41ce-9f32-c249859e2235-kube-api-access-4m7s8\") pod \"crc-storage-crc-lwsm7\" (UID: \"3e4478c3-63ed-41ce-9f32-c249859e2235\") " pod="crc-storage/crc-storage-crc-lwsm7" Feb 02 22:34:49 crc kubenswrapper[4789]: I0202 22:34:49.015718 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3e4478c3-63ed-41ce-9f32-c249859e2235-node-mnt\") pod \"crc-storage-crc-lwsm7\" (UID: \"3e4478c3-63ed-41ce-9f32-c249859e2235\") " pod="crc-storage/crc-storage-crc-lwsm7" Feb 02 22:34:49 crc kubenswrapper[4789]: I0202 22:34:49.117348 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3e4478c3-63ed-41ce-9f32-c249859e2235-crc-storage\") pod \"crc-storage-crc-lwsm7\" (UID: \"3e4478c3-63ed-41ce-9f32-c249859e2235\") " pod="crc-storage/crc-storage-crc-lwsm7" Feb 02 22:34:49 crc kubenswrapper[4789]: I0202 22:34:49.117519 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m7s8\" (UniqueName: \"kubernetes.io/projected/3e4478c3-63ed-41ce-9f32-c249859e2235-kube-api-access-4m7s8\") pod \"crc-storage-crc-lwsm7\" (UID: \"3e4478c3-63ed-41ce-9f32-c249859e2235\") " pod="crc-storage/crc-storage-crc-lwsm7" Feb 02 22:34:49 crc kubenswrapper[4789]: I0202 22:34:49.117559 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3e4478c3-63ed-41ce-9f32-c249859e2235-node-mnt\") pod \"crc-storage-crc-lwsm7\" (UID: \"3e4478c3-63ed-41ce-9f32-c249859e2235\") " pod="crc-storage/crc-storage-crc-lwsm7" Feb 02 22:34:49 crc kubenswrapper[4789]: I0202 22:34:49.118093 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3e4478c3-63ed-41ce-9f32-c249859e2235-node-mnt\") pod \"crc-storage-crc-lwsm7\" (UID: \"3e4478c3-63ed-41ce-9f32-c249859e2235\") " 
pod="crc-storage/crc-storage-crc-lwsm7" Feb 02 22:34:49 crc kubenswrapper[4789]: I0202 22:34:49.119139 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3e4478c3-63ed-41ce-9f32-c249859e2235-crc-storage\") pod \"crc-storage-crc-lwsm7\" (UID: \"3e4478c3-63ed-41ce-9f32-c249859e2235\") " pod="crc-storage/crc-storage-crc-lwsm7" Feb 02 22:34:49 crc kubenswrapper[4789]: I0202 22:34:49.153321 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m7s8\" (UniqueName: \"kubernetes.io/projected/3e4478c3-63ed-41ce-9f32-c249859e2235-kube-api-access-4m7s8\") pod \"crc-storage-crc-lwsm7\" (UID: \"3e4478c3-63ed-41ce-9f32-c249859e2235\") " pod="crc-storage/crc-storage-crc-lwsm7" Feb 02 22:34:49 crc kubenswrapper[4789]: I0202 22:34:49.253799 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-lwsm7" Feb 02 22:34:49 crc kubenswrapper[4789]: I0202 22:34:49.780005 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-lwsm7"] Feb 02 22:34:50 crc kubenswrapper[4789]: I0202 22:34:50.443895 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aeb4cda7-c4c6-4199-8748-a91b8160adcd" path="/var/lib/kubelet/pods/aeb4cda7-c4c6-4199-8748-a91b8160adcd/volumes" Feb 02 22:34:50 crc kubenswrapper[4789]: I0202 22:34:50.707389 4789 generic.go:334] "Generic (PLEG): container finished" podID="3e4478c3-63ed-41ce-9f32-c249859e2235" containerID="cccdd3a146b065a4aedf6975ceaae44e4236a164fd7e9b1e603eae240877cc03" exitCode=0 Feb 02 22:34:50 crc kubenswrapper[4789]: I0202 22:34:50.707471 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-lwsm7" event={"ID":"3e4478c3-63ed-41ce-9f32-c249859e2235","Type":"ContainerDied","Data":"cccdd3a146b065a4aedf6975ceaae44e4236a164fd7e9b1e603eae240877cc03"} Feb 02 22:34:50 crc kubenswrapper[4789]: I0202 22:34:50.707809 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-lwsm7" event={"ID":"3e4478c3-63ed-41ce-9f32-c249859e2235","Type":"ContainerStarted","Data":"3c81c0d0e8248720bb2dfd777f22b033c839085440525933ea4b6d5e1fa919e9"} Feb 02 22:34:52 crc kubenswrapper[4789]: I0202 22:34:52.091447 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-lwsm7" Feb 02 22:34:52 crc kubenswrapper[4789]: I0202 22:34:52.164741 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3e4478c3-63ed-41ce-9f32-c249859e2235-node-mnt\") pod \"3e4478c3-63ed-41ce-9f32-c249859e2235\" (UID: \"3e4478c3-63ed-41ce-9f32-c249859e2235\") " Feb 02 22:34:52 crc kubenswrapper[4789]: I0202 22:34:52.164873 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3e4478c3-63ed-41ce-9f32-c249859e2235-crc-storage\") pod \"3e4478c3-63ed-41ce-9f32-c249859e2235\" (UID: \"3e4478c3-63ed-41ce-9f32-c249859e2235\") " Feb 02 22:34:52 crc kubenswrapper[4789]: I0202 22:34:52.164880 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e4478c3-63ed-41ce-9f32-c249859e2235-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "3e4478c3-63ed-41ce-9f32-c249859e2235" (UID: "3e4478c3-63ed-41ce-9f32-c249859e2235"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 22:34:52 crc kubenswrapper[4789]: I0202 22:34:52.165819 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4m7s8\" (UniqueName: \"kubernetes.io/projected/3e4478c3-63ed-41ce-9f32-c249859e2235-kube-api-access-4m7s8\") pod \"3e4478c3-63ed-41ce-9f32-c249859e2235\" (UID: \"3e4478c3-63ed-41ce-9f32-c249859e2235\") " Feb 02 22:34:52 crc kubenswrapper[4789]: I0202 22:34:52.166083 4789 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3e4478c3-63ed-41ce-9f32-c249859e2235-node-mnt\") on node \"crc\" DevicePath \"\"" Feb 02 22:34:52 crc kubenswrapper[4789]: I0202 22:34:52.171024 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e4478c3-63ed-41ce-9f32-c249859e2235-kube-api-access-4m7s8" (OuterVolumeSpecName: "kube-api-access-4m7s8") pod "3e4478c3-63ed-41ce-9f32-c249859e2235" (UID: "3e4478c3-63ed-41ce-9f32-c249859e2235"). InnerVolumeSpecName "kube-api-access-4m7s8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 22:34:52 crc kubenswrapper[4789]: I0202 22:34:52.187241 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e4478c3-63ed-41ce-9f32-c249859e2235-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "3e4478c3-63ed-41ce-9f32-c249859e2235" (UID: "3e4478c3-63ed-41ce-9f32-c249859e2235"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 22:34:52 crc kubenswrapper[4789]: I0202 22:34:52.267265 4789 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3e4478c3-63ed-41ce-9f32-c249859e2235-crc-storage\") on node \"crc\" DevicePath \"\"" Feb 02 22:34:52 crc kubenswrapper[4789]: I0202 22:34:52.267308 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4m7s8\" (UniqueName: \"kubernetes.io/projected/3e4478c3-63ed-41ce-9f32-c249859e2235-kube-api-access-4m7s8\") on node \"crc\" DevicePath \"\"" Feb 02 22:34:52 crc kubenswrapper[4789]: I0202 22:34:52.735189 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-lwsm7" event={"ID":"3e4478c3-63ed-41ce-9f32-c249859e2235","Type":"ContainerDied","Data":"3c81c0d0e8248720bb2dfd777f22b033c839085440525933ea4b6d5e1fa919e9"} Feb 02 22:34:52 crc kubenswrapper[4789]: I0202 22:34:52.735248 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c81c0d0e8248720bb2dfd777f22b033c839085440525933ea4b6d5e1fa919e9" Feb 02 22:34:52 crc kubenswrapper[4789]: I0202 22:34:52.735332 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-lwsm7" Feb 02 22:34:52 crc kubenswrapper[4789]: I0202 22:34:52.842437 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 22:34:52 crc kubenswrapper[4789]: I0202 22:34:52.842519 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 22:35:22 crc kubenswrapper[4789]: I0202 22:35:22.841904 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 22:35:22 crc kubenswrapper[4789]: I0202 22:35:22.844571 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 22:35:47 crc kubenswrapper[4789]: I0202 22:35:47.546571 4789 scope.go:117] "RemoveContainer" containerID="e74b7f0df57cdfb899dfc1a50913d0553361a4f8f9cc86d8b58aa3328ae51dd5" Feb 02 22:35:47 crc kubenswrapper[4789]: I0202 22:35:47.588267 4789 scope.go:117] "RemoveContainer" containerID="2f88ccaec1189862560463e19b015dc56f76d8848a2cd986d86fe393bd21b87f" Feb 02 22:35:47 crc kubenswrapper[4789]: I0202 22:35:47.626240 4789 scope.go:117] "RemoveContainer" containerID="098170e7656ddc608d6737e7d752c2df9cce1945db09bbe5e7705db3260689d4" Feb 02 22:35:52 crc kubenswrapper[4789]: I0202 22:35:52.841493 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 22:35:52 crc kubenswrapper[4789]: I0202 22:35:52.842015 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 22:35:52 crc kubenswrapper[4789]: I0202 22:35:52.842055 4789 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" Feb 02 22:35:52 crc kubenswrapper[4789]: I0202 22:35:52.842498 4789 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e1882e0fff6a0a98d0a3daff7d66a9d268ab6f6c60c38af019ec759180dddf7d"} pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 22:35:52 crc kubenswrapper[4789]: I0202 22:35:52.842548 4789 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" containerID="cri-o://e1882e0fff6a0a98d0a3daff7d66a9d268ab6f6c60c38af019ec759180dddf7d" gracePeriod=600 Feb 02 22:35:52 crc kubenswrapper[4789]: E0202 22:35:52.984142 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:35:53 crc kubenswrapper[4789]: I0202 22:35:53.291308 4789 generic.go:334] "Generic (PLEG): container finished" podID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerID="e1882e0fff6a0a98d0a3daff7d66a9d268ab6f6c60c38af019ec759180dddf7d" exitCode=0 Feb 02 22:35:53 crc kubenswrapper[4789]: I0202 22:35:53.291369 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" event={"ID":"bdf018b4-1451-4d37-be6e-05802b67c73e","Type":"ContainerDied","Data":"e1882e0fff6a0a98d0a3daff7d66a9d268ab6f6c60c38af019ec759180dddf7d"} Feb 02 22:35:53 crc kubenswrapper[4789]: I0202 22:35:53.291472 4789 scope.go:117] "RemoveContainer" containerID="3cbc0891eb0dc9eeb16bb56bec643de77ebfd7d6061e907d3c7af4e6dc51d56b" Feb 02 22:35:53 crc kubenswrapper[4789]: I0202 22:35:53.292346 4789 scope.go:117] "RemoveContainer" containerID="e1882e0fff6a0a98d0a3daff7d66a9d268ab6f6c60c38af019ec759180dddf7d" Feb 02 22:35:53 crc kubenswrapper[4789]: E0202 22:35:53.293123 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:36:07 crc kubenswrapper[4789]: I0202 22:36:07.419142 4789 scope.go:117] "RemoveContainer" containerID="e1882e0fff6a0a98d0a3daff7d66a9d268ab6f6c60c38af019ec759180dddf7d" Feb 02 22:36:07 crc kubenswrapper[4789]: E0202 22:36:07.419971 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:36:18 crc kubenswrapper[4789]: I0202 22:36:18.420480 4789 scope.go:117] "RemoveContainer" containerID="e1882e0fff6a0a98d0a3daff7d66a9d268ab6f6c60c38af019ec759180dddf7d" Feb 02 22:36:18 crc kubenswrapper[4789]: E0202 22:36:18.421620 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:36:29 crc kubenswrapper[4789]: I0202 22:36:29.419996 4789 scope.go:117] "RemoveContainer" containerID="e1882e0fff6a0a98d0a3daff7d66a9d268ab6f6c60c38af019ec759180dddf7d" Feb 02 22:36:29 crc kubenswrapper[4789]: E0202 22:36:29.439026 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:36:41 crc kubenswrapper[4789]: I0202 22:36:41.420565 4789 scope.go:117] "RemoveContainer" containerID="e1882e0fff6a0a98d0a3daff7d66a9d268ab6f6c60c38af019ec759180dddf7d" Feb 02 22:36:41 crc kubenswrapper[4789]: E0202 22:36:41.422019 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:36:55 crc kubenswrapper[4789]: I0202 22:36:55.420334 4789 scope.go:117] "RemoveContainer" containerID="e1882e0fff6a0a98d0a3daff7d66a9d268ab6f6c60c38af019ec759180dddf7d" Feb 02 22:36:55 crc kubenswrapper[4789]: E0202 22:36:55.421309 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:37:08 crc kubenswrapper[4789]: I0202 22:37:08.421859 4789 scope.go:117] "RemoveContainer" containerID="e1882e0fff6a0a98d0a3daff7d66a9d268ab6f6c60c38af019ec759180dddf7d" Feb 02 22:37:08 crc kubenswrapper[4789]: E0202 22:37:08.422870 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:37:09 crc kubenswrapper[4789]: I0202 22:37:09.366856 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ckqm4"] Feb 02 22:37:09 crc kubenswrapper[4789]: E0202 22:37:09.367482 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e4478c3-63ed-41ce-9f32-c249859e2235" containerName="storage" Feb 02 22:37:09 crc kubenswrapper[4789]: I0202 22:37:09.367517 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e4478c3-63ed-41ce-9f32-c249859e2235" containerName="storage" Feb 02 22:37:09 crc kubenswrapper[4789]: I0202 22:37:09.367946 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e4478c3-63ed-41ce-9f32-c249859e2235" containerName="storage" Feb 02 
22:37:09 crc kubenswrapper[4789]: I0202 22:37:09.373871 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ckqm4" Feb 02 22:37:09 crc kubenswrapper[4789]: I0202 22:37:09.379515 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ckqm4"] Feb 02 22:37:09 crc kubenswrapper[4789]: I0202 22:37:09.494687 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df20b5e6-9fb1-4363-b6df-eb756b6e4f4b-catalog-content\") pod \"redhat-operators-ckqm4\" (UID: \"df20b5e6-9fb1-4363-b6df-eb756b6e4f4b\") " pod="openshift-marketplace/redhat-operators-ckqm4" Feb 02 22:37:09 crc kubenswrapper[4789]: I0202 22:37:09.494756 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df20b5e6-9fb1-4363-b6df-eb756b6e4f4b-utilities\") pod \"redhat-operators-ckqm4\" (UID: \"df20b5e6-9fb1-4363-b6df-eb756b6e4f4b\") " pod="openshift-marketplace/redhat-operators-ckqm4" Feb 02 22:37:09 crc kubenswrapper[4789]: I0202 22:37:09.496009 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmxzp\" (UniqueName: \"kubernetes.io/projected/df20b5e6-9fb1-4363-b6df-eb756b6e4f4b-kube-api-access-qmxzp\") pod \"redhat-operators-ckqm4\" (UID: \"df20b5e6-9fb1-4363-b6df-eb756b6e4f4b\") " pod="openshift-marketplace/redhat-operators-ckqm4" Feb 02 22:37:09 crc kubenswrapper[4789]: I0202 22:37:09.600342 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmxzp\" (UniqueName: \"kubernetes.io/projected/df20b5e6-9fb1-4363-b6df-eb756b6e4f4b-kube-api-access-qmxzp\") pod \"redhat-operators-ckqm4\" (UID: \"df20b5e6-9fb1-4363-b6df-eb756b6e4f4b\") " pod="openshift-marketplace/redhat-operators-ckqm4" Feb 02 22:37:09 crc kubenswrapper[4789]: I0202 22:37:09.600462 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df20b5e6-9fb1-4363-b6df-eb756b6e4f4b-catalog-content\") pod \"redhat-operators-ckqm4\" (UID: \"df20b5e6-9fb1-4363-b6df-eb756b6e4f4b\") " pod="openshift-marketplace/redhat-operators-ckqm4" Feb 02 22:37:09 crc kubenswrapper[4789]: I0202 22:37:09.600498 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df20b5e6-9fb1-4363-b6df-eb756b6e4f4b-utilities\") pod \"redhat-operators-ckqm4\" (UID: \"df20b5e6-9fb1-4363-b6df-eb756b6e4f4b\") " pod="openshift-marketplace/redhat-operators-ckqm4" Feb 02 22:37:09 crc kubenswrapper[4789]: I0202 22:37:09.601741 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df20b5e6-9fb1-4363-b6df-eb756b6e4f4b-catalog-content\") pod \"redhat-operators-ckqm4\" (UID: \"df20b5e6-9fb1-4363-b6df-eb756b6e4f4b\") " pod="openshift-marketplace/redhat-operators-ckqm4" Feb 02 22:37:09 crc kubenswrapper[4789]: I0202 22:37:09.601799 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df20b5e6-9fb1-4363-b6df-eb756b6e4f4b-utilities\") pod \"redhat-operators-ckqm4\" (UID: \"df20b5e6-9fb1-4363-b6df-eb756b6e4f4b\") " pod="openshift-marketplace/redhat-operators-ckqm4" Feb 02 22:37:09 crc kubenswrapper[4789]: 
I0202 22:37:09.634747 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmxzp\" (UniqueName: \"kubernetes.io/projected/df20b5e6-9fb1-4363-b6df-eb756b6e4f4b-kube-api-access-qmxzp\") pod \"redhat-operators-ckqm4\" (UID: \"df20b5e6-9fb1-4363-b6df-eb756b6e4f4b\") " pod="openshift-marketplace/redhat-operators-ckqm4" Feb 02 22:37:09 crc kubenswrapper[4789]: I0202 22:37:09.709726 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ckqm4" Feb 02 22:37:10 crc kubenswrapper[4789]: I0202 22:37:10.128738 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ckqm4"] Feb 02 22:37:10 crc kubenswrapper[4789]: W0202 22:37:10.134900 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf20b5e6_9fb1_4363_b6df_eb756b6e4f4b.slice/crio-656cd0f339b0c102fb8416822abba78141aa4aea16089e541185728cc0b1944f WatchSource:0}: Error finding container 656cd0f339b0c102fb8416822abba78141aa4aea16089e541185728cc0b1944f: Status 404 returned error can't find the container with id 656cd0f339b0c102fb8416822abba78141aa4aea16089e541185728cc0b1944f Feb 02 22:37:11 crc kubenswrapper[4789]: I0202 22:37:11.033345 4789 generic.go:334] "Generic (PLEG): container finished" podID="df20b5e6-9fb1-4363-b6df-eb756b6e4f4b" containerID="7cc3c882a7a04de4760cccb7b72f574852982eed1a7c48a25cc557ecccbf7a22" exitCode=0 Feb 02 22:37:11 crc kubenswrapper[4789]: I0202 22:37:11.033408 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ckqm4" event={"ID":"df20b5e6-9fb1-4363-b6df-eb756b6e4f4b","Type":"ContainerDied","Data":"7cc3c882a7a04de4760cccb7b72f574852982eed1a7c48a25cc557ecccbf7a22"} Feb 02 22:37:11 crc kubenswrapper[4789]: I0202 22:37:11.033907 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ckqm4" event={"ID":"df20b5e6-9fb1-4363-b6df-eb756b6e4f4b","Type":"ContainerStarted","Data":"656cd0f339b0c102fb8416822abba78141aa4aea16089e541185728cc0b1944f"} Feb 02 22:37:12 crc kubenswrapper[4789]: I0202 22:37:12.047431 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ckqm4" event={"ID":"df20b5e6-9fb1-4363-b6df-eb756b6e4f4b","Type":"ContainerStarted","Data":"a619a8a0f069c8da9a84fee5bc6839e9d81bfc2e78dba3f080984b7c2a4c4f98"} Feb 02 22:37:13 crc kubenswrapper[4789]: I0202 22:37:13.057033 4789 generic.go:334] "Generic (PLEG): container finished" podID="df20b5e6-9fb1-4363-b6df-eb756b6e4f4b" containerID="a619a8a0f069c8da9a84fee5bc6839e9d81bfc2e78dba3f080984b7c2a4c4f98" exitCode=0 Feb 02 22:37:13 crc kubenswrapper[4789]: I0202 22:37:13.057084 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ckqm4" event={"ID":"df20b5e6-9fb1-4363-b6df-eb756b6e4f4b","Type":"ContainerDied","Data":"a619a8a0f069c8da9a84fee5bc6839e9d81bfc2e78dba3f080984b7c2a4c4f98"} Feb 02 22:37:14 crc kubenswrapper[4789]: I0202 22:37:14.067271 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ckqm4" event={"ID":"df20b5e6-9fb1-4363-b6df-eb756b6e4f4b","Type":"ContainerStarted","Data":"c245bc833d9bab6c7bcae21b171411435e808fb3557fec10fe792f580e19eb84"} Feb 02 22:37:19 crc kubenswrapper[4789]: I0202 22:37:19.710072 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ckqm4" 
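The startup-probe records just below show the registry-server container's gRPC endpoint on :50051 refusing connections at 22:37:20 and only reporting status="started" at 22:37:29. The failure output ("timeout: failed to connect service \":50051\" within 1s") is the message format of a gRPC health-check utility; the sketch below only mirrors the observable behaviour under the assumption that the check reduces to opening a TCP connection within a 1s budget, and is not the probe binary these pods actually run.

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// Hypothetical stand-in for the startup probe seen in this log: succeed if a
// TCP connection to :50051 opens within 1s, fail otherwise.
func main() {
	addr := ":50051"          // registry-server gRPC port from the log
	budget := 1 * time.Second // the "within 1s" budget in the probe output

	conn, err := net.DialTimeout("tcp", addr, budget)
	if err != nil {
		fmt.Fprintf(os.Stderr, "timeout: failed to connect service %q within %s\n", addr, budget)
		os.Exit(1) // kubelet then records probeResult="failure"
	}
	conn.Close()
	fmt.Println("ok") // by 22:37:29 the kubelet records probe="startup" status="started"
}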
Feb 02 22:37:19 crc kubenswrapper[4789]: I0202 22:37:19.710389 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ckqm4"
Feb 02 22:37:20 crc kubenswrapper[4789]: I0202 22:37:20.755116 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ckqm4" podUID="df20b5e6-9fb1-4363-b6df-eb756b6e4f4b" containerName="registry-server" probeResult="failure" output=<
Feb 02 22:37:20 crc kubenswrapper[4789]: timeout: failed to connect service ":50051" within 1s
Feb 02 22:37:20 crc kubenswrapper[4789]: >
Feb 02 22:37:21 crc kubenswrapper[4789]: I0202 22:37:21.420451 4789 scope.go:117] "RemoveContainer" containerID="e1882e0fff6a0a98d0a3daff7d66a9d268ab6f6c60c38af019ec759180dddf7d"
Feb 02 22:37:21 crc kubenswrapper[4789]: E0202 22:37:21.420860 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e"
Feb 02 22:37:29 crc kubenswrapper[4789]: I0202 22:37:29.778956 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ckqm4"
Feb 02 22:37:29 crc kubenswrapper[4789]: I0202 22:37:29.824671 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ckqm4" podStartSLOduration=18.247954896 podStartE2EDuration="20.824656578s" podCreationTimestamp="2026-02-02 22:37:09 +0000 UTC" firstStartedPulling="2026-02-02 22:37:11.036550361 +0000 UTC m=+4651.331575410" lastFinishedPulling="2026-02-02 22:37:13.613252033 +0000 UTC m=+4653.908277092" observedRunningTime="2026-02-02 22:37:14.090798729 +0000 UTC m=+4654.385823758" watchObservedRunningTime="2026-02-02 22:37:29.824656578 +0000 UTC m=+4670.119681597"
Feb 02 22:37:30 crc kubenswrapper[4789]: I0202 22:37:30.120212 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ckqm4"
Feb 02 22:37:30 crc kubenswrapper[4789]: I0202 22:37:30.182055 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ckqm4"]
Feb 02 22:37:31 crc kubenswrapper[4789]: I0202 22:37:31.214915 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ckqm4" podUID="df20b5e6-9fb1-4363-b6df-eb756b6e4f4b" containerName="registry-server" containerID="cri-o://c245bc833d9bab6c7bcae21b171411435e808fb3557fec10fe792f580e19eb84" gracePeriod=2
Feb 02 22:37:32 crc kubenswrapper[4789]: I0202 22:37:32.065851 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ckqm4" Feb 02 22:37:32 crc kubenswrapper[4789]: I0202 22:37:32.194966 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df20b5e6-9fb1-4363-b6df-eb756b6e4f4b-catalog-content\") pod \"df20b5e6-9fb1-4363-b6df-eb756b6e4f4b\" (UID: \"df20b5e6-9fb1-4363-b6df-eb756b6e4f4b\") " Feb 02 22:37:32 crc kubenswrapper[4789]: I0202 22:37:32.195064 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df20b5e6-9fb1-4363-b6df-eb756b6e4f4b-utilities\") pod \"df20b5e6-9fb1-4363-b6df-eb756b6e4f4b\" (UID: \"df20b5e6-9fb1-4363-b6df-eb756b6e4f4b\") " Feb 02 22:37:32 crc kubenswrapper[4789]: I0202 22:37:32.195103 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmxzp\" (UniqueName: \"kubernetes.io/projected/df20b5e6-9fb1-4363-b6df-eb756b6e4f4b-kube-api-access-qmxzp\") pod \"df20b5e6-9fb1-4363-b6df-eb756b6e4f4b\" (UID: \"df20b5e6-9fb1-4363-b6df-eb756b6e4f4b\") " Feb 02 22:37:32 crc kubenswrapper[4789]: I0202 22:37:32.196815 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df20b5e6-9fb1-4363-b6df-eb756b6e4f4b-utilities" (OuterVolumeSpecName: "utilities") pod "df20b5e6-9fb1-4363-b6df-eb756b6e4f4b" (UID: "df20b5e6-9fb1-4363-b6df-eb756b6e4f4b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 22:37:32 crc kubenswrapper[4789]: I0202 22:37:32.204561 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df20b5e6-9fb1-4363-b6df-eb756b6e4f4b-kube-api-access-qmxzp" (OuterVolumeSpecName: "kube-api-access-qmxzp") pod "df20b5e6-9fb1-4363-b6df-eb756b6e4f4b" (UID: "df20b5e6-9fb1-4363-b6df-eb756b6e4f4b"). InnerVolumeSpecName "kube-api-access-qmxzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 22:37:32 crc kubenswrapper[4789]: I0202 22:37:32.228810 4789 generic.go:334] "Generic (PLEG): container finished" podID="df20b5e6-9fb1-4363-b6df-eb756b6e4f4b" containerID="c245bc833d9bab6c7bcae21b171411435e808fb3557fec10fe792f580e19eb84" exitCode=0 Feb 02 22:37:32 crc kubenswrapper[4789]: I0202 22:37:32.228867 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ckqm4" event={"ID":"df20b5e6-9fb1-4363-b6df-eb756b6e4f4b","Type":"ContainerDied","Data":"c245bc833d9bab6c7bcae21b171411435e808fb3557fec10fe792f580e19eb84"} Feb 02 22:37:32 crc kubenswrapper[4789]: I0202 22:37:32.228920 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ckqm4" event={"ID":"df20b5e6-9fb1-4363-b6df-eb756b6e4f4b","Type":"ContainerDied","Data":"656cd0f339b0c102fb8416822abba78141aa4aea16089e541185728cc0b1944f"} Feb 02 22:37:32 crc kubenswrapper[4789]: I0202 22:37:32.228960 4789 scope.go:117] "RemoveContainer" containerID="c245bc833d9bab6c7bcae21b171411435e808fb3557fec10fe792f580e19eb84" Feb 02 22:37:32 crc kubenswrapper[4789]: I0202 22:37:32.229929 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ckqm4" Feb 02 22:37:32 crc kubenswrapper[4789]: I0202 22:37:32.263196 4789 scope.go:117] "RemoveContainer" containerID="a619a8a0f069c8da9a84fee5bc6839e9d81bfc2e78dba3f080984b7c2a4c4f98" Feb 02 22:37:32 crc kubenswrapper[4789]: I0202 22:37:32.293985 4789 scope.go:117] "RemoveContainer" containerID="7cc3c882a7a04de4760cccb7b72f574852982eed1a7c48a25cc557ecccbf7a22" Feb 02 22:37:32 crc kubenswrapper[4789]: I0202 22:37:32.296773 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df20b5e6-9fb1-4363-b6df-eb756b6e4f4b-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 22:37:32 crc kubenswrapper[4789]: I0202 22:37:32.296989 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmxzp\" (UniqueName: \"kubernetes.io/projected/df20b5e6-9fb1-4363-b6df-eb756b6e4f4b-kube-api-access-qmxzp\") on node \"crc\" DevicePath \"\"" Feb 02 22:37:32 crc kubenswrapper[4789]: I0202 22:37:32.340481 4789 scope.go:117] "RemoveContainer" containerID="c245bc833d9bab6c7bcae21b171411435e808fb3557fec10fe792f580e19eb84" Feb 02 22:37:32 crc kubenswrapper[4789]: E0202 22:37:32.341143 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c245bc833d9bab6c7bcae21b171411435e808fb3557fec10fe792f580e19eb84\": container with ID starting with c245bc833d9bab6c7bcae21b171411435e808fb3557fec10fe792f580e19eb84 not found: ID does not exist" containerID="c245bc833d9bab6c7bcae21b171411435e808fb3557fec10fe792f580e19eb84" Feb 02 22:37:32 crc kubenswrapper[4789]: I0202 22:37:32.341191 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c245bc833d9bab6c7bcae21b171411435e808fb3557fec10fe792f580e19eb84"} err="failed to get container status \"c245bc833d9bab6c7bcae21b171411435e808fb3557fec10fe792f580e19eb84\": rpc error: code = NotFound desc = could not find container \"c245bc833d9bab6c7bcae21b171411435e808fb3557fec10fe792f580e19eb84\": container with ID starting with c245bc833d9bab6c7bcae21b171411435e808fb3557fec10fe792f580e19eb84 not found: ID does not exist" Feb 02 22:37:32 crc kubenswrapper[4789]: I0202 22:37:32.341225 4789 scope.go:117] "RemoveContainer" containerID="a619a8a0f069c8da9a84fee5bc6839e9d81bfc2e78dba3f080984b7c2a4c4f98" Feb 02 22:37:32 crc kubenswrapper[4789]: E0202 22:37:32.342007 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a619a8a0f069c8da9a84fee5bc6839e9d81bfc2e78dba3f080984b7c2a4c4f98\": container with ID starting with a619a8a0f069c8da9a84fee5bc6839e9d81bfc2e78dba3f080984b7c2a4c4f98 not found: ID does not exist" containerID="a619a8a0f069c8da9a84fee5bc6839e9d81bfc2e78dba3f080984b7c2a4c4f98" Feb 02 22:37:32 crc kubenswrapper[4789]: I0202 22:37:32.342223 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a619a8a0f069c8da9a84fee5bc6839e9d81bfc2e78dba3f080984b7c2a4c4f98"} err="failed to get container status \"a619a8a0f069c8da9a84fee5bc6839e9d81bfc2e78dba3f080984b7c2a4c4f98\": rpc error: code = NotFound desc = could not find container \"a619a8a0f069c8da9a84fee5bc6839e9d81bfc2e78dba3f080984b7c2a4c4f98\": container with ID starting with a619a8a0f069c8da9a84fee5bc6839e9d81bfc2e78dba3f080984b7c2a4c4f98 not found: ID does not exist" Feb 02 22:37:32 crc kubenswrapper[4789]: I0202 22:37:32.342398 4789 scope.go:117] "RemoveContainer" 
containerID="7cc3c882a7a04de4760cccb7b72f574852982eed1a7c48a25cc557ecccbf7a22" Feb 02 22:37:32 crc kubenswrapper[4789]: E0202 22:37:32.343038 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cc3c882a7a04de4760cccb7b72f574852982eed1a7c48a25cc557ecccbf7a22\": container with ID starting with 7cc3c882a7a04de4760cccb7b72f574852982eed1a7c48a25cc557ecccbf7a22 not found: ID does not exist" containerID="7cc3c882a7a04de4760cccb7b72f574852982eed1a7c48a25cc557ecccbf7a22" Feb 02 22:37:32 crc kubenswrapper[4789]: I0202 22:37:32.343074 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cc3c882a7a04de4760cccb7b72f574852982eed1a7c48a25cc557ecccbf7a22"} err="failed to get container status \"7cc3c882a7a04de4760cccb7b72f574852982eed1a7c48a25cc557ecccbf7a22\": rpc error: code = NotFound desc = could not find container \"7cc3c882a7a04de4760cccb7b72f574852982eed1a7c48a25cc557ecccbf7a22\": container with ID starting with 7cc3c882a7a04de4760cccb7b72f574852982eed1a7c48a25cc557ecccbf7a22 not found: ID does not exist" Feb 02 22:37:32 crc kubenswrapper[4789]: I0202 22:37:32.393218 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df20b5e6-9fb1-4363-b6df-eb756b6e4f4b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df20b5e6-9fb1-4363-b6df-eb756b6e4f4b" (UID: "df20b5e6-9fb1-4363-b6df-eb756b6e4f4b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 22:37:32 crc kubenswrapper[4789]: I0202 22:37:32.397962 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df20b5e6-9fb1-4363-b6df-eb756b6e4f4b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 22:37:32 crc kubenswrapper[4789]: I0202 22:37:32.562780 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ckqm4"] Feb 02 22:37:32 crc kubenswrapper[4789]: I0202 22:37:32.562864 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ckqm4"] Feb 02 22:37:33 crc kubenswrapper[4789]: I0202 22:37:33.419401 4789 scope.go:117] "RemoveContainer" containerID="e1882e0fff6a0a98d0a3daff7d66a9d268ab6f6c60c38af019ec759180dddf7d" Feb 02 22:37:33 crc kubenswrapper[4789]: E0202 22:37:33.419818 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:37:34 crc kubenswrapper[4789]: I0202 22:37:34.427060 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df20b5e6-9fb1-4363-b6df-eb756b6e4f4b" path="/var/lib/kubelet/pods/df20b5e6-9fb1-4363-b6df-eb756b6e4f4b/volumes" Feb 02 22:37:45 crc kubenswrapper[4789]: I0202 22:37:45.421514 4789 scope.go:117] "RemoveContainer" containerID="e1882e0fff6a0a98d0a3daff7d66a9d268ab6f6c60c38af019ec759180dddf7d" Feb 02 22:37:45 crc kubenswrapper[4789]: E0202 22:37:45.422693 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:37:58 crc kubenswrapper[4789]: I0202 22:37:58.420253 4789 scope.go:117] "RemoveContainer" containerID="e1882e0fff6a0a98d0a3daff7d66a9d268ab6f6c60c38af019ec759180dddf7d" Feb 02 22:37:58 crc kubenswrapper[4789]: E0202 22:37:58.420966 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:38:07 crc kubenswrapper[4789]: I0202 22:38:07.434285 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-kf2j5"] Feb 02 22:38:07 crc kubenswrapper[4789]: E0202 22:38:07.435086 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df20b5e6-9fb1-4363-b6df-eb756b6e4f4b" containerName="registry-server" Feb 02 22:38:07 crc kubenswrapper[4789]: I0202 22:38:07.435100 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="df20b5e6-9fb1-4363-b6df-eb756b6e4f4b" containerName="registry-server" Feb 02 22:38:07 crc kubenswrapper[4789]: E0202 22:38:07.435116 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df20b5e6-9fb1-4363-b6df-eb756b6e4f4b" containerName="extract-utilities" Feb 02 22:38:07 crc kubenswrapper[4789]: I0202 22:38:07.435123 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="df20b5e6-9fb1-4363-b6df-eb756b6e4f4b" containerName="extract-utilities" Feb 02 22:38:07 crc kubenswrapper[4789]: E0202 22:38:07.435134 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df20b5e6-9fb1-4363-b6df-eb756b6e4f4b" containerName="extract-content" Feb 02 22:38:07 crc kubenswrapper[4789]: I0202 22:38:07.435140 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="df20b5e6-9fb1-4363-b6df-eb756b6e4f4b" containerName="extract-content" Feb 02 22:38:07 crc kubenswrapper[4789]: I0202 22:38:07.435305 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="df20b5e6-9fb1-4363-b6df-eb756b6e4f4b" containerName="registry-server" Feb 02 22:38:07 crc kubenswrapper[4789]: I0202 22:38:07.436231 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-kf2j5" Feb 02 22:38:07 crc kubenswrapper[4789]: I0202 22:38:07.442590 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-kf2j5"] Feb 02 22:38:07 crc kubenswrapper[4789]: I0202 22:38:07.442984 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 02 22:38:07 crc kubenswrapper[4789]: I0202 22:38:07.443142 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 02 22:38:07 crc kubenswrapper[4789]: I0202 22:38:07.443288 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 02 22:38:07 crc kubenswrapper[4789]: I0202 22:38:07.443455 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 02 22:38:07 crc kubenswrapper[4789]: I0202 22:38:07.443637 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-6hw9f" Feb 02 22:38:07 crc kubenswrapper[4789]: I0202 22:38:07.579601 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61ba7d08-5203-4399-9929-8e9aa19e5049-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-kf2j5\" (UID: \"61ba7d08-5203-4399-9929-8e9aa19e5049\") " pod="openstack/dnsmasq-dns-5d7b5456f5-kf2j5" Feb 02 22:38:07 crc kubenswrapper[4789]: I0202 22:38:07.579647 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv7bf\" (UniqueName: \"kubernetes.io/projected/61ba7d08-5203-4399-9929-8e9aa19e5049-kube-api-access-pv7bf\") pod \"dnsmasq-dns-5d7b5456f5-kf2j5\" (UID: \"61ba7d08-5203-4399-9929-8e9aa19e5049\") " pod="openstack/dnsmasq-dns-5d7b5456f5-kf2j5" Feb 02 22:38:07 crc kubenswrapper[4789]: I0202 22:38:07.579675 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61ba7d08-5203-4399-9929-8e9aa19e5049-config\") pod \"dnsmasq-dns-5d7b5456f5-kf2j5\" (UID: \"61ba7d08-5203-4399-9929-8e9aa19e5049\") " pod="openstack/dnsmasq-dns-5d7b5456f5-kf2j5" Feb 02 22:38:07 crc kubenswrapper[4789]: I0202 22:38:07.680819 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61ba7d08-5203-4399-9929-8e9aa19e5049-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-kf2j5\" (UID: \"61ba7d08-5203-4399-9929-8e9aa19e5049\") " pod="openstack/dnsmasq-dns-5d7b5456f5-kf2j5" Feb 02 22:38:07 crc kubenswrapper[4789]: I0202 22:38:07.681127 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv7bf\" (UniqueName: \"kubernetes.io/projected/61ba7d08-5203-4399-9929-8e9aa19e5049-kube-api-access-pv7bf\") pod \"dnsmasq-dns-5d7b5456f5-kf2j5\" (UID: \"61ba7d08-5203-4399-9929-8e9aa19e5049\") " pod="openstack/dnsmasq-dns-5d7b5456f5-kf2j5" Feb 02 22:38:07 crc kubenswrapper[4789]: I0202 22:38:07.681149 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61ba7d08-5203-4399-9929-8e9aa19e5049-config\") pod \"dnsmasq-dns-5d7b5456f5-kf2j5\" (UID: \"61ba7d08-5203-4399-9929-8e9aa19e5049\") " pod="openstack/dnsmasq-dns-5d7b5456f5-kf2j5" Feb 02 22:38:07 crc kubenswrapper[4789]: I0202 22:38:07.681462 4789 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-98ddfc8f-74xt8"] Feb 02 22:38:07 crc kubenswrapper[4789]: I0202 22:38:07.681787 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61ba7d08-5203-4399-9929-8e9aa19e5049-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-kf2j5\" (UID: \"61ba7d08-5203-4399-9929-8e9aa19e5049\") " pod="openstack/dnsmasq-dns-5d7b5456f5-kf2j5" Feb 02 22:38:07 crc kubenswrapper[4789]: I0202 22:38:07.682021 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61ba7d08-5203-4399-9929-8e9aa19e5049-config\") pod \"dnsmasq-dns-5d7b5456f5-kf2j5\" (UID: \"61ba7d08-5203-4399-9929-8e9aa19e5049\") " pod="openstack/dnsmasq-dns-5d7b5456f5-kf2j5" Feb 02 22:38:07 crc kubenswrapper[4789]: I0202 22:38:07.682502 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-74xt8" Feb 02 22:38:07 crc kubenswrapper[4789]: I0202 22:38:07.703108 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-74xt8"] Feb 02 22:38:07 crc kubenswrapper[4789]: I0202 22:38:07.709757 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv7bf\" (UniqueName: \"kubernetes.io/projected/61ba7d08-5203-4399-9929-8e9aa19e5049-kube-api-access-pv7bf\") pod \"dnsmasq-dns-5d7b5456f5-kf2j5\" (UID: \"61ba7d08-5203-4399-9929-8e9aa19e5049\") " pod="openstack/dnsmasq-dns-5d7b5456f5-kf2j5" Feb 02 22:38:07 crc kubenswrapper[4789]: I0202 22:38:07.759707 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-kf2j5" Feb 02 22:38:07 crc kubenswrapper[4789]: I0202 22:38:07.782117 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b99aab4-38cb-404e-8156-39b0491442cc-config\") pod \"dnsmasq-dns-98ddfc8f-74xt8\" (UID: \"0b99aab4-38cb-404e-8156-39b0491442cc\") " pod="openstack/dnsmasq-dns-98ddfc8f-74xt8" Feb 02 22:38:07 crc kubenswrapper[4789]: I0202 22:38:07.782207 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b99aab4-38cb-404e-8156-39b0491442cc-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-74xt8\" (UID: \"0b99aab4-38cb-404e-8156-39b0491442cc\") " pod="openstack/dnsmasq-dns-98ddfc8f-74xt8" Feb 02 22:38:07 crc kubenswrapper[4789]: I0202 22:38:07.782250 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf8s9\" (UniqueName: \"kubernetes.io/projected/0b99aab4-38cb-404e-8156-39b0491442cc-kube-api-access-jf8s9\") pod \"dnsmasq-dns-98ddfc8f-74xt8\" (UID: \"0b99aab4-38cb-404e-8156-39b0491442cc\") " pod="openstack/dnsmasq-dns-98ddfc8f-74xt8" Feb 02 22:38:07 crc kubenswrapper[4789]: I0202 22:38:07.883820 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b99aab4-38cb-404e-8156-39b0491442cc-config\") pod \"dnsmasq-dns-98ddfc8f-74xt8\" (UID: \"0b99aab4-38cb-404e-8156-39b0491442cc\") " pod="openstack/dnsmasq-dns-98ddfc8f-74xt8" Feb 02 22:38:07 crc kubenswrapper[4789]: I0202 22:38:07.884123 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b99aab4-38cb-404e-8156-39b0491442cc-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-74xt8\" (UID: 
\"0b99aab4-38cb-404e-8156-39b0491442cc\") " pod="openstack/dnsmasq-dns-98ddfc8f-74xt8" Feb 02 22:38:07 crc kubenswrapper[4789]: I0202 22:38:07.884161 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf8s9\" (UniqueName: \"kubernetes.io/projected/0b99aab4-38cb-404e-8156-39b0491442cc-kube-api-access-jf8s9\") pod \"dnsmasq-dns-98ddfc8f-74xt8\" (UID: \"0b99aab4-38cb-404e-8156-39b0491442cc\") " pod="openstack/dnsmasq-dns-98ddfc8f-74xt8" Feb 02 22:38:07 crc kubenswrapper[4789]: I0202 22:38:07.884922 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b99aab4-38cb-404e-8156-39b0491442cc-config\") pod \"dnsmasq-dns-98ddfc8f-74xt8\" (UID: \"0b99aab4-38cb-404e-8156-39b0491442cc\") " pod="openstack/dnsmasq-dns-98ddfc8f-74xt8" Feb 02 22:38:07 crc kubenswrapper[4789]: I0202 22:38:07.885086 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b99aab4-38cb-404e-8156-39b0491442cc-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-74xt8\" (UID: \"0b99aab4-38cb-404e-8156-39b0491442cc\") " pod="openstack/dnsmasq-dns-98ddfc8f-74xt8" Feb 02 22:38:07 crc kubenswrapper[4789]: I0202 22:38:07.911337 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf8s9\" (UniqueName: \"kubernetes.io/projected/0b99aab4-38cb-404e-8156-39b0491442cc-kube-api-access-jf8s9\") pod \"dnsmasq-dns-98ddfc8f-74xt8\" (UID: \"0b99aab4-38cb-404e-8156-39b0491442cc\") " pod="openstack/dnsmasq-dns-98ddfc8f-74xt8" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.005363 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-74xt8" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.216846 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-kf2j5"] Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.246401 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-74xt8"] Feb 02 22:38:08 crc kubenswrapper[4789]: W0202 22:38:08.251254 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b99aab4_38cb_404e_8156_39b0491442cc.slice/crio-f37ee109c275c9ca2bd7a76fe38de3f2704eafa09c970f56fe7d5e789fd9fcec WatchSource:0}: Error finding container f37ee109c275c9ca2bd7a76fe38de3f2704eafa09c970f56fe7d5e789fd9fcec: Status 404 returned error can't find the container with id f37ee109c275c9ca2bd7a76fe38de3f2704eafa09c970f56fe7d5e789fd9fcec Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.573518 4789 generic.go:334] "Generic (PLEG): container finished" podID="61ba7d08-5203-4399-9929-8e9aa19e5049" containerID="784909410b45f7420c0f1a773eca81e35efdbd34a4cd962bdec87aa5a3e86c8a" exitCode=0 Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.573638 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-kf2j5" event={"ID":"61ba7d08-5203-4399-9929-8e9aa19e5049","Type":"ContainerDied","Data":"784909410b45f7420c0f1a773eca81e35efdbd34a4cd962bdec87aa5a3e86c8a"} Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.574080 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-kf2j5" event={"ID":"61ba7d08-5203-4399-9929-8e9aa19e5049","Type":"ContainerStarted","Data":"c66a0f8051f5128eaa9382c263509f3b3340556b5d73c91e82bd5aaefab45388"} Feb 02 22:38:08 crc 
kubenswrapper[4789]: I0202 22:38:08.578070 4789 generic.go:334] "Generic (PLEG): container finished" podID="0b99aab4-38cb-404e-8156-39b0491442cc" containerID="c1723ac67f75880368c6e8f7880fc6a3d80854529520f88e42d2c5b9ff6b3a4a" exitCode=0 Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.578118 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-74xt8" event={"ID":"0b99aab4-38cb-404e-8156-39b0491442cc","Type":"ContainerDied","Data":"c1723ac67f75880368c6e8f7880fc6a3d80854529520f88e42d2c5b9ff6b3a4a"} Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.578145 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-74xt8" event={"ID":"0b99aab4-38cb-404e-8156-39b0491442cc","Type":"ContainerStarted","Data":"f37ee109c275c9ca2bd7a76fe38de3f2704eafa09c970f56fe7d5e789fd9fcec"} Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.598603 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.600522 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.605774 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.606108 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.606839 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.609388 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-xv9ct" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.614240 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.674351 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.699424 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/16adfa27-ae3d-4915-8156-03be4321a9a2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"16adfa27-ae3d-4915-8156-03be4321a9a2\") " pod="openstack/rabbitmq-server-0" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.699703 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9eb619b9-a0b2-4538-b62e-02c1541dc8bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9eb619b9-a0b2-4538-b62e-02c1541dc8bf\") pod \"rabbitmq-server-0\" (UID: \"16adfa27-ae3d-4915-8156-03be4321a9a2\") " pod="openstack/rabbitmq-server-0" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.699883 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/16adfa27-ae3d-4915-8156-03be4321a9a2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"16adfa27-ae3d-4915-8156-03be4321a9a2\") " pod="openstack/rabbitmq-server-0" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.699978 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/16adfa27-ae3d-4915-8156-03be4321a9a2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"16adfa27-ae3d-4915-8156-03be4321a9a2\") " pod="openstack/rabbitmq-server-0" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.701085 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmr42\" (UniqueName: \"kubernetes.io/projected/16adfa27-ae3d-4915-8156-03be4321a9a2-kube-api-access-mmr42\") pod \"rabbitmq-server-0\" (UID: \"16adfa27-ae3d-4915-8156-03be4321a9a2\") " pod="openstack/rabbitmq-server-0" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.702219 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/16adfa27-ae3d-4915-8156-03be4321a9a2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"16adfa27-ae3d-4915-8156-03be4321a9a2\") " pod="openstack/rabbitmq-server-0" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.703211 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/16adfa27-ae3d-4915-8156-03be4321a9a2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"16adfa27-ae3d-4915-8156-03be4321a9a2\") " pod="openstack/rabbitmq-server-0" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.705313 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/16adfa27-ae3d-4915-8156-03be4321a9a2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"16adfa27-ae3d-4915-8156-03be4321a9a2\") " pod="openstack/rabbitmq-server-0" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.705475 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/16adfa27-ae3d-4915-8156-03be4321a9a2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"16adfa27-ae3d-4915-8156-03be4321a9a2\") " pod="openstack/rabbitmq-server-0" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.807062 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/16adfa27-ae3d-4915-8156-03be4321a9a2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"16adfa27-ae3d-4915-8156-03be4321a9a2\") " pod="openstack/rabbitmq-server-0" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.807126 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/16adfa27-ae3d-4915-8156-03be4321a9a2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"16adfa27-ae3d-4915-8156-03be4321a9a2\") " pod="openstack/rabbitmq-server-0" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.807155 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9eb619b9-a0b2-4538-b62e-02c1541dc8bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9eb619b9-a0b2-4538-b62e-02c1541dc8bf\") pod \"rabbitmq-server-0\" (UID: \"16adfa27-ae3d-4915-8156-03be4321a9a2\") " pod="openstack/rabbitmq-server-0" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.807185 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/16adfa27-ae3d-4915-8156-03be4321a9a2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"16adfa27-ae3d-4915-8156-03be4321a9a2\") " pod="openstack/rabbitmq-server-0" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.807211 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/16adfa27-ae3d-4915-8156-03be4321a9a2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"16adfa27-ae3d-4915-8156-03be4321a9a2\") " pod="openstack/rabbitmq-server-0" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.807234 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmr42\" (UniqueName: \"kubernetes.io/projected/16adfa27-ae3d-4915-8156-03be4321a9a2-kube-api-access-mmr42\") pod \"rabbitmq-server-0\" (UID: \"16adfa27-ae3d-4915-8156-03be4321a9a2\") " pod="openstack/rabbitmq-server-0" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.807262 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/16adfa27-ae3d-4915-8156-03be4321a9a2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"16adfa27-ae3d-4915-8156-03be4321a9a2\") " pod="openstack/rabbitmq-server-0" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.807284 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/16adfa27-ae3d-4915-8156-03be4321a9a2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"16adfa27-ae3d-4915-8156-03be4321a9a2\") " pod="openstack/rabbitmq-server-0" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.807313 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/16adfa27-ae3d-4915-8156-03be4321a9a2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"16adfa27-ae3d-4915-8156-03be4321a9a2\") " pod="openstack/rabbitmq-server-0" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.808931 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/16adfa27-ae3d-4915-8156-03be4321a9a2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"16adfa27-ae3d-4915-8156-03be4321a9a2\") " pod="openstack/rabbitmq-server-0" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.809013 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/16adfa27-ae3d-4915-8156-03be4321a9a2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"16adfa27-ae3d-4915-8156-03be4321a9a2\") " pod="openstack/rabbitmq-server-0" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.809049 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/16adfa27-ae3d-4915-8156-03be4321a9a2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"16adfa27-ae3d-4915-8156-03be4321a9a2\") " pod="openstack/rabbitmq-server-0" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.809108 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/16adfa27-ae3d-4915-8156-03be4321a9a2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"16adfa27-ae3d-4915-8156-03be4321a9a2\") " pod="openstack/rabbitmq-server-0" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.811701 
Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.811785 4789 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.811964 4789 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9eb619b9-a0b2-4538-b62e-02c1541dc8bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9eb619b9-a0b2-4538-b62e-02c1541dc8bf\") pod \"rabbitmq-server-0\" (UID: \"16adfa27-ae3d-4915-8156-03be4321a9a2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/50f3ec5928e7fad23fc11727c84c4bdcaec865a8125c7a7bb074d3bc349e942b/globalmount\"" pod="openstack/rabbitmq-server-0"
Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.815926 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/16adfa27-ae3d-4915-8156-03be4321a9a2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"16adfa27-ae3d-4915-8156-03be4321a9a2\") " pod="openstack/rabbitmq-server-0"
Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.816313 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/16adfa27-ae3d-4915-8156-03be4321a9a2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"16adfa27-ae3d-4915-8156-03be4321a9a2\") " pod="openstack/rabbitmq-server-0"
Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.830481 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmr42\" (UniqueName: \"kubernetes.io/projected/16adfa27-ae3d-4915-8156-03be4321a9a2-kube-api-access-mmr42\") pod \"rabbitmq-server-0\" (UID: \"16adfa27-ae3d-4915-8156-03be4321a9a2\") " pod="openstack/rabbitmq-server-0"
Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.859998 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.861697 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
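The csi_attacher.go record above explains why MountDevice "succeeds" instantly: kubevirt.io.hostpath-provisioner does not advertise the STAGE_UNSTAGE_VOLUME node capability, so there is no NodeStageVolume call, and the .../globalmount device path is merely recorded so the later per-pod SetUp (NodePublishVolume) can do all the work. A rough sketch of that branch; the function names here are illustrative, not the kubelet's actual signatures:

package main

import "fmt"

// nodeStageVolume stands in for the CSI NodeStageVolume RPC.
func nodeStageVolume(path string) error {
	fmt.Println("staging device at", path)
	return nil
}

// mountDevice sketches the decision csi_attacher.go logs above: without
// STAGE_UNSTAGE_VOLUME there is nothing to stage, so MountDevice is a no-op
// and the volume is first mounted during MountVolume.SetUp.
func mountDevice(stageUnstageSet bool, deviceMountPath string) error {
	if !stageUnstageSet {
		return nil // "Skipping MountDevice..." in the log
	}
	return nodeStageVolume(deviceMountPath)
}

func main() {
	_ = mountDevice(false, "/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/.../globalmount")
}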
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.865925 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.866120 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-m6jzt" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.866225 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.866372 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.867057 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.868809 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9eb619b9-a0b2-4538-b62e-02c1541dc8bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9eb619b9-a0b2-4538-b62e-02c1541dc8bf\") pod \"rabbitmq-server-0\" (UID: \"16adfa27-ae3d-4915-8156-03be4321a9a2\") " pod="openstack/rabbitmq-server-0" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.875592 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.908169 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7f79b555-f224-4e21-8650-5deed8215651-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f79b555-f224-4e21-8650-5deed8215651\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.908217 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7f79b555-f224-4e21-8650-5deed8215651-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f79b555-f224-4e21-8650-5deed8215651\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.908235 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7f79b555-f224-4e21-8650-5deed8215651-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f79b555-f224-4e21-8650-5deed8215651\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.908426 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn6fw\" (UniqueName: \"kubernetes.io/projected/7f79b555-f224-4e21-8650-5deed8215651-kube-api-access-fn6fw\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f79b555-f224-4e21-8650-5deed8215651\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.908503 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7f79b555-f224-4e21-8650-5deed8215651-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f79b555-f224-4e21-8650-5deed8215651\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 
22:38:08.908538 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-01b9bde9-4c97-4e47-bd8b-0247431a5880\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-01b9bde9-4c97-4e47-bd8b-0247431a5880\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f79b555-f224-4e21-8650-5deed8215651\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.908553 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7f79b555-f224-4e21-8650-5deed8215651-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f79b555-f224-4e21-8650-5deed8215651\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.908634 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7f79b555-f224-4e21-8650-5deed8215651-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f79b555-f224-4e21-8650-5deed8215651\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.908862 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7f79b555-f224-4e21-8650-5deed8215651-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f79b555-f224-4e21-8650-5deed8215651\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:38:08 crc kubenswrapper[4789]: I0202 22:38:08.953448 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 22:38:09 crc kubenswrapper[4789]: I0202 22:38:09.010570 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7f79b555-f224-4e21-8650-5deed8215651-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f79b555-f224-4e21-8650-5deed8215651\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:38:09 crc kubenswrapper[4789]: I0202 22:38:09.010999 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7f79b555-f224-4e21-8650-5deed8215651-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f79b555-f224-4e21-8650-5deed8215651\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:38:09 crc kubenswrapper[4789]: I0202 22:38:09.011028 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7f79b555-f224-4e21-8650-5deed8215651-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f79b555-f224-4e21-8650-5deed8215651\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:38:09 crc kubenswrapper[4789]: I0202 22:38:09.011065 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn6fw\" (UniqueName: \"kubernetes.io/projected/7f79b555-f224-4e21-8650-5deed8215651-kube-api-access-fn6fw\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f79b555-f224-4e21-8650-5deed8215651\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:38:09 crc kubenswrapper[4789]: I0202 22:38:09.011105 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/7f79b555-f224-4e21-8650-5deed8215651-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f79b555-f224-4e21-8650-5deed8215651\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:38:09 crc kubenswrapper[4789]: I0202 22:38:09.011132 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-01b9bde9-4c97-4e47-bd8b-0247431a5880\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-01b9bde9-4c97-4e47-bd8b-0247431a5880\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f79b555-f224-4e21-8650-5deed8215651\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:38:09 crc kubenswrapper[4789]: I0202 22:38:09.011152 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7f79b555-f224-4e21-8650-5deed8215651-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f79b555-f224-4e21-8650-5deed8215651\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:38:09 crc kubenswrapper[4789]: I0202 22:38:09.011176 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7f79b555-f224-4e21-8650-5deed8215651-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f79b555-f224-4e21-8650-5deed8215651\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:38:09 crc kubenswrapper[4789]: I0202 22:38:09.011231 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7f79b555-f224-4e21-8650-5deed8215651-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f79b555-f224-4e21-8650-5deed8215651\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:38:09 crc kubenswrapper[4789]: I0202 22:38:09.011408 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7f79b555-f224-4e21-8650-5deed8215651-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f79b555-f224-4e21-8650-5deed8215651\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:38:09 crc kubenswrapper[4789]: I0202 22:38:09.012247 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7f79b555-f224-4e21-8650-5deed8215651-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f79b555-f224-4e21-8650-5deed8215651\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:38:09 crc kubenswrapper[4789]: I0202 22:38:09.012667 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7f79b555-f224-4e21-8650-5deed8215651-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f79b555-f224-4e21-8650-5deed8215651\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:38:09 crc kubenswrapper[4789]: I0202 22:38:09.013476 4789 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 02 22:38:09 crc kubenswrapper[4789]: I0202 22:38:09.013510 4789 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-01b9bde9-4c97-4e47-bd8b-0247431a5880\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-01b9bde9-4c97-4e47-bd8b-0247431a5880\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f79b555-f224-4e21-8650-5deed8215651\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3ce274a3d6ff6b0985238182f74ff55317fd054224578c6e2ba90e2dd927c745/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:38:09 crc kubenswrapper[4789]: I0202 22:38:09.013818 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7f79b555-f224-4e21-8650-5deed8215651-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f79b555-f224-4e21-8650-5deed8215651\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:38:09 crc kubenswrapper[4789]: I0202 22:38:09.016476 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7f79b555-f224-4e21-8650-5deed8215651-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f79b555-f224-4e21-8650-5deed8215651\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:38:09 crc kubenswrapper[4789]: I0202 22:38:09.016509 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7f79b555-f224-4e21-8650-5deed8215651-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f79b555-f224-4e21-8650-5deed8215651\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:38:09 crc kubenswrapper[4789]: I0202 22:38:09.021112 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7f79b555-f224-4e21-8650-5deed8215651-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f79b555-f224-4e21-8650-5deed8215651\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:38:09 crc kubenswrapper[4789]: I0202 22:38:09.032069 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn6fw\" (UniqueName: \"kubernetes.io/projected/7f79b555-f224-4e21-8650-5deed8215651-kube-api-access-fn6fw\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f79b555-f224-4e21-8650-5deed8215651\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:38:09 crc kubenswrapper[4789]: I0202 22:38:09.062105 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-01b9bde9-4c97-4e47-bd8b-0247431a5880\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-01b9bde9-4c97-4e47-bd8b-0247431a5880\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f79b555-f224-4e21-8650-5deed8215651\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:38:09 crc kubenswrapper[4789]: I0202 22:38:09.211833 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:38:09 crc kubenswrapper[4789]: I0202 22:38:09.364091 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 22:38:09 crc kubenswrapper[4789]: W0202 22:38:09.383485 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16adfa27_ae3d_4915_8156_03be4321a9a2.slice/crio-e5ab25a5dab04210fe7e09fc4ab5273bbcab8cc5f3e1ab18ef3f7d3109775cea WatchSource:0}: Error finding container e5ab25a5dab04210fe7e09fc4ab5273bbcab8cc5f3e1ab18ef3f7d3109775cea: Status 404 returned error can't find the container with id e5ab25a5dab04210fe7e09fc4ab5273bbcab8cc5f3e1ab18ef3f7d3109775cea Feb 02 22:38:09 crc kubenswrapper[4789]: I0202 22:38:09.589595 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-kf2j5" event={"ID":"61ba7d08-5203-4399-9929-8e9aa19e5049","Type":"ContainerStarted","Data":"8c38925782971382b59b19a0d94ce265c1215918076da5b9cd6df328a36f7eff"} Feb 02 22:38:09 crc kubenswrapper[4789]: I0202 22:38:09.589997 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d7b5456f5-kf2j5" Feb 02 22:38:09 crc kubenswrapper[4789]: I0202 22:38:09.592469 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"16adfa27-ae3d-4915-8156-03be4321a9a2","Type":"ContainerStarted","Data":"e5ab25a5dab04210fe7e09fc4ab5273bbcab8cc5f3e1ab18ef3f7d3109775cea"} Feb 02 22:38:09 crc kubenswrapper[4789]: I0202 22:38:09.595884 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-74xt8" event={"ID":"0b99aab4-38cb-404e-8156-39b0491442cc","Type":"ContainerStarted","Data":"74b699484ed0ab074a613e01dfb47e7f3ec80c5eaadeb3f25b8a6386b1ca1c9b"} Feb 02 22:38:09 crc kubenswrapper[4789]: I0202 22:38:09.596038 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-98ddfc8f-74xt8" Feb 02 22:38:09 crc kubenswrapper[4789]: I0202 22:38:09.608252 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d7b5456f5-kf2j5" podStartSLOduration=2.6082271 podStartE2EDuration="2.6082271s" podCreationTimestamp="2026-02-02 22:38:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 22:38:09.604773463 +0000 UTC m=+4709.899798552" watchObservedRunningTime="2026-02-02 22:38:09.6082271 +0000 UTC m=+4709.903252139" Feb 02 22:38:09 crc kubenswrapper[4789]: I0202 22:38:09.624786 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-98ddfc8f-74xt8" podStartSLOduration=2.624763727 podStartE2EDuration="2.624763727s" podCreationTimestamp="2026-02-02 22:38:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 22:38:09.619249511 +0000 UTC m=+4709.914274580" watchObservedRunningTime="2026-02-02 22:38:09.624763727 +0000 UTC m=+4709.919788756" Feb 02 22:38:09 crc kubenswrapper[4789]: I0202 22:38:09.657314 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 22:38:09 crc kubenswrapper[4789]: W0202 22:38:09.665872 4789 manager.go:1169] Failed to process watch event {EventType:0 
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.000285 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.001564 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.003955 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-8kr4x"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.004029 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.004464 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.005376 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.018647 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.018819 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.029299 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-14a5b2fe-3697-4d30-b1bf-e707734a31cb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-14a5b2fe-3697-4d30-b1bf-e707734a31cb\") pod \"openstack-galera-0\" (UID: \"5b0e9cfc-d618-4cbc-ab7c-f86d711e087f\") " pod="openstack/openstack-galera-0"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.029737 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5b0e9cfc-d618-4cbc-ab7c-f86d711e087f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"5b0e9cfc-d618-4cbc-ab7c-f86d711e087f\") " pod="openstack/openstack-galera-0"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.029925 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5b0e9cfc-d618-4cbc-ab7c-f86d711e087f-config-data-default\") pod \"openstack-galera-0\" (UID: \"5b0e9cfc-d618-4cbc-ab7c-f86d711e087f\") " pod="openstack/openstack-galera-0"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.030210 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b0e9cfc-d618-4cbc-ab7c-f86d711e087f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"5b0e9cfc-d618-4cbc-ab7c-f86d711e087f\") " pod="openstack/openstack-galera-0"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.030392 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l5gf\" (UniqueName: \"kubernetes.io/projected/5b0e9cfc-d618-4cbc-ab7c-f86d711e087f-kube-api-access-8l5gf\") pod \"openstack-galera-0\" (UID: \"5b0e9cfc-d618-4cbc-ab7c-f86d711e087f\") " pod="openstack/openstack-galera-0"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.030640 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5b0e9cfc-d618-4cbc-ab7c-f86d711e087f-kolla-config\") pod \"openstack-galera-0\" (UID: \"5b0e9cfc-d618-4cbc-ab7c-f86d711e087f\") " pod="openstack/openstack-galera-0"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.030824 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b0e9cfc-d618-4cbc-ab7c-f86d711e087f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"5b0e9cfc-d618-4cbc-ab7c-f86d711e087f\") " pod="openstack/openstack-galera-0"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.030983 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b0e9cfc-d618-4cbc-ab7c-f86d711e087f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"5b0e9cfc-d618-4cbc-ab7c-f86d711e087f\") " pod="openstack/openstack-galera-0"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.132027 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l5gf\" (UniqueName: \"kubernetes.io/projected/5b0e9cfc-d618-4cbc-ab7c-f86d711e087f-kube-api-access-8l5gf\") pod \"openstack-galera-0\" (UID: \"5b0e9cfc-d618-4cbc-ab7c-f86d711e087f\") " pod="openstack/openstack-galera-0"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.132349 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5b0e9cfc-d618-4cbc-ab7c-f86d711e087f-kolla-config\") pod \"openstack-galera-0\" (UID: \"5b0e9cfc-d618-4cbc-ab7c-f86d711e087f\") " pod="openstack/openstack-galera-0"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.132389 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b0e9cfc-d618-4cbc-ab7c-f86d711e087f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"5b0e9cfc-d618-4cbc-ab7c-f86d711e087f\") " pod="openstack/openstack-galera-0"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.132416 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b0e9cfc-d618-4cbc-ab7c-f86d711e087f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"5b0e9cfc-d618-4cbc-ab7c-f86d711e087f\") " pod="openstack/openstack-galera-0"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.132470 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-14a5b2fe-3697-4d30-b1bf-e707734a31cb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-14a5b2fe-3697-4d30-b1bf-e707734a31cb\") pod \"openstack-galera-0\" (UID: \"5b0e9cfc-d618-4cbc-ab7c-f86d711e087f\") " pod="openstack/openstack-galera-0"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.132509 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5b0e9cfc-d618-4cbc-ab7c-f86d711e087f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"5b0e9cfc-d618-4cbc-ab7c-f86d711e087f\") " pod="openstack/openstack-galera-0"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.132532 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5b0e9cfc-d618-4cbc-ab7c-f86d711e087f-config-data-default\") pod \"openstack-galera-0\" (UID: \"5b0e9cfc-d618-4cbc-ab7c-f86d711e087f\") " pod="openstack/openstack-galera-0"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.132571 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b0e9cfc-d618-4cbc-ab7c-f86d711e087f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"5b0e9cfc-d618-4cbc-ab7c-f86d711e087f\") " pod="openstack/openstack-galera-0"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.134237 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b0e9cfc-d618-4cbc-ab7c-f86d711e087f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"5b0e9cfc-d618-4cbc-ab7c-f86d711e087f\") " pod="openstack/openstack-galera-0"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.135145 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5b0e9cfc-d618-4cbc-ab7c-f86d711e087f-kolla-config\") pod \"openstack-galera-0\" (UID: \"5b0e9cfc-d618-4cbc-ab7c-f86d711e087f\") " pod="openstack/openstack-galera-0"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.137148 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5b0e9cfc-d618-4cbc-ab7c-f86d711e087f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"5b0e9cfc-d618-4cbc-ab7c-f86d711e087f\") " pod="openstack/openstack-galera-0"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.138715 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5b0e9cfc-d618-4cbc-ab7c-f86d711e087f-config-data-default\") pod \"openstack-galera-0\" (UID: \"5b0e9cfc-d618-4cbc-ab7c-f86d711e087f\") " pod="openstack/openstack-galera-0"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.142856 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b0e9cfc-d618-4cbc-ab7c-f86d711e087f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"5b0e9cfc-d618-4cbc-ab7c-f86d711e087f\") " pod="openstack/openstack-galera-0"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.144522 4789 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.144556 4789 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-14a5b2fe-3697-4d30-b1bf-e707734a31cb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-14a5b2fe-3697-4d30-b1bf-e707734a31cb\") pod \"openstack-galera-0\" (UID: \"5b0e9cfc-d618-4cbc-ab7c-f86d711e087f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/45ff8d45ce3a9d914f0d8ce96a493968fbc5b24c386eb8949c2f18ba8f775b9f/globalmount\"" pod="openstack/openstack-galera-0"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.148169 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b0e9cfc-d618-4cbc-ab7c-f86d711e087f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"5b0e9cfc-d618-4cbc-ab7c-f86d711e087f\") " pod="openstack/openstack-galera-0"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.149964 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l5gf\" (UniqueName: \"kubernetes.io/projected/5b0e9cfc-d618-4cbc-ab7c-f86d711e087f-kube-api-access-8l5gf\") pod \"openstack-galera-0\" (UID: \"5b0e9cfc-d618-4cbc-ab7c-f86d711e087f\") " pod="openstack/openstack-galera-0"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.282527 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-14a5b2fe-3697-4d30-b1bf-e707734a31cb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-14a5b2fe-3697-4d30-b1bf-e707734a31cb\") pod \"openstack-galera-0\" (UID: \"5b0e9cfc-d618-4cbc-ab7c-f86d711e087f\") " pod="openstack/openstack-galera-0"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.359613 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.388078 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.388964 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.391357 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.391704 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-nbvnb"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.404815 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.441979 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/69ba12ec-7b8a-4ef5-8e4a-7ff0adda7040-kolla-config\") pod \"memcached-0\" (UID: \"69ba12ec-7b8a-4ef5-8e4a-7ff0adda7040\") " pod="openstack/memcached-0"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.442042 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fttsp\" (UniqueName: \"kubernetes.io/projected/69ba12ec-7b8a-4ef5-8e4a-7ff0adda7040-kube-api-access-fttsp\") pod \"memcached-0\" (UID: \"69ba12ec-7b8a-4ef5-8e4a-7ff0adda7040\") " pod="openstack/memcached-0"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.442156 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/69ba12ec-7b8a-4ef5-8e4a-7ff0adda7040-config-data\") pod \"memcached-0\" (UID: \"69ba12ec-7b8a-4ef5-8e4a-7ff0adda7040\") " pod="openstack/memcached-0"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.551892 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/69ba12ec-7b8a-4ef5-8e4a-7ff0adda7040-config-data\") pod \"memcached-0\" (UID: \"69ba12ec-7b8a-4ef5-8e4a-7ff0adda7040\") " pod="openstack/memcached-0"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.552213 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/69ba12ec-7b8a-4ef5-8e4a-7ff0adda7040-kolla-config\") pod \"memcached-0\" (UID: \"69ba12ec-7b8a-4ef5-8e4a-7ff0adda7040\") " pod="openstack/memcached-0"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.552258 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fttsp\" (UniqueName: \"kubernetes.io/projected/69ba12ec-7b8a-4ef5-8e4a-7ff0adda7040-kube-api-access-fttsp\") pod \"memcached-0\" (UID: \"69ba12ec-7b8a-4ef5-8e4a-7ff0adda7040\") " pod="openstack/memcached-0"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.553411 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/69ba12ec-7b8a-4ef5-8e4a-7ff0adda7040-config-data\") pod \"memcached-0\" (UID: \"69ba12ec-7b8a-4ef5-8e4a-7ff0adda7040\") " pod="openstack/memcached-0"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.553425 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/69ba12ec-7b8a-4ef5-8e4a-7ff0adda7040-kolla-config\") pod \"memcached-0\" (UID: \"69ba12ec-7b8a-4ef5-8e4a-7ff0adda7040\") " pod="openstack/memcached-0"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.583459 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fttsp\" (UniqueName: \"kubernetes.io/projected/69ba12ec-7b8a-4ef5-8e4a-7ff0adda7040-kube-api-access-fttsp\") pod \"memcached-0\" (UID: \"69ba12ec-7b8a-4ef5-8e4a-7ff0adda7040\") " pod="openstack/memcached-0"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.617763 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7f79b555-f224-4e21-8650-5deed8215651","Type":"ContainerStarted","Data":"f8fb85908d3c92d3f07c97a3161c95bbc61817efb93bb50c9298ce80a5ce5303"}
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.622199 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"16adfa27-ae3d-4915-8156-03be4321a9a2","Type":"ContainerStarted","Data":"841bf57b7d79c0c452ae515753e4403bf64c0bc9e269612b1b258106049976f8"}
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.642197 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.863492 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Feb 02 22:38:10 crc kubenswrapper[4789]: I0202 22:38:10.936414 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 02 22:38:10 crc kubenswrapper[4789]: W0202 22:38:10.946948 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b0e9cfc_d618_4cbc_ab7c_f86d711e087f.slice/crio-bd27994ab65d67fd714c9663c4940bfe8a0a716b390d121a9d87ab9cf3ba1ee2 WatchSource:0}: Error finding container bd27994ab65d67fd714c9663c4940bfe8a0a716b390d121a9d87ab9cf3ba1ee2: Status 404 returned error can't find the container with id bd27994ab65d67fd714c9663c4940bfe8a0a716b390d121a9d87ab9cf3ba1ee2
Feb 02 22:38:11 crc kubenswrapper[4789]: I0202 22:38:11.618089 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 02 22:38:11 crc kubenswrapper[4789]: I0202 22:38:11.620325 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 02 22:38:11 crc kubenswrapper[4789]: I0202 22:38:11.623136 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-sd4xg" Feb 02 22:38:11 crc kubenswrapper[4789]: I0202 22:38:11.625066 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 02 22:38:11 crc kubenswrapper[4789]: I0202 22:38:11.627473 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 02 22:38:11 crc kubenswrapper[4789]: I0202 22:38:11.635396 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 02 22:38:11 crc kubenswrapper[4789]: I0202 22:38:11.636184 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 22:38:11 crc kubenswrapper[4789]: I0202 22:38:11.637350 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"5b0e9cfc-d618-4cbc-ab7c-f86d711e087f","Type":"ContainerStarted","Data":"ab1db822ba6f851189166de1939432aaff6b2a9b3eed322e26fd76a88dbe9760"} Feb 02 22:38:11 crc kubenswrapper[4789]: I0202 22:38:11.637461 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"5b0e9cfc-d618-4cbc-ab7c-f86d711e087f","Type":"ContainerStarted","Data":"bd27994ab65d67fd714c9663c4940bfe8a0a716b390d121a9d87ab9cf3ba1ee2"} Feb 02 22:38:11 crc kubenswrapper[4789]: I0202 22:38:11.640000 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"69ba12ec-7b8a-4ef5-8e4a-7ff0adda7040","Type":"ContainerStarted","Data":"d9cfa4792532540cb47071acb0517ab70c5e9ef15cec4e22b144201491a5e9a3"} Feb 02 22:38:11 crc kubenswrapper[4789]: I0202 22:38:11.640202 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 02 22:38:11 crc kubenswrapper[4789]: I0202 22:38:11.640228 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"69ba12ec-7b8a-4ef5-8e4a-7ff0adda7040","Type":"ContainerStarted","Data":"d9bdbd9c7fc860edab56dd13e1155b32a39c102d8428fd48ad41b00895ee6046"} Feb 02 22:38:11 crc kubenswrapper[4789]: I0202 22:38:11.643046 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7f79b555-f224-4e21-8650-5deed8215651","Type":"ContainerStarted","Data":"be581ca8cc76aec1f9a6c630b95d069125f8b8746ab5caee53a8829ac239cd5c"} Feb 02 22:38:11 crc kubenswrapper[4789]: I0202 22:38:11.706898 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=1.706880483 podStartE2EDuration="1.706880483s" podCreationTimestamp="2026-02-02 22:38:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 22:38:11.697008445 +0000 UTC m=+4711.992033464" watchObservedRunningTime="2026-02-02 22:38:11.706880483 +0000 UTC m=+4712.001905502" Feb 02 22:38:11 crc kubenswrapper[4789]: I0202 22:38:11.766684 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6vt4\" (UniqueName: \"kubernetes.io/projected/1617a40a-8c8d-4940-b8d0-bc501567c07d-kube-api-access-g6vt4\") pod \"openstack-cell1-galera-0\" (UID: \"1617a40a-8c8d-4940-b8d0-bc501567c07d\") " 
pod="openstack/openstack-cell1-galera-0" Feb 02 22:38:11 crc kubenswrapper[4789]: I0202 22:38:11.766730 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1617a40a-8c8d-4940-b8d0-bc501567c07d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1617a40a-8c8d-4940-b8d0-bc501567c07d\") " pod="openstack/openstack-cell1-galera-0" Feb 02 22:38:11 crc kubenswrapper[4789]: I0202 22:38:11.766766 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1617a40a-8c8d-4940-b8d0-bc501567c07d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1617a40a-8c8d-4940-b8d0-bc501567c07d\") " pod="openstack/openstack-cell1-galera-0" Feb 02 22:38:11 crc kubenswrapper[4789]: I0202 22:38:11.767002 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1617a40a-8c8d-4940-b8d0-bc501567c07d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1617a40a-8c8d-4940-b8d0-bc501567c07d\") " pod="openstack/openstack-cell1-galera-0" Feb 02 22:38:11 crc kubenswrapper[4789]: I0202 22:38:11.767290 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1617a40a-8c8d-4940-b8d0-bc501567c07d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1617a40a-8c8d-4940-b8d0-bc501567c07d\") " pod="openstack/openstack-cell1-galera-0" Feb 02 22:38:11 crc kubenswrapper[4789]: I0202 22:38:11.767394 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1617a40a-8c8d-4940-b8d0-bc501567c07d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1617a40a-8c8d-4940-b8d0-bc501567c07d\") " pod="openstack/openstack-cell1-galera-0" Feb 02 22:38:11 crc kubenswrapper[4789]: I0202 22:38:11.767695 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1f5e340b-8f86-4e85-b4d6-567310f3a7e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1f5e340b-8f86-4e85-b4d6-567310f3a7e8\") pod \"openstack-cell1-galera-0\" (UID: \"1617a40a-8c8d-4940-b8d0-bc501567c07d\") " pod="openstack/openstack-cell1-galera-0" Feb 02 22:38:11 crc kubenswrapper[4789]: I0202 22:38:11.767776 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1617a40a-8c8d-4940-b8d0-bc501567c07d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1617a40a-8c8d-4940-b8d0-bc501567c07d\") " pod="openstack/openstack-cell1-galera-0" Feb 02 22:38:11 crc kubenswrapper[4789]: I0202 22:38:11.869693 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1617a40a-8c8d-4940-b8d0-bc501567c07d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1617a40a-8c8d-4940-b8d0-bc501567c07d\") " pod="openstack/openstack-cell1-galera-0" Feb 02 22:38:11 crc kubenswrapper[4789]: I0202 22:38:11.869813 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1617a40a-8c8d-4940-b8d0-bc501567c07d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1617a40a-8c8d-4940-b8d0-bc501567c07d\") " pod="openstack/openstack-cell1-galera-0" Feb 02 22:38:11 crc kubenswrapper[4789]: I0202 22:38:11.869848 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1617a40a-8c8d-4940-b8d0-bc501567c07d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1617a40a-8c8d-4940-b8d0-bc501567c07d\") " pod="openstack/openstack-cell1-galera-0" Feb 02 22:38:11 crc kubenswrapper[4789]: I0202 22:38:11.869929 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1f5e340b-8f86-4e85-b4d6-567310f3a7e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1f5e340b-8f86-4e85-b4d6-567310f3a7e8\") pod \"openstack-cell1-galera-0\" (UID: \"1617a40a-8c8d-4940-b8d0-bc501567c07d\") " pod="openstack/openstack-cell1-galera-0" Feb 02 22:38:11 crc kubenswrapper[4789]: I0202 22:38:11.869977 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1617a40a-8c8d-4940-b8d0-bc501567c07d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1617a40a-8c8d-4940-b8d0-bc501567c07d\") " pod="openstack/openstack-cell1-galera-0" Feb 02 22:38:11 crc kubenswrapper[4789]: I0202 22:38:11.870036 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6vt4\" (UniqueName: \"kubernetes.io/projected/1617a40a-8c8d-4940-b8d0-bc501567c07d-kube-api-access-g6vt4\") pod \"openstack-cell1-galera-0\" (UID: \"1617a40a-8c8d-4940-b8d0-bc501567c07d\") " pod="openstack/openstack-cell1-galera-0" Feb 02 22:38:11 crc kubenswrapper[4789]: I0202 22:38:11.870067 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1617a40a-8c8d-4940-b8d0-bc501567c07d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1617a40a-8c8d-4940-b8d0-bc501567c07d\") " pod="openstack/openstack-cell1-galera-0" Feb 02 22:38:11 crc kubenswrapper[4789]: I0202 22:38:11.870109 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1617a40a-8c8d-4940-b8d0-bc501567c07d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1617a40a-8c8d-4940-b8d0-bc501567c07d\") " pod="openstack/openstack-cell1-galera-0" Feb 02 22:38:11 crc kubenswrapper[4789]: I0202 22:38:11.870200 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1617a40a-8c8d-4940-b8d0-bc501567c07d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1617a40a-8c8d-4940-b8d0-bc501567c07d\") " pod="openstack/openstack-cell1-galera-0" Feb 02 22:38:11 crc kubenswrapper[4789]: I0202 22:38:11.871375 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1617a40a-8c8d-4940-b8d0-bc501567c07d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1617a40a-8c8d-4940-b8d0-bc501567c07d\") " pod="openstack/openstack-cell1-galera-0" Feb 02 22:38:11 crc kubenswrapper[4789]: I0202 22:38:11.871773 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/1617a40a-8c8d-4940-b8d0-bc501567c07d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1617a40a-8c8d-4940-b8d0-bc501567c07d\") " pod="openstack/openstack-cell1-galera-0" Feb 02 22:38:11 crc kubenswrapper[4789]: I0202 22:38:11.872096 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1617a40a-8c8d-4940-b8d0-bc501567c07d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1617a40a-8c8d-4940-b8d0-bc501567c07d\") " pod="openstack/openstack-cell1-galera-0" Feb 02 22:38:11 crc kubenswrapper[4789]: I0202 22:38:11.874416 4789 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 02 22:38:11 crc kubenswrapper[4789]: I0202 22:38:11.874468 4789 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1f5e340b-8f86-4e85-b4d6-567310f3a7e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1f5e340b-8f86-4e85-b4d6-567310f3a7e8\") pod \"openstack-cell1-galera-0\" (UID: \"1617a40a-8c8d-4940-b8d0-bc501567c07d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/cdea6528709010ba34dfec2f30509e6d6ed9d5890d59e62ae12a52e6de045dd2/globalmount\"" pod="openstack/openstack-cell1-galera-0" Feb 02 22:38:11 crc kubenswrapper[4789]: I0202 22:38:11.875950 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1617a40a-8c8d-4940-b8d0-bc501567c07d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1617a40a-8c8d-4940-b8d0-bc501567c07d\") " pod="openstack/openstack-cell1-galera-0" Feb 02 22:38:11 crc kubenswrapper[4789]: I0202 22:38:11.883279 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1617a40a-8c8d-4940-b8d0-bc501567c07d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1617a40a-8c8d-4940-b8d0-bc501567c07d\") " pod="openstack/openstack-cell1-galera-0" Feb 02 22:38:11 crc kubenswrapper[4789]: I0202 22:38:11.907269 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6vt4\" (UniqueName: \"kubernetes.io/projected/1617a40a-8c8d-4940-b8d0-bc501567c07d-kube-api-access-g6vt4\") pod \"openstack-cell1-galera-0\" (UID: \"1617a40a-8c8d-4940-b8d0-bc501567c07d\") " pod="openstack/openstack-cell1-galera-0" Feb 02 22:38:11 crc kubenswrapper[4789]: I0202 22:38:11.908441 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1f5e340b-8f86-4e85-b4d6-567310f3a7e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1f5e340b-8f86-4e85-b4d6-567310f3a7e8\") pod \"openstack-cell1-galera-0\" (UID: \"1617a40a-8c8d-4940-b8d0-bc501567c07d\") " pod="openstack/openstack-cell1-galera-0" Feb 02 22:38:11 crc kubenswrapper[4789]: I0202 22:38:11.948997 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 02 22:38:12 crc kubenswrapper[4789]: I0202 22:38:12.203131 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 22:38:12 crc kubenswrapper[4789]: I0202 22:38:12.652320 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1617a40a-8c8d-4940-b8d0-bc501567c07d","Type":"ContainerStarted","Data":"457b526776504a627f85d6090f0f5632e4f4c268eebd51c155fa1b164d7fb908"} Feb 02 22:38:12 crc kubenswrapper[4789]: I0202 22:38:12.652673 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1617a40a-8c8d-4940-b8d0-bc501567c07d","Type":"ContainerStarted","Data":"a623c46ff570bf0b3e1cce18307ad068e9c96e7dd0b6d8a1878bc939f69959d4"} Feb 02 22:38:13 crc kubenswrapper[4789]: I0202 22:38:13.008331 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-98ddfc8f-74xt8" Feb 02 22:38:13 crc kubenswrapper[4789]: I0202 22:38:13.131081 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-kf2j5"] Feb 02 22:38:13 crc kubenswrapper[4789]: I0202 22:38:13.131434 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d7b5456f5-kf2j5" podUID="61ba7d08-5203-4399-9929-8e9aa19e5049" containerName="dnsmasq-dns" containerID="cri-o://8c38925782971382b59b19a0d94ce265c1215918076da5b9cd6df328a36f7eff" gracePeriod=10 Feb 02 22:38:13 crc kubenswrapper[4789]: I0202 22:38:13.140865 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d7b5456f5-kf2j5" Feb 02 22:38:13 crc kubenswrapper[4789]: I0202 22:38:13.419549 4789 scope.go:117] "RemoveContainer" containerID="e1882e0fff6a0a98d0a3daff7d66a9d268ab6f6c60c38af019ec759180dddf7d" Feb 02 22:38:13 crc kubenswrapper[4789]: E0202 22:38:13.420190 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:38:13 crc kubenswrapper[4789]: I0202 22:38:13.621379 4789 util.go:48] "No ready sandbox for pod can be found. 
Feb 02 22:38:13 crc kubenswrapper[4789]: I0202 22:38:13.661816 4789 generic.go:334] "Generic (PLEG): container finished" podID="61ba7d08-5203-4399-9929-8e9aa19e5049" containerID="8c38925782971382b59b19a0d94ce265c1215918076da5b9cd6df328a36f7eff" exitCode=0
Feb 02 22:38:13 crc kubenswrapper[4789]: I0202 22:38:13.661937 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-kf2j5" event={"ID":"61ba7d08-5203-4399-9929-8e9aa19e5049","Type":"ContainerDied","Data":"8c38925782971382b59b19a0d94ce265c1215918076da5b9cd6df328a36f7eff"}
Feb 02 22:38:13 crc kubenswrapper[4789]: I0202 22:38:13.662024 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-kf2j5" event={"ID":"61ba7d08-5203-4399-9929-8e9aa19e5049","Type":"ContainerDied","Data":"c66a0f8051f5128eaa9382c263509f3b3340556b5d73c91e82bd5aaefab45388"}
Feb 02 22:38:13 crc kubenswrapper[4789]: I0202 22:38:13.662063 4789 scope.go:117] "RemoveContainer" containerID="8c38925782971382b59b19a0d94ce265c1215918076da5b9cd6df328a36f7eff"
Feb 02 22:38:13 crc kubenswrapper[4789]: I0202 22:38:13.663017 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-kf2j5"
Feb 02 22:38:13 crc kubenswrapper[4789]: I0202 22:38:13.687325 4789 scope.go:117] "RemoveContainer" containerID="784909410b45f7420c0f1a773eca81e35efdbd34a4cd962bdec87aa5a3e86c8a"
Feb 02 22:38:13 crc kubenswrapper[4789]: I0202 22:38:13.703346 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61ba7d08-5203-4399-9929-8e9aa19e5049-dns-svc\") pod \"61ba7d08-5203-4399-9929-8e9aa19e5049\" (UID: \"61ba7d08-5203-4399-9929-8e9aa19e5049\") "
Feb 02 22:38:13 crc kubenswrapper[4789]: I0202 22:38:13.703988 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pv7bf\" (UniqueName: \"kubernetes.io/projected/61ba7d08-5203-4399-9929-8e9aa19e5049-kube-api-access-pv7bf\") pod \"61ba7d08-5203-4399-9929-8e9aa19e5049\" (UID: \"61ba7d08-5203-4399-9929-8e9aa19e5049\") "
Feb 02 22:38:13 crc kubenswrapper[4789]: I0202 22:38:13.704077 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61ba7d08-5203-4399-9929-8e9aa19e5049-config\") pod \"61ba7d08-5203-4399-9929-8e9aa19e5049\" (UID: \"61ba7d08-5203-4399-9929-8e9aa19e5049\") "
Feb 02 22:38:13 crc kubenswrapper[4789]: I0202 22:38:13.710864 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61ba7d08-5203-4399-9929-8e9aa19e5049-kube-api-access-pv7bf" (OuterVolumeSpecName: "kube-api-access-pv7bf") pod "61ba7d08-5203-4399-9929-8e9aa19e5049" (UID: "61ba7d08-5203-4399-9929-8e9aa19e5049"). InnerVolumeSpecName "kube-api-access-pv7bf". PluginName "kubernetes.io/projected", VolumeGidValue ""
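The records that follow show why container cleanup here is idempotent: scope.go logs RemoveContainer, the runtime deletes the container, and the follow-up ContainerStatus probe then fails with gRPC NotFound, which the kubelet records as "DeleteContainer returned error" and tolerates. A sketch of that tolerance pattern using grpc-go's status package (illustrative, not the kubelet's exact code path):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func main() {
	// Stand-in for the error a CRI ContainerStatus call returns
	// after the container has already been removed.
	err := status.Error(codes.NotFound, `could not find container "8c38925782..."`)
	if status.Code(err) == codes.NotFound {
		fmt.Println("container already removed; nothing to do")
	}
}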
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 22:38:13 crc kubenswrapper[4789]: I0202 22:38:13.743280 4789 scope.go:117] "RemoveContainer" containerID="8c38925782971382b59b19a0d94ce265c1215918076da5b9cd6df328a36f7eff" Feb 02 22:38:13 crc kubenswrapper[4789]: E0202 22:38:13.743696 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c38925782971382b59b19a0d94ce265c1215918076da5b9cd6df328a36f7eff\": container with ID starting with 8c38925782971382b59b19a0d94ce265c1215918076da5b9cd6df328a36f7eff not found: ID does not exist" containerID="8c38925782971382b59b19a0d94ce265c1215918076da5b9cd6df328a36f7eff" Feb 02 22:38:13 crc kubenswrapper[4789]: I0202 22:38:13.743727 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c38925782971382b59b19a0d94ce265c1215918076da5b9cd6df328a36f7eff"} err="failed to get container status \"8c38925782971382b59b19a0d94ce265c1215918076da5b9cd6df328a36f7eff\": rpc error: code = NotFound desc = could not find container \"8c38925782971382b59b19a0d94ce265c1215918076da5b9cd6df328a36f7eff\": container with ID starting with 8c38925782971382b59b19a0d94ce265c1215918076da5b9cd6df328a36f7eff not found: ID does not exist" Feb 02 22:38:13 crc kubenswrapper[4789]: I0202 22:38:13.743745 4789 scope.go:117] "RemoveContainer" containerID="784909410b45f7420c0f1a773eca81e35efdbd34a4cd962bdec87aa5a3e86c8a" Feb 02 22:38:13 crc kubenswrapper[4789]: E0202 22:38:13.744000 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"784909410b45f7420c0f1a773eca81e35efdbd34a4cd962bdec87aa5a3e86c8a\": container with ID starting with 784909410b45f7420c0f1a773eca81e35efdbd34a4cd962bdec87aa5a3e86c8a not found: ID does not exist" containerID="784909410b45f7420c0f1a773eca81e35efdbd34a4cd962bdec87aa5a3e86c8a" Feb 02 22:38:13 crc kubenswrapper[4789]: I0202 22:38:13.744029 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"784909410b45f7420c0f1a773eca81e35efdbd34a4cd962bdec87aa5a3e86c8a"} err="failed to get container status \"784909410b45f7420c0f1a773eca81e35efdbd34a4cd962bdec87aa5a3e86c8a\": rpc error: code = NotFound desc = could not find container \"784909410b45f7420c0f1a773eca81e35efdbd34a4cd962bdec87aa5a3e86c8a\": container with ID starting with 784909410b45f7420c0f1a773eca81e35efdbd34a4cd962bdec87aa5a3e86c8a not found: ID does not exist" Feb 02 22:38:13 crc kubenswrapper[4789]: I0202 22:38:13.750198 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61ba7d08-5203-4399-9929-8e9aa19e5049-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "61ba7d08-5203-4399-9929-8e9aa19e5049" (UID: "61ba7d08-5203-4399-9929-8e9aa19e5049"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 22:38:13 crc kubenswrapper[4789]: I0202 22:38:13.767049 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61ba7d08-5203-4399-9929-8e9aa19e5049-config" (OuterVolumeSpecName: "config") pod "61ba7d08-5203-4399-9929-8e9aa19e5049" (UID: "61ba7d08-5203-4399-9929-8e9aa19e5049"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 22:38:13 crc kubenswrapper[4789]: I0202 22:38:13.805390 4789 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61ba7d08-5203-4399-9929-8e9aa19e5049-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 22:38:13 crc kubenswrapper[4789]: I0202 22:38:13.805639 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pv7bf\" (UniqueName: \"kubernetes.io/projected/61ba7d08-5203-4399-9929-8e9aa19e5049-kube-api-access-pv7bf\") on node \"crc\" DevicePath \"\"" Feb 02 22:38:13 crc kubenswrapper[4789]: I0202 22:38:13.805709 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61ba7d08-5203-4399-9929-8e9aa19e5049-config\") on node \"crc\" DevicePath \"\"" Feb 02 22:38:14 crc kubenswrapper[4789]: I0202 22:38:13.999831 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-kf2j5"] Feb 02 22:38:14 crc kubenswrapper[4789]: I0202 22:38:14.005859 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-kf2j5"] Feb 02 22:38:14 crc kubenswrapper[4789]: I0202 22:38:14.437291 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61ba7d08-5203-4399-9929-8e9aa19e5049" path="/var/lib/kubelet/pods/61ba7d08-5203-4399-9929-8e9aa19e5049/volumes" Feb 02 22:38:14 crc kubenswrapper[4789]: I0202 22:38:14.676697 4789 generic.go:334] "Generic (PLEG): container finished" podID="5b0e9cfc-d618-4cbc-ab7c-f86d711e087f" containerID="ab1db822ba6f851189166de1939432aaff6b2a9b3eed322e26fd76a88dbe9760" exitCode=0 Feb 02 22:38:14 crc kubenswrapper[4789]: I0202 22:38:14.676743 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"5b0e9cfc-d618-4cbc-ab7c-f86d711e087f","Type":"ContainerDied","Data":"ab1db822ba6f851189166de1939432aaff6b2a9b3eed322e26fd76a88dbe9760"} Feb 02 22:38:15 crc kubenswrapper[4789]: I0202 22:38:15.690386 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"5b0e9cfc-d618-4cbc-ab7c-f86d711e087f","Type":"ContainerStarted","Data":"db4019f84944fd081851ff339d96842c4d81fc0206965459b1f6f63f34ab1eb7"} Feb 02 22:38:15 crc kubenswrapper[4789]: I0202 22:38:15.737017 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=7.7369772999999995 podStartE2EDuration="7.7369773s" podCreationTimestamp="2026-02-02 22:38:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 22:38:15.723236812 +0000 UTC m=+4716.018261871" watchObservedRunningTime="2026-02-02 22:38:15.7369773 +0000 UTC m=+4716.032002349" Feb 02 22:38:16 crc kubenswrapper[4789]: I0202 22:38:16.702656 4789 generic.go:334] "Generic (PLEG): container finished" podID="1617a40a-8c8d-4940-b8d0-bc501567c07d" containerID="457b526776504a627f85d6090f0f5632e4f4c268eebd51c155fa1b164d7fb908" exitCode=0 Feb 02 22:38:16 crc kubenswrapper[4789]: I0202 22:38:16.702738 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1617a40a-8c8d-4940-b8d0-bc501567c07d","Type":"ContainerDied","Data":"457b526776504a627f85d6090f0f5632e4f4c268eebd51c155fa1b164d7fb908"} Feb 02 22:38:17 crc kubenswrapper[4789]: I0202 22:38:17.724428 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstack-cell1-galera-0" event={"ID":"1617a40a-8c8d-4940-b8d0-bc501567c07d","Type":"ContainerStarted","Data":"ea4d40948204dc903f9a9e83e6f489e2350b964c894b89c83bd5fdfad45e120a"} Feb 02 22:38:17 crc kubenswrapper[4789]: I0202 22:38:17.762897 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=7.762862469 podStartE2EDuration="7.762862469s" podCreationTimestamp="2026-02-02 22:38:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 22:38:17.761779269 +0000 UTC m=+4718.056804328" watchObservedRunningTime="2026-02-02 22:38:17.762862469 +0000 UTC m=+4718.057887538" Feb 02 22:38:20 crc kubenswrapper[4789]: I0202 22:38:20.360517 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 02 22:38:20 crc kubenswrapper[4789]: I0202 22:38:20.360981 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 02 22:38:20 crc kubenswrapper[4789]: I0202 22:38:20.643750 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 02 22:38:21 crc kubenswrapper[4789]: I0202 22:38:21.949682 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 02 22:38:21 crc kubenswrapper[4789]: I0202 22:38:21.949745 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 02 22:38:22 crc kubenswrapper[4789]: I0202 22:38:22.748084 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 02 22:38:22 crc kubenswrapper[4789]: I0202 22:38:22.822362 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 02 22:38:24 crc kubenswrapper[4789]: I0202 22:38:24.344885 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 02 22:38:24 crc kubenswrapper[4789]: I0202 22:38:24.460767 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 02 22:38:27 crc kubenswrapper[4789]: I0202 22:38:27.419725 4789 scope.go:117] "RemoveContainer" containerID="e1882e0fff6a0a98d0a3daff7d66a9d268ab6f6c60c38af019ec759180dddf7d" Feb 02 22:38:27 crc kubenswrapper[4789]: E0202 22:38:27.420487 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:38:29 crc kubenswrapper[4789]: I0202 22:38:29.024943 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-mtvts"] Feb 02 22:38:29 crc kubenswrapper[4789]: E0202 22:38:29.025342 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61ba7d08-5203-4399-9929-8e9aa19e5049" containerName="init" Feb 02 22:38:29 crc kubenswrapper[4789]: I0202 22:38:29.025359 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="61ba7d08-5203-4399-9929-8e9aa19e5049" containerName="init" Feb 02 22:38:29 crc 
kubenswrapper[4789]: E0202 22:38:29.025385 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61ba7d08-5203-4399-9929-8e9aa19e5049" containerName="dnsmasq-dns" Feb 02 22:38:29 crc kubenswrapper[4789]: I0202 22:38:29.025393 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="61ba7d08-5203-4399-9929-8e9aa19e5049" containerName="dnsmasq-dns" Feb 02 22:38:29 crc kubenswrapper[4789]: I0202 22:38:29.025550 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="61ba7d08-5203-4399-9929-8e9aa19e5049" containerName="dnsmasq-dns" Feb 02 22:38:29 crc kubenswrapper[4789]: I0202 22:38:29.026171 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-mtvts" Feb 02 22:38:29 crc kubenswrapper[4789]: I0202 22:38:29.031938 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 02 22:38:29 crc kubenswrapper[4789]: I0202 22:38:29.051098 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-mtvts"] Feb 02 22:38:29 crc kubenswrapper[4789]: I0202 22:38:29.194369 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxlmk\" (UniqueName: \"kubernetes.io/projected/194b3dd6-8ae4-4960-b65e-b14fdb97b6a2-kube-api-access-fxlmk\") pod \"root-account-create-update-mtvts\" (UID: \"194b3dd6-8ae4-4960-b65e-b14fdb97b6a2\") " pod="openstack/root-account-create-update-mtvts" Feb 02 22:38:29 crc kubenswrapper[4789]: I0202 22:38:29.194662 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/194b3dd6-8ae4-4960-b65e-b14fdb97b6a2-operator-scripts\") pod \"root-account-create-update-mtvts\" (UID: \"194b3dd6-8ae4-4960-b65e-b14fdb97b6a2\") " pod="openstack/root-account-create-update-mtvts" Feb 02 22:38:29 crc kubenswrapper[4789]: I0202 22:38:29.296092 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxlmk\" (UniqueName: \"kubernetes.io/projected/194b3dd6-8ae4-4960-b65e-b14fdb97b6a2-kube-api-access-fxlmk\") pod \"root-account-create-update-mtvts\" (UID: \"194b3dd6-8ae4-4960-b65e-b14fdb97b6a2\") " pod="openstack/root-account-create-update-mtvts" Feb 02 22:38:29 crc kubenswrapper[4789]: I0202 22:38:29.296253 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/194b3dd6-8ae4-4960-b65e-b14fdb97b6a2-operator-scripts\") pod \"root-account-create-update-mtvts\" (UID: \"194b3dd6-8ae4-4960-b65e-b14fdb97b6a2\") " pod="openstack/root-account-create-update-mtvts" Feb 02 22:38:29 crc kubenswrapper[4789]: I0202 22:38:29.297699 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/194b3dd6-8ae4-4960-b65e-b14fdb97b6a2-operator-scripts\") pod \"root-account-create-update-mtvts\" (UID: \"194b3dd6-8ae4-4960-b65e-b14fdb97b6a2\") " pod="openstack/root-account-create-update-mtvts" Feb 02 22:38:29 crc kubenswrapper[4789]: I0202 22:38:29.678791 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxlmk\" (UniqueName: \"kubernetes.io/projected/194b3dd6-8ae4-4960-b65e-b14fdb97b6a2-kube-api-access-fxlmk\") pod \"root-account-create-update-mtvts\" (UID: \"194b3dd6-8ae4-4960-b65e-b14fdb97b6a2\") " 
pod="openstack/root-account-create-update-mtvts" Feb 02 22:38:29 crc kubenswrapper[4789]: I0202 22:38:29.944926 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-mtvts" Feb 02 22:38:30 crc kubenswrapper[4789]: I0202 22:38:30.219092 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-mtvts"] Feb 02 22:38:30 crc kubenswrapper[4789]: W0202 22:38:30.223979 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod194b3dd6_8ae4_4960_b65e_b14fdb97b6a2.slice/crio-0a8f6576bf18f62aed93682d7620fc57743040486bde8224e0e760939fa39423 WatchSource:0}: Error finding container 0a8f6576bf18f62aed93682d7620fc57743040486bde8224e0e760939fa39423: Status 404 returned error can't find the container with id 0a8f6576bf18f62aed93682d7620fc57743040486bde8224e0e760939fa39423 Feb 02 22:38:30 crc kubenswrapper[4789]: I0202 22:38:30.843881 4789 generic.go:334] "Generic (PLEG): container finished" podID="194b3dd6-8ae4-4960-b65e-b14fdb97b6a2" containerID="ef77e0d66218310d7c09fa212a256af117992e1d62fd914ab84f166bd5d6bdcb" exitCode=0 Feb 02 22:38:30 crc kubenswrapper[4789]: I0202 22:38:30.843978 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mtvts" event={"ID":"194b3dd6-8ae4-4960-b65e-b14fdb97b6a2","Type":"ContainerDied","Data":"ef77e0d66218310d7c09fa212a256af117992e1d62fd914ab84f166bd5d6bdcb"} Feb 02 22:38:30 crc kubenswrapper[4789]: I0202 22:38:30.844184 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mtvts" event={"ID":"194b3dd6-8ae4-4960-b65e-b14fdb97b6a2","Type":"ContainerStarted","Data":"0a8f6576bf18f62aed93682d7620fc57743040486bde8224e0e760939fa39423"} Feb 02 22:38:32 crc kubenswrapper[4789]: I0202 22:38:32.283431 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-mtvts" Feb 02 22:38:32 crc kubenswrapper[4789]: I0202 22:38:32.449710 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxlmk\" (UniqueName: \"kubernetes.io/projected/194b3dd6-8ae4-4960-b65e-b14fdb97b6a2-kube-api-access-fxlmk\") pod \"194b3dd6-8ae4-4960-b65e-b14fdb97b6a2\" (UID: \"194b3dd6-8ae4-4960-b65e-b14fdb97b6a2\") " Feb 02 22:38:32 crc kubenswrapper[4789]: I0202 22:38:32.449757 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/194b3dd6-8ae4-4960-b65e-b14fdb97b6a2-operator-scripts\") pod \"194b3dd6-8ae4-4960-b65e-b14fdb97b6a2\" (UID: \"194b3dd6-8ae4-4960-b65e-b14fdb97b6a2\") " Feb 02 22:38:32 crc kubenswrapper[4789]: I0202 22:38:32.450653 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/194b3dd6-8ae4-4960-b65e-b14fdb97b6a2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "194b3dd6-8ae4-4960-b65e-b14fdb97b6a2" (UID: "194b3dd6-8ae4-4960-b65e-b14fdb97b6a2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 22:38:32 crc kubenswrapper[4789]: I0202 22:38:32.458264 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/194b3dd6-8ae4-4960-b65e-b14fdb97b6a2-kube-api-access-fxlmk" (OuterVolumeSpecName: "kube-api-access-fxlmk") pod "194b3dd6-8ae4-4960-b65e-b14fdb97b6a2" (UID: "194b3dd6-8ae4-4960-b65e-b14fdb97b6a2"). InnerVolumeSpecName "kube-api-access-fxlmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 22:38:32 crc kubenswrapper[4789]: I0202 22:38:32.551888 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxlmk\" (UniqueName: \"kubernetes.io/projected/194b3dd6-8ae4-4960-b65e-b14fdb97b6a2-kube-api-access-fxlmk\") on node \"crc\" DevicePath \"\"" Feb 02 22:38:32 crc kubenswrapper[4789]: I0202 22:38:32.551963 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/194b3dd6-8ae4-4960-b65e-b14fdb97b6a2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 22:38:32 crc kubenswrapper[4789]: I0202 22:38:32.862336 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mtvts" event={"ID":"194b3dd6-8ae4-4960-b65e-b14fdb97b6a2","Type":"ContainerDied","Data":"0a8f6576bf18f62aed93682d7620fc57743040486bde8224e0e760939fa39423"} Feb 02 22:38:32 crc kubenswrapper[4789]: I0202 22:38:32.862555 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a8f6576bf18f62aed93682d7620fc57743040486bde8224e0e760939fa39423" Feb 02 22:38:32 crc kubenswrapper[4789]: I0202 22:38:32.863144 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-mtvts" Feb 02 22:38:35 crc kubenswrapper[4789]: I0202 22:38:35.577781 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-mtvts"] Feb 02 22:38:35 crc kubenswrapper[4789]: I0202 22:38:35.590977 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-mtvts"] Feb 02 22:38:36 crc kubenswrapper[4789]: I0202 22:38:36.436571 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="194b3dd6-8ae4-4960-b65e-b14fdb97b6a2" path="/var/lib/kubelet/pods/194b3dd6-8ae4-4960-b65e-b14fdb97b6a2/volumes" Feb 02 22:38:40 crc kubenswrapper[4789]: I0202 22:38:40.575236 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-xs9v2"] Feb 02 22:38:40 crc kubenswrapper[4789]: E0202 22:38:40.576111 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="194b3dd6-8ae4-4960-b65e-b14fdb97b6a2" containerName="mariadb-account-create-update" Feb 02 22:38:40 crc kubenswrapper[4789]: I0202 22:38:40.576135 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="194b3dd6-8ae4-4960-b65e-b14fdb97b6a2" containerName="mariadb-account-create-update" Feb 02 22:38:40 crc kubenswrapper[4789]: I0202 22:38:40.576351 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="194b3dd6-8ae4-4960-b65e-b14fdb97b6a2" containerName="mariadb-account-create-update" Feb 02 22:38:40 crc kubenswrapper[4789]: I0202 22:38:40.577136 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xs9v2" Feb 02 22:38:40 crc kubenswrapper[4789]: I0202 22:38:40.579525 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 02 22:38:40 crc kubenswrapper[4789]: I0202 22:38:40.590438 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xs9v2"] Feb 02 22:38:40 crc kubenswrapper[4789]: I0202 22:38:40.684189 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/153c6b69-6597-4b78-bbd1-99f7bf407c2f-operator-scripts\") pod \"root-account-create-update-xs9v2\" (UID: \"153c6b69-6597-4b78-bbd1-99f7bf407c2f\") " pod="openstack/root-account-create-update-xs9v2" Feb 02 22:38:40 crc kubenswrapper[4789]: I0202 22:38:40.684280 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnwl8\" (UniqueName: \"kubernetes.io/projected/153c6b69-6597-4b78-bbd1-99f7bf407c2f-kube-api-access-gnwl8\") pod \"root-account-create-update-xs9v2\" (UID: \"153c6b69-6597-4b78-bbd1-99f7bf407c2f\") " pod="openstack/root-account-create-update-xs9v2" Feb 02 22:38:40 crc kubenswrapper[4789]: I0202 22:38:40.785183 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnwl8\" (UniqueName: \"kubernetes.io/projected/153c6b69-6597-4b78-bbd1-99f7bf407c2f-kube-api-access-gnwl8\") pod \"root-account-create-update-xs9v2\" (UID: \"153c6b69-6597-4b78-bbd1-99f7bf407c2f\") " pod="openstack/root-account-create-update-xs9v2" Feb 02 22:38:40 crc kubenswrapper[4789]: I0202 22:38:40.785571 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/153c6b69-6597-4b78-bbd1-99f7bf407c2f-operator-scripts\") pod \"root-account-create-update-xs9v2\" (UID: \"153c6b69-6597-4b78-bbd1-99f7bf407c2f\") " pod="openstack/root-account-create-update-xs9v2" Feb 02 22:38:40 crc kubenswrapper[4789]: I0202 22:38:40.786807 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/153c6b69-6597-4b78-bbd1-99f7bf407c2f-operator-scripts\") pod \"root-account-create-update-xs9v2\" (UID: \"153c6b69-6597-4b78-bbd1-99f7bf407c2f\") " pod="openstack/root-account-create-update-xs9v2" Feb 02 22:38:40 crc kubenswrapper[4789]: I0202 22:38:40.827639 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnwl8\" (UniqueName: \"kubernetes.io/projected/153c6b69-6597-4b78-bbd1-99f7bf407c2f-kube-api-access-gnwl8\") pod \"root-account-create-update-xs9v2\" (UID: \"153c6b69-6597-4b78-bbd1-99f7bf407c2f\") " pod="openstack/root-account-create-update-xs9v2" Feb 02 22:38:40 crc kubenswrapper[4789]: I0202 22:38:40.903795 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xs9v2" Feb 02 22:38:41 crc kubenswrapper[4789]: I0202 22:38:41.373770 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xs9v2"] Feb 02 22:38:41 crc kubenswrapper[4789]: W0202 22:38:41.379541 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod153c6b69_6597_4b78_bbd1_99f7bf407c2f.slice/crio-4860f430fbafd1270465fc57b37e83f44e8b0615d254e643779903ffc45adefb WatchSource:0}: Error finding container 4860f430fbafd1270465fc57b37e83f44e8b0615d254e643779903ffc45adefb: Status 404 returned error can't find the container with id 4860f430fbafd1270465fc57b37e83f44e8b0615d254e643779903ffc45adefb Feb 02 22:38:41 crc kubenswrapper[4789]: I0202 22:38:41.972465 4789 generic.go:334] "Generic (PLEG): container finished" podID="153c6b69-6597-4b78-bbd1-99f7bf407c2f" containerID="2795663c70bc821bd41ea770940adcaa5b72b4ca8b5d4823cc32028558cf62c8" exitCode=0 Feb 02 22:38:41 crc kubenswrapper[4789]: I0202 22:38:41.972563 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xs9v2" event={"ID":"153c6b69-6597-4b78-bbd1-99f7bf407c2f","Type":"ContainerDied","Data":"2795663c70bc821bd41ea770940adcaa5b72b4ca8b5d4823cc32028558cf62c8"} Feb 02 22:38:41 crc kubenswrapper[4789]: I0202 22:38:41.972808 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xs9v2" event={"ID":"153c6b69-6597-4b78-bbd1-99f7bf407c2f","Type":"ContainerStarted","Data":"4860f430fbafd1270465fc57b37e83f44e8b0615d254e643779903ffc45adefb"} Feb 02 22:38:42 crc kubenswrapper[4789]: I0202 22:38:42.419534 4789 scope.go:117] "RemoveContainer" containerID="e1882e0fff6a0a98d0a3daff7d66a9d268ab6f6c60c38af019ec759180dddf7d" Feb 02 22:38:42 crc kubenswrapper[4789]: E0202 22:38:42.419940 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:38:43 crc kubenswrapper[4789]: I0202 22:38:43.394545 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xs9v2" Feb 02 22:38:43 crc kubenswrapper[4789]: I0202 22:38:43.531633 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/153c6b69-6597-4b78-bbd1-99f7bf407c2f-operator-scripts\") pod \"153c6b69-6597-4b78-bbd1-99f7bf407c2f\" (UID: \"153c6b69-6597-4b78-bbd1-99f7bf407c2f\") " Feb 02 22:38:43 crc kubenswrapper[4789]: I0202 22:38:43.531719 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnwl8\" (UniqueName: \"kubernetes.io/projected/153c6b69-6597-4b78-bbd1-99f7bf407c2f-kube-api-access-gnwl8\") pod \"153c6b69-6597-4b78-bbd1-99f7bf407c2f\" (UID: \"153c6b69-6597-4b78-bbd1-99f7bf407c2f\") " Feb 02 22:38:43 crc kubenswrapper[4789]: I0202 22:38:43.534707 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/153c6b69-6597-4b78-bbd1-99f7bf407c2f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "153c6b69-6597-4b78-bbd1-99f7bf407c2f" (UID: "153c6b69-6597-4b78-bbd1-99f7bf407c2f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 22:38:43 crc kubenswrapper[4789]: I0202 22:38:43.581821 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/153c6b69-6597-4b78-bbd1-99f7bf407c2f-kube-api-access-gnwl8" (OuterVolumeSpecName: "kube-api-access-gnwl8") pod "153c6b69-6597-4b78-bbd1-99f7bf407c2f" (UID: "153c6b69-6597-4b78-bbd1-99f7bf407c2f"). InnerVolumeSpecName "kube-api-access-gnwl8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 22:38:43 crc kubenswrapper[4789]: I0202 22:38:43.634210 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/153c6b69-6597-4b78-bbd1-99f7bf407c2f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 22:38:43 crc kubenswrapper[4789]: I0202 22:38:43.634237 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnwl8\" (UniqueName: \"kubernetes.io/projected/153c6b69-6597-4b78-bbd1-99f7bf407c2f-kube-api-access-gnwl8\") on node \"crc\" DevicePath \"\"" Feb 02 22:38:44 crc kubenswrapper[4789]: I0202 22:38:44.007886 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xs9v2" Feb 02 22:38:44 crc kubenswrapper[4789]: I0202 22:38:44.009391 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xs9v2" event={"ID":"153c6b69-6597-4b78-bbd1-99f7bf407c2f","Type":"ContainerDied","Data":"4860f430fbafd1270465fc57b37e83f44e8b0615d254e643779903ffc45adefb"} Feb 02 22:38:44 crc kubenswrapper[4789]: I0202 22:38:44.009461 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4860f430fbafd1270465fc57b37e83f44e8b0615d254e643779903ffc45adefb" Feb 02 22:38:44 crc kubenswrapper[4789]: I0202 22:38:44.012219 4789 generic.go:334] "Generic (PLEG): container finished" podID="7f79b555-f224-4e21-8650-5deed8215651" containerID="be581ca8cc76aec1f9a6c630b95d069125f8b8746ab5caee53a8829ac239cd5c" exitCode=0 Feb 02 22:38:44 crc kubenswrapper[4789]: I0202 22:38:44.012312 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7f79b555-f224-4e21-8650-5deed8215651","Type":"ContainerDied","Data":"be581ca8cc76aec1f9a6c630b95d069125f8b8746ab5caee53a8829ac239cd5c"} Feb 02 22:38:44 crc kubenswrapper[4789]: I0202 22:38:44.017223 4789 generic.go:334] "Generic (PLEG): container finished" podID="16adfa27-ae3d-4915-8156-03be4321a9a2" containerID="841bf57b7d79c0c452ae515753e4403bf64c0bc9e269612b1b258106049976f8" exitCode=0 Feb 02 22:38:44 crc kubenswrapper[4789]: I0202 22:38:44.017314 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"16adfa27-ae3d-4915-8156-03be4321a9a2","Type":"ContainerDied","Data":"841bf57b7d79c0c452ae515753e4403bf64c0bc9e269612b1b258106049976f8"} Feb 02 22:38:45 crc kubenswrapper[4789]: I0202 22:38:45.026922 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"16adfa27-ae3d-4915-8156-03be4321a9a2","Type":"ContainerStarted","Data":"0a69b1c14ff9b0bf087bfa156127adaff928501940aa9c274f13c36ae34524a4"} Feb 02 22:38:45 crc kubenswrapper[4789]: I0202 22:38:45.027437 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 02 22:38:45 crc kubenswrapper[4789]: I0202 22:38:45.030637 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7f79b555-f224-4e21-8650-5deed8215651","Type":"ContainerStarted","Data":"93b29bf947087bcb165cc1e3b85b44fd25592640c6c35a90022912e0c96de71d"} Feb 02 22:38:45 crc kubenswrapper[4789]: I0202 22:38:45.031139 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:38:45 crc kubenswrapper[4789]: I0202 22:38:45.075277 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.075256525 podStartE2EDuration="38.075256525s" podCreationTimestamp="2026-02-02 22:38:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 22:38:45.06230229 +0000 UTC m=+4745.357327309" watchObservedRunningTime="2026-02-02 22:38:45.075256525 +0000 UTC m=+4745.370281554" Feb 02 22:38:45 crc kubenswrapper[4789]: I0202 22:38:45.092369 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.092353708 podStartE2EDuration="38.092353708s" podCreationTimestamp="2026-02-02 22:38:07 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 22:38:45.089408145 +0000 UTC m=+4745.384433164" watchObservedRunningTime="2026-02-02 22:38:45.092353708 +0000 UTC m=+4745.387378727" Feb 02 22:38:56 crc kubenswrapper[4789]: I0202 22:38:56.420187 4789 scope.go:117] "RemoveContainer" containerID="e1882e0fff6a0a98d0a3daff7d66a9d268ab6f6c60c38af019ec759180dddf7d" Feb 02 22:38:56 crc kubenswrapper[4789]: E0202 22:38:56.421225 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:38:58 crc kubenswrapper[4789]: I0202 22:38:58.956801 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 02 22:38:59 crc kubenswrapper[4789]: I0202 22:38:59.214734 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:39:04 crc kubenswrapper[4789]: I0202 22:39:04.489566 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-dxg9d"] Feb 02 22:39:04 crc kubenswrapper[4789]: E0202 22:39:04.492185 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="153c6b69-6597-4b78-bbd1-99f7bf407c2f" containerName="mariadb-account-create-update" Feb 02 22:39:04 crc kubenswrapper[4789]: I0202 22:39:04.492362 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="153c6b69-6597-4b78-bbd1-99f7bf407c2f" containerName="mariadb-account-create-update" Feb 02 22:39:04 crc kubenswrapper[4789]: I0202 22:39:04.492871 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="153c6b69-6597-4b78-bbd1-99f7bf407c2f" containerName="mariadb-account-create-update" Feb 02 22:39:04 crc kubenswrapper[4789]: I0202 22:39:04.494122 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-dxg9d" Feb 02 22:39:04 crc kubenswrapper[4789]: I0202 22:39:04.507393 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-dxg9d"] Feb 02 22:39:04 crc kubenswrapper[4789]: I0202 22:39:04.589800 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52bd3963-7c35-4c60-ba08-d537d3f8ec1d-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-dxg9d\" (UID: \"52bd3963-7c35-4c60-ba08-d537d3f8ec1d\") " pod="openstack/dnsmasq-dns-5b7946d7b9-dxg9d" Feb 02 22:39:04 crc kubenswrapper[4789]: I0202 22:39:04.589876 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz2xv\" (UniqueName: \"kubernetes.io/projected/52bd3963-7c35-4c60-ba08-d537d3f8ec1d-kube-api-access-sz2xv\") pod \"dnsmasq-dns-5b7946d7b9-dxg9d\" (UID: \"52bd3963-7c35-4c60-ba08-d537d3f8ec1d\") " pod="openstack/dnsmasq-dns-5b7946d7b9-dxg9d" Feb 02 22:39:04 crc kubenswrapper[4789]: I0202 22:39:04.590039 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52bd3963-7c35-4c60-ba08-d537d3f8ec1d-config\") pod \"dnsmasq-dns-5b7946d7b9-dxg9d\" (UID: \"52bd3963-7c35-4c60-ba08-d537d3f8ec1d\") " pod="openstack/dnsmasq-dns-5b7946d7b9-dxg9d" Feb 02 22:39:04 crc kubenswrapper[4789]: I0202 22:39:04.691923 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52bd3963-7c35-4c60-ba08-d537d3f8ec1d-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-dxg9d\" (UID: \"52bd3963-7c35-4c60-ba08-d537d3f8ec1d\") " pod="openstack/dnsmasq-dns-5b7946d7b9-dxg9d" Feb 02 22:39:04 crc kubenswrapper[4789]: I0202 22:39:04.691972 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz2xv\" (UniqueName: \"kubernetes.io/projected/52bd3963-7c35-4c60-ba08-d537d3f8ec1d-kube-api-access-sz2xv\") pod \"dnsmasq-dns-5b7946d7b9-dxg9d\" (UID: \"52bd3963-7c35-4c60-ba08-d537d3f8ec1d\") " pod="openstack/dnsmasq-dns-5b7946d7b9-dxg9d" Feb 02 22:39:04 crc kubenswrapper[4789]: I0202 22:39:04.692042 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52bd3963-7c35-4c60-ba08-d537d3f8ec1d-config\") pod \"dnsmasq-dns-5b7946d7b9-dxg9d\" (UID: \"52bd3963-7c35-4c60-ba08-d537d3f8ec1d\") " pod="openstack/dnsmasq-dns-5b7946d7b9-dxg9d" Feb 02 22:39:04 crc kubenswrapper[4789]: I0202 22:39:04.692932 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52bd3963-7c35-4c60-ba08-d537d3f8ec1d-config\") pod \"dnsmasq-dns-5b7946d7b9-dxg9d\" (UID: \"52bd3963-7c35-4c60-ba08-d537d3f8ec1d\") " pod="openstack/dnsmasq-dns-5b7946d7b9-dxg9d" Feb 02 22:39:04 crc kubenswrapper[4789]: I0202 22:39:04.693063 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52bd3963-7c35-4c60-ba08-d537d3f8ec1d-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-dxg9d\" (UID: \"52bd3963-7c35-4c60-ba08-d537d3f8ec1d\") " pod="openstack/dnsmasq-dns-5b7946d7b9-dxg9d" Feb 02 22:39:04 crc kubenswrapper[4789]: I0202 22:39:04.725343 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz2xv\" (UniqueName: 
\"kubernetes.io/projected/52bd3963-7c35-4c60-ba08-d537d3f8ec1d-kube-api-access-sz2xv\") pod \"dnsmasq-dns-5b7946d7b9-dxg9d\" (UID: \"52bd3963-7c35-4c60-ba08-d537d3f8ec1d\") " pod="openstack/dnsmasq-dns-5b7946d7b9-dxg9d" Feb 02 22:39:04 crc kubenswrapper[4789]: I0202 22:39:04.824939 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-dxg9d" Feb 02 22:39:05 crc kubenswrapper[4789]: I0202 22:39:05.123755 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 22:39:05 crc kubenswrapper[4789]: I0202 22:39:05.272190 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-dxg9d"] Feb 02 22:39:05 crc kubenswrapper[4789]: W0202 22:39:05.272797 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52bd3963_7c35_4c60_ba08_d537d3f8ec1d.slice/crio-5b98a338f23db5ad0201e2477413a690d5bc49f78a8837ca09c36c6f6c893b6a WatchSource:0}: Error finding container 5b98a338f23db5ad0201e2477413a690d5bc49f78a8837ca09c36c6f6c893b6a: Status 404 returned error can't find the container with id 5b98a338f23db5ad0201e2477413a690d5bc49f78a8837ca09c36c6f6c893b6a Feb 02 22:39:05 crc kubenswrapper[4789]: I0202 22:39:05.919845 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 22:39:06 crc kubenswrapper[4789]: I0202 22:39:06.221422 4789 generic.go:334] "Generic (PLEG): container finished" podID="52bd3963-7c35-4c60-ba08-d537d3f8ec1d" containerID="60b0bebdff37c0e9bec28f7e0746b96cbe025f5262c8de1d069e7ba8595d6bc4" exitCode=0 Feb 02 22:39:06 crc kubenswrapper[4789]: I0202 22:39:06.221460 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-dxg9d" event={"ID":"52bd3963-7c35-4c60-ba08-d537d3f8ec1d","Type":"ContainerDied","Data":"60b0bebdff37c0e9bec28f7e0746b96cbe025f5262c8de1d069e7ba8595d6bc4"} Feb 02 22:39:06 crc kubenswrapper[4789]: I0202 22:39:06.221484 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-dxg9d" event={"ID":"52bd3963-7c35-4c60-ba08-d537d3f8ec1d","Type":"ContainerStarted","Data":"5b98a338f23db5ad0201e2477413a690d5bc49f78a8837ca09c36c6f6c893b6a"} Feb 02 22:39:06 crc kubenswrapper[4789]: I0202 22:39:06.861686 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="16adfa27-ae3d-4915-8156-03be4321a9a2" containerName="rabbitmq" containerID="cri-o://0a69b1c14ff9b0bf087bfa156127adaff928501940aa9c274f13c36ae34524a4" gracePeriod=604799 Feb 02 22:39:07 crc kubenswrapper[4789]: I0202 22:39:07.232052 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-dxg9d" event={"ID":"52bd3963-7c35-4c60-ba08-d537d3f8ec1d","Type":"ContainerStarted","Data":"e71533d78d9a4947cc4bff7b06f77378bf709625db20ae14a3ebec3b8277a16f"} Feb 02 22:39:07 crc kubenswrapper[4789]: I0202 22:39:07.232475 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b7946d7b9-dxg9d" Feb 02 22:39:07 crc kubenswrapper[4789]: I0202 22:39:07.745747 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="7f79b555-f224-4e21-8650-5deed8215651" containerName="rabbitmq" containerID="cri-o://93b29bf947087bcb165cc1e3b85b44fd25592640c6c35a90022912e0c96de71d" gracePeriod=604799 Feb 02 22:39:08 crc kubenswrapper[4789]: I0202 
22:39:08.954967 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="16adfa27-ae3d-4915-8156-03be4321a9a2" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.243:5672: connect: connection refused" Feb 02 22:39:09 crc kubenswrapper[4789]: I0202 22:39:09.213105 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="7f79b555-f224-4e21-8650-5deed8215651" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.244:5672: connect: connection refused" Feb 02 22:39:09 crc kubenswrapper[4789]: I0202 22:39:09.420079 4789 scope.go:117] "RemoveContainer" containerID="e1882e0fff6a0a98d0a3daff7d66a9d268ab6f6c60c38af019ec759180dddf7d" Feb 02 22:39:09 crc kubenswrapper[4789]: E0202 22:39:09.420621 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:39:13 crc kubenswrapper[4789]: I0202 22:39:13.288236 4789 generic.go:334] "Generic (PLEG): container finished" podID="16adfa27-ae3d-4915-8156-03be4321a9a2" containerID="0a69b1c14ff9b0bf087bfa156127adaff928501940aa9c274f13c36ae34524a4" exitCode=0 Feb 02 22:39:13 crc kubenswrapper[4789]: I0202 22:39:13.289235 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"16adfa27-ae3d-4915-8156-03be4321a9a2","Type":"ContainerDied","Data":"0a69b1c14ff9b0bf087bfa156127adaff928501940aa9c274f13c36ae34524a4"} Feb 02 22:39:13 crc kubenswrapper[4789]: I0202 22:39:13.500303 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 22:39:13 crc kubenswrapper[4789]: I0202 22:39:13.527147 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b7946d7b9-dxg9d" podStartSLOduration=9.527123749 podStartE2EDuration="9.527123749s" podCreationTimestamp="2026-02-02 22:39:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 22:39:07.260235732 +0000 UTC m=+4767.555260801" watchObservedRunningTime="2026-02-02 22:39:13.527123749 +0000 UTC m=+4773.822148798" Feb 02 22:39:13 crc kubenswrapper[4789]: I0202 22:39:13.650154 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmr42\" (UniqueName: \"kubernetes.io/projected/16adfa27-ae3d-4915-8156-03be4321a9a2-kube-api-access-mmr42\") pod \"16adfa27-ae3d-4915-8156-03be4321a9a2\" (UID: \"16adfa27-ae3d-4915-8156-03be4321a9a2\") " Feb 02 22:39:13 crc kubenswrapper[4789]: I0202 22:39:13.651688 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/16adfa27-ae3d-4915-8156-03be4321a9a2-rabbitmq-plugins\") pod \"16adfa27-ae3d-4915-8156-03be4321a9a2\" (UID: \"16adfa27-ae3d-4915-8156-03be4321a9a2\") " Feb 02 22:39:13 crc kubenswrapper[4789]: I0202 22:39:13.651727 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/16adfa27-ae3d-4915-8156-03be4321a9a2-erlang-cookie-secret\") pod \"16adfa27-ae3d-4915-8156-03be4321a9a2\" (UID: \"16adfa27-ae3d-4915-8156-03be4321a9a2\") " Feb 02 22:39:13 crc kubenswrapper[4789]: I0202 22:39:13.651764 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/16adfa27-ae3d-4915-8156-03be4321a9a2-rabbitmq-confd\") pod \"16adfa27-ae3d-4915-8156-03be4321a9a2\" (UID: \"16adfa27-ae3d-4915-8156-03be4321a9a2\") " Feb 02 22:39:13 crc kubenswrapper[4789]: I0202 22:39:13.651801 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/16adfa27-ae3d-4915-8156-03be4321a9a2-pod-info\") pod \"16adfa27-ae3d-4915-8156-03be4321a9a2\" (UID: \"16adfa27-ae3d-4915-8156-03be4321a9a2\") " Feb 02 22:39:13 crc kubenswrapper[4789]: I0202 22:39:13.652663 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16adfa27-ae3d-4915-8156-03be4321a9a2-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "16adfa27-ae3d-4915-8156-03be4321a9a2" (UID: "16adfa27-ae3d-4915-8156-03be4321a9a2"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 22:39:13 crc kubenswrapper[4789]: I0202 22:39:13.653466 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9eb619b9-a0b2-4538-b62e-02c1541dc8bf\") pod \"16adfa27-ae3d-4915-8156-03be4321a9a2\" (UID: \"16adfa27-ae3d-4915-8156-03be4321a9a2\") " Feb 02 22:39:13 crc kubenswrapper[4789]: I0202 22:39:13.653653 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/16adfa27-ae3d-4915-8156-03be4321a9a2-server-conf\") pod \"16adfa27-ae3d-4915-8156-03be4321a9a2\" (UID: \"16adfa27-ae3d-4915-8156-03be4321a9a2\") " Feb 02 22:39:13 crc kubenswrapper[4789]: I0202 22:39:13.653757 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/16adfa27-ae3d-4915-8156-03be4321a9a2-rabbitmq-erlang-cookie\") pod \"16adfa27-ae3d-4915-8156-03be4321a9a2\" (UID: \"16adfa27-ae3d-4915-8156-03be4321a9a2\") " Feb 02 22:39:13 crc kubenswrapper[4789]: I0202 22:39:13.653877 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/16adfa27-ae3d-4915-8156-03be4321a9a2-plugins-conf\") pod \"16adfa27-ae3d-4915-8156-03be4321a9a2\" (UID: \"16adfa27-ae3d-4915-8156-03be4321a9a2\") " Feb 02 22:39:13 crc kubenswrapper[4789]: I0202 22:39:13.654535 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16adfa27-ae3d-4915-8156-03be4321a9a2-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "16adfa27-ae3d-4915-8156-03be4321a9a2" (UID: "16adfa27-ae3d-4915-8156-03be4321a9a2"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 22:39:13 crc kubenswrapper[4789]: I0202 22:39:13.654942 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16adfa27-ae3d-4915-8156-03be4321a9a2-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "16adfa27-ae3d-4915-8156-03be4321a9a2" (UID: "16adfa27-ae3d-4915-8156-03be4321a9a2"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 22:39:13 crc kubenswrapper[4789]: I0202 22:39:13.655014 4789 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/16adfa27-ae3d-4915-8156-03be4321a9a2-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 02 22:39:13 crc kubenswrapper[4789]: I0202 22:39:13.655126 4789 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/16adfa27-ae3d-4915-8156-03be4321a9a2-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 02 22:39:13 crc kubenswrapper[4789]: I0202 22:39:13.682303 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/16adfa27-ae3d-4915-8156-03be4321a9a2-pod-info" (OuterVolumeSpecName: "pod-info") pod "16adfa27-ae3d-4915-8156-03be4321a9a2" (UID: "16adfa27-ae3d-4915-8156-03be4321a9a2"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 02 22:39:13 crc kubenswrapper[4789]: I0202 22:39:13.682775 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16adfa27-ae3d-4915-8156-03be4321a9a2-server-conf" (OuterVolumeSpecName: "server-conf") pod "16adfa27-ae3d-4915-8156-03be4321a9a2" (UID: "16adfa27-ae3d-4915-8156-03be4321a9a2"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 22:39:13 crc kubenswrapper[4789]: I0202 22:39:13.685130 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16adfa27-ae3d-4915-8156-03be4321a9a2-kube-api-access-mmr42" (OuterVolumeSpecName: "kube-api-access-mmr42") pod "16adfa27-ae3d-4915-8156-03be4321a9a2" (UID: "16adfa27-ae3d-4915-8156-03be4321a9a2"). InnerVolumeSpecName "kube-api-access-mmr42". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 22:39:13 crc kubenswrapper[4789]: I0202 22:39:13.685323 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16adfa27-ae3d-4915-8156-03be4321a9a2-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "16adfa27-ae3d-4915-8156-03be4321a9a2" (UID: "16adfa27-ae3d-4915-8156-03be4321a9a2"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 22:39:13 crc kubenswrapper[4789]: I0202 22:39:13.692294 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9eb619b9-a0b2-4538-b62e-02c1541dc8bf" (OuterVolumeSpecName: "persistence") pod "16adfa27-ae3d-4915-8156-03be4321a9a2" (UID: "16adfa27-ae3d-4915-8156-03be4321a9a2"). InnerVolumeSpecName "pvc-9eb619b9-a0b2-4538-b62e-02c1541dc8bf". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 02 22:39:13 crc kubenswrapper[4789]: I0202 22:39:13.736502 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16adfa27-ae3d-4915-8156-03be4321a9a2-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "16adfa27-ae3d-4915-8156-03be4321a9a2" (UID: "16adfa27-ae3d-4915-8156-03be4321a9a2"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 22:39:13 crc kubenswrapper[4789]: I0202 22:39:13.757148 4789 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-9eb619b9-a0b2-4538-b62e-02c1541dc8bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9eb619b9-a0b2-4538-b62e-02c1541dc8bf\") on node \"crc\" " Feb 02 22:39:13 crc kubenswrapper[4789]: I0202 22:39:13.757528 4789 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/16adfa27-ae3d-4915-8156-03be4321a9a2-server-conf\") on node \"crc\" DevicePath \"\"" Feb 02 22:39:13 crc kubenswrapper[4789]: I0202 22:39:13.757844 4789 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/16adfa27-ae3d-4915-8156-03be4321a9a2-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 02 22:39:13 crc kubenswrapper[4789]: I0202 22:39:13.758103 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmr42\" (UniqueName: \"kubernetes.io/projected/16adfa27-ae3d-4915-8156-03be4321a9a2-kube-api-access-mmr42\") on node \"crc\" DevicePath \"\"" Feb 02 22:39:13 crc kubenswrapper[4789]: I0202 22:39:13.758289 4789 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/16adfa27-ae3d-4915-8156-03be4321a9a2-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 02 22:39:13 crc kubenswrapper[4789]: I0202 22:39:13.758422 4789 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/16adfa27-ae3d-4915-8156-03be4321a9a2-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 02 22:39:13 crc kubenswrapper[4789]: I0202 22:39:13.758546 4789 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/16adfa27-ae3d-4915-8156-03be4321a9a2-pod-info\") on node \"crc\" DevicePath \"\"" Feb 02 22:39:13 crc kubenswrapper[4789]: I0202 22:39:13.776727 4789 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 02 22:39:13 crc kubenswrapper[4789]: I0202 22:39:13.776889 4789 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-9eb619b9-a0b2-4538-b62e-02c1541dc8bf" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9eb619b9-a0b2-4538-b62e-02c1541dc8bf") on node "crc" Feb 02 22:39:13 crc kubenswrapper[4789]: I0202 22:39:13.862188 4789 reconciler_common.go:293] "Volume detached for volume \"pvc-9eb619b9-a0b2-4538-b62e-02c1541dc8bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9eb619b9-a0b2-4538-b62e-02c1541dc8bf\") on node \"crc\" DevicePath \"\"" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.274441 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.322293 4789 generic.go:334] "Generic (PLEG): container finished" podID="7f79b555-f224-4e21-8650-5deed8215651" containerID="93b29bf947087bcb165cc1e3b85b44fd25592640c6c35a90022912e0c96de71d" exitCode=0 Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.322401 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7f79b555-f224-4e21-8650-5deed8215651","Type":"ContainerDied","Data":"93b29bf947087bcb165cc1e3b85b44fd25592640c6c35a90022912e0c96de71d"} Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.322441 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7f79b555-f224-4e21-8650-5deed8215651","Type":"ContainerDied","Data":"f8fb85908d3c92d3f07c97a3161c95bbc61817efb93bb50c9298ce80a5ce5303"} Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.322730 4789 scope.go:117] "RemoveContainer" containerID="93b29bf947087bcb165cc1e3b85b44fd25592640c6c35a90022912e0c96de71d" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.322947 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.332421 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"16adfa27-ae3d-4915-8156-03be4321a9a2","Type":"ContainerDied","Data":"e5ab25a5dab04210fe7e09fc4ab5273bbcab8cc5f3e1ab18ef3f7d3109775cea"} Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.332635 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.371127 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fn6fw\" (UniqueName: \"kubernetes.io/projected/7f79b555-f224-4e21-8650-5deed8215651-kube-api-access-fn6fw\") pod \"7f79b555-f224-4e21-8650-5deed8215651\" (UID: \"7f79b555-f224-4e21-8650-5deed8215651\") " Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.371442 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7f79b555-f224-4e21-8650-5deed8215651-rabbitmq-confd\") pod \"7f79b555-f224-4e21-8650-5deed8215651\" (UID: \"7f79b555-f224-4e21-8650-5deed8215651\") " Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.371564 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7f79b555-f224-4e21-8650-5deed8215651-plugins-conf\") pod \"7f79b555-f224-4e21-8650-5deed8215651\" (UID: \"7f79b555-f224-4e21-8650-5deed8215651\") " Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.371696 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7f79b555-f224-4e21-8650-5deed8215651-erlang-cookie-secret\") pod \"7f79b555-f224-4e21-8650-5deed8215651\" (UID: \"7f79b555-f224-4e21-8650-5deed8215651\") " Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.371845 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7f79b555-f224-4e21-8650-5deed8215651-rabbitmq-erlang-cookie\") pod \"7f79b555-f224-4e21-8650-5deed8215651\" (UID: 
\"7f79b555-f224-4e21-8650-5deed8215651\") " Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.372098 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-01b9bde9-4c97-4e47-bd8b-0247431a5880\") pod \"7f79b555-f224-4e21-8650-5deed8215651\" (UID: \"7f79b555-f224-4e21-8650-5deed8215651\") " Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.372229 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7f79b555-f224-4e21-8650-5deed8215651-rabbitmq-plugins\") pod \"7f79b555-f224-4e21-8650-5deed8215651\" (UID: \"7f79b555-f224-4e21-8650-5deed8215651\") " Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.372348 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7f79b555-f224-4e21-8650-5deed8215651-pod-info\") pod \"7f79b555-f224-4e21-8650-5deed8215651\" (UID: \"7f79b555-f224-4e21-8650-5deed8215651\") " Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.372452 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7f79b555-f224-4e21-8650-5deed8215651-server-conf\") pod \"7f79b555-f224-4e21-8650-5deed8215651\" (UID: \"7f79b555-f224-4e21-8650-5deed8215651\") " Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.379430 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f79b555-f224-4e21-8650-5deed8215651-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "7f79b555-f224-4e21-8650-5deed8215651" (UID: "7f79b555-f224-4e21-8650-5deed8215651"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.379614 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f79b555-f224-4e21-8650-5deed8215651-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "7f79b555-f224-4e21-8650-5deed8215651" (UID: "7f79b555-f224-4e21-8650-5deed8215651"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.380097 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f79b555-f224-4e21-8650-5deed8215651-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "7f79b555-f224-4e21-8650-5deed8215651" (UID: "7f79b555-f224-4e21-8650-5deed8215651"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.380328 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f79b555-f224-4e21-8650-5deed8215651-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "7f79b555-f224-4e21-8650-5deed8215651" (UID: "7f79b555-f224-4e21-8650-5deed8215651"). InnerVolumeSpecName "rabbitmq-plugins". 
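PluginName "kubernetes.io/empty-dir", VolumeGidValue ""

The teardown entries above trace the volume reconciler's fixed ordering for a deleted pod: per-volume UnmountVolume.TearDown runs first, a device-level UnmountDevice follows where the plugin has one, and only then is "Volume detached" recorded with an empty DevicePath. A minimal Go sketch of that ordering under stated assumptions follows: volumeState and the helper functions are hypothetical stand-ins for the kubelet's operation executor, not its real API.

package main

import "fmt"

// volumeState is an assumed, pared-down stand-in for the per-volume
// state the reconciler tracks; it is not the real kubelet type.
type volumeState struct {
    name                  string
    stageUnstageSupported bool // from the CSI driver's NodeGetCapabilities
}

// unmountPod models the per-pod step the log reports as
// "UnmountVolume.TearDown succeeded".
func unmountPod(v volumeState) error {
    fmt.Printf("TearDown %s\n", v.name)
    return nil
}

// unmountDevice models the device-global step. For a CSI driver that
// does not advertise STAGE_UNSTAGE_VOLUME, the attacher logs
// "Skipping UnmountDevice..." and the operation still succeeds,
// exactly as the csi_attacher.go:630 lines above show.
func unmountDevice(v volumeState) error {
    if !v.stageUnstageSupported {
        fmt.Printf("skipping unstage for %s\n", v.name)
        return nil
    }
    fmt.Printf("NodeUnstageVolume(%s)\n", v.name)
    return nil
}

func teardown(v volumeState) error {
    if err := unmountPod(v); err != nil {
        return err
    }
    if err := unmountDevice(v); err != nil {
        return err
    }
    // Only after both steps does the reconciler report
    // "Volume detached ... DevicePath \"\"".
    fmt.Printf("Volume detached for %s\n", v.name)
    return nil
}

func main() {
    _ = teardown(volumeState{name: "pvc-01b9bde9", stageUnstageSupported: false})
}
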
Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.381377 4789 scope.go:117] "RemoveContainer" containerID="be581ca8cc76aec1f9a6c630b95d069125f8b8746ab5caee53a8829ac239cd5c"
Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.383746 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.385918 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/7f79b555-f224-4e21-8650-5deed8215651-pod-info" (OuterVolumeSpecName: "pod-info") pod "7f79b555-f224-4e21-8650-5deed8215651" (UID: "7f79b555-f224-4e21-8650-5deed8215651"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.397540 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.402749 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 02 22:39:14 crc kubenswrapper[4789]: E0202 22:39:14.403041 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f79b555-f224-4e21-8650-5deed8215651" containerName="setup-container"
Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.403053 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f79b555-f224-4e21-8650-5deed8215651" containerName="setup-container"
Feb 02 22:39:14 crc kubenswrapper[4789]: E0202 22:39:14.403062 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16adfa27-ae3d-4915-8156-03be4321a9a2" containerName="rabbitmq"
Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.403070 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="16adfa27-ae3d-4915-8156-03be4321a9a2" containerName="rabbitmq"
Feb 02 22:39:14 crc kubenswrapper[4789]: E0202 22:39:14.403077 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16adfa27-ae3d-4915-8156-03be4321a9a2" containerName="setup-container"
Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.403083 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="16adfa27-ae3d-4915-8156-03be4321a9a2" containerName="setup-container"
Feb 02 22:39:14 crc kubenswrapper[4789]: E0202 22:39:14.403100 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f79b555-f224-4e21-8650-5deed8215651" containerName="rabbitmq"
Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.403108 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f79b555-f224-4e21-8650-5deed8215651" containerName="rabbitmq"
Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.403244 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="16adfa27-ae3d-4915-8156-03be4321a9a2" containerName="rabbitmq"
Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.403260 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f79b555-f224-4e21-8650-5deed8215651" containerName="rabbitmq"
Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.403972 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.404702 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f79b555-f224-4e21-8650-5deed8215651-kube-api-access-fn6fw" (OuterVolumeSpecName: "kube-api-access-fn6fw") pod "7f79b555-f224-4e21-8650-5deed8215651" (UID: "7f79b555-f224-4e21-8650-5deed8215651"). InnerVolumeSpecName "kube-api-access-fn6fw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.409877 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-01b9bde9-4c97-4e47-bd8b-0247431a5880" (OuterVolumeSpecName: "persistence") pod "7f79b555-f224-4e21-8650-5deed8215651" (UID: "7f79b555-f224-4e21-8650-5deed8215651"). InnerVolumeSpecName "pvc-01b9bde9-4c97-4e47-bd8b-0247431a5880". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.413104 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.413537 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.413664 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.413806 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f79b555-f224-4e21-8650-5deed8215651-server-conf" (OuterVolumeSpecName: "server-conf") pod "7f79b555-f224-4e21-8650-5deed8215651" (UID: "7f79b555-f224-4e21-8650-5deed8215651"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.414132 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-xv9ct" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.415778 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.434488 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16adfa27-ae3d-4915-8156-03be4321a9a2" path="/var/lib/kubelet/pods/16adfa27-ae3d-4915-8156-03be4321a9a2/volumes" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.462460 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.474384 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6492b19f-f809-4667-b7d7-94ee3bfaa669-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6492b19f-f809-4667-b7d7-94ee3bfaa669\") " pod="openstack/rabbitmq-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.474429 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6492b19f-f809-4667-b7d7-94ee3bfaa669-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6492b19f-f809-4667-b7d7-94ee3bfaa669\") " pod="openstack/rabbitmq-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.474467 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5gn9\" (UniqueName: \"kubernetes.io/projected/6492b19f-f809-4667-b7d7-94ee3bfaa669-kube-api-access-b5gn9\") pod \"rabbitmq-server-0\" (UID: \"6492b19f-f809-4667-b7d7-94ee3bfaa669\") " pod="openstack/rabbitmq-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.474708 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6492b19f-f809-4667-b7d7-94ee3bfaa669-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6492b19f-f809-4667-b7d7-94ee3bfaa669\") " pod="openstack/rabbitmq-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.474802 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6492b19f-f809-4667-b7d7-94ee3bfaa669-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6492b19f-f809-4667-b7d7-94ee3bfaa669\") " pod="openstack/rabbitmq-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.474891 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6492b19f-f809-4667-b7d7-94ee3bfaa669-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6492b19f-f809-4667-b7d7-94ee3bfaa669\") " pod="openstack/rabbitmq-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.475024 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6492b19f-f809-4667-b7d7-94ee3bfaa669-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6492b19f-f809-4667-b7d7-94ee3bfaa669\") " pod="openstack/rabbitmq-server-0" Feb 02 22:39:14 
crc kubenswrapper[4789]: I0202 22:39:14.475103 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9eb619b9-a0b2-4538-b62e-02c1541dc8bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9eb619b9-a0b2-4538-b62e-02c1541dc8bf\") pod \"rabbitmq-server-0\" (UID: \"6492b19f-f809-4667-b7d7-94ee3bfaa669\") " pod="openstack/rabbitmq-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.475148 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6492b19f-f809-4667-b7d7-94ee3bfaa669-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6492b19f-f809-4667-b7d7-94ee3bfaa669\") " pod="openstack/rabbitmq-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.475241 4789 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7f79b555-f224-4e21-8650-5deed8215651-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.475289 4789 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-01b9bde9-4c97-4e47-bd8b-0247431a5880\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-01b9bde9-4c97-4e47-bd8b-0247431a5880\") on node \"crc\" " Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.475309 4789 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7f79b555-f224-4e21-8650-5deed8215651-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.475324 4789 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7f79b555-f224-4e21-8650-5deed8215651-pod-info\") on node \"crc\" DevicePath \"\"" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.475337 4789 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7f79b555-f224-4e21-8650-5deed8215651-server-conf\") on node \"crc\" DevicePath \"\"" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.475349 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fn6fw\" (UniqueName: \"kubernetes.io/projected/7f79b555-f224-4e21-8650-5deed8215651-kube-api-access-fn6fw\") on node \"crc\" DevicePath \"\"" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.475363 4789 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7f79b555-f224-4e21-8650-5deed8215651-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.475375 4789 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7f79b555-f224-4e21-8650-5deed8215651-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.477970 4789 scope.go:117] "RemoveContainer" containerID="93b29bf947087bcb165cc1e3b85b44fd25592640c6c35a90022912e0c96de71d" Feb 02 22:39:14 crc kubenswrapper[4789]: E0202 22:39:14.479331 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93b29bf947087bcb165cc1e3b85b44fd25592640c6c35a90022912e0c96de71d\": container with ID starting with 93b29bf947087bcb165cc1e3b85b44fd25592640c6c35a90022912e0c96de71d not found: 
ID does not exist" containerID="93b29bf947087bcb165cc1e3b85b44fd25592640c6c35a90022912e0c96de71d" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.479368 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93b29bf947087bcb165cc1e3b85b44fd25592640c6c35a90022912e0c96de71d"} err="failed to get container status \"93b29bf947087bcb165cc1e3b85b44fd25592640c6c35a90022912e0c96de71d\": rpc error: code = NotFound desc = could not find container \"93b29bf947087bcb165cc1e3b85b44fd25592640c6c35a90022912e0c96de71d\": container with ID starting with 93b29bf947087bcb165cc1e3b85b44fd25592640c6c35a90022912e0c96de71d not found: ID does not exist" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.479414 4789 scope.go:117] "RemoveContainer" containerID="be581ca8cc76aec1f9a6c630b95d069125f8b8746ab5caee53a8829ac239cd5c" Feb 02 22:39:14 crc kubenswrapper[4789]: E0202 22:39:14.479803 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be581ca8cc76aec1f9a6c630b95d069125f8b8746ab5caee53a8829ac239cd5c\": container with ID starting with be581ca8cc76aec1f9a6c630b95d069125f8b8746ab5caee53a8829ac239cd5c not found: ID does not exist" containerID="be581ca8cc76aec1f9a6c630b95d069125f8b8746ab5caee53a8829ac239cd5c" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.479842 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be581ca8cc76aec1f9a6c630b95d069125f8b8746ab5caee53a8829ac239cd5c"} err="failed to get container status \"be581ca8cc76aec1f9a6c630b95d069125f8b8746ab5caee53a8829ac239cd5c\": rpc error: code = NotFound desc = could not find container \"be581ca8cc76aec1f9a6c630b95d069125f8b8746ab5caee53a8829ac239cd5c\": container with ID starting with be581ca8cc76aec1f9a6c630b95d069125f8b8746ab5caee53a8829ac239cd5c not found: ID does not exist" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.479870 4789 scope.go:117] "RemoveContainer" containerID="0a69b1c14ff9b0bf087bfa156127adaff928501940aa9c274f13c36ae34524a4" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.498119 4789 scope.go:117] "RemoveContainer" containerID="841bf57b7d79c0c452ae515753e4403bf64c0bc9e269612b1b258106049976f8" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.501791 4789 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.501957 4789 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-01b9bde9-4c97-4e47-bd8b-0247431a5880" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-01b9bde9-4c97-4e47-bd8b-0247431a5880") on node "crc" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.518323 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f79b555-f224-4e21-8650-5deed8215651-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "7f79b555-f224-4e21-8650-5deed8215651" (UID: "7f79b555-f224-4e21-8650-5deed8215651"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.576419 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6492b19f-f809-4667-b7d7-94ee3bfaa669-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6492b19f-f809-4667-b7d7-94ee3bfaa669\") " pod="openstack/rabbitmq-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.576484 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6492b19f-f809-4667-b7d7-94ee3bfaa669-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6492b19f-f809-4667-b7d7-94ee3bfaa669\") " pod="openstack/rabbitmq-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.576541 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5gn9\" (UniqueName: \"kubernetes.io/projected/6492b19f-f809-4667-b7d7-94ee3bfaa669-kube-api-access-b5gn9\") pod \"rabbitmq-server-0\" (UID: \"6492b19f-f809-4667-b7d7-94ee3bfaa669\") " pod="openstack/rabbitmq-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.576570 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6492b19f-f809-4667-b7d7-94ee3bfaa669-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6492b19f-f809-4667-b7d7-94ee3bfaa669\") " pod="openstack/rabbitmq-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.576627 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6492b19f-f809-4667-b7d7-94ee3bfaa669-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6492b19f-f809-4667-b7d7-94ee3bfaa669\") " pod="openstack/rabbitmq-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.576663 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6492b19f-f809-4667-b7d7-94ee3bfaa669-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6492b19f-f809-4667-b7d7-94ee3bfaa669\") " pod="openstack/rabbitmq-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.576712 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6492b19f-f809-4667-b7d7-94ee3bfaa669-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6492b19f-f809-4667-b7d7-94ee3bfaa669\") " pod="openstack/rabbitmq-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.576760 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9eb619b9-a0b2-4538-b62e-02c1541dc8bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9eb619b9-a0b2-4538-b62e-02c1541dc8bf\") pod \"rabbitmq-server-0\" (UID: \"6492b19f-f809-4667-b7d7-94ee3bfaa669\") " pod="openstack/rabbitmq-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.576796 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6492b19f-f809-4667-b7d7-94ee3bfaa669-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6492b19f-f809-4667-b7d7-94ee3bfaa669\") " pod="openstack/rabbitmq-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.576847 4789 reconciler_common.go:293] "Volume detached for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7f79b555-f224-4e21-8650-5deed8215651-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.576860 4789 reconciler_common.go:293] "Volume detached for volume \"pvc-01b9bde9-4c97-4e47-bd8b-0247431a5880\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-01b9bde9-4c97-4e47-bd8b-0247431a5880\") on node \"crc\" DevicePath \"\"" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.577992 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6492b19f-f809-4667-b7d7-94ee3bfaa669-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6492b19f-f809-4667-b7d7-94ee3bfaa669\") " pod="openstack/rabbitmq-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.578324 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6492b19f-f809-4667-b7d7-94ee3bfaa669-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6492b19f-f809-4667-b7d7-94ee3bfaa669\") " pod="openstack/rabbitmq-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.578408 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6492b19f-f809-4667-b7d7-94ee3bfaa669-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6492b19f-f809-4667-b7d7-94ee3bfaa669\") " pod="openstack/rabbitmq-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.578615 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6492b19f-f809-4667-b7d7-94ee3bfaa669-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6492b19f-f809-4667-b7d7-94ee3bfaa669\") " pod="openstack/rabbitmq-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.581256 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6492b19f-f809-4667-b7d7-94ee3bfaa669-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6492b19f-f809-4667-b7d7-94ee3bfaa669\") " pod="openstack/rabbitmq-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.581613 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6492b19f-f809-4667-b7d7-94ee3bfaa669-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6492b19f-f809-4667-b7d7-94ee3bfaa669\") " pod="openstack/rabbitmq-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.581834 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6492b19f-f809-4667-b7d7-94ee3bfaa669-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6492b19f-f809-4667-b7d7-94ee3bfaa669\") " pod="openstack/rabbitmq-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.582818 4789 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
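The csi_attacher lines just above are the mount-side counterpart of the earlier "Skipping UnmountDevice" message: MountDevice normally issues the CSI NodeStageVolume call at a device-global path shared by every pod using the volume, but a driver that does not advertise the STAGE_UNSTAGE_VOLUME node capability, like this hostpath provisioner, has no staging step, so the operation is logged as skipped yet still reported as "MountVolume.MountDevice succeeded" before the per-pod MountVolume.SetUp runs. A rough sketch of that gate, with a plain map standing in for the NodeGetCapabilities response:

package main

import "fmt"

// nodeCaps stands in for the capability set a CSI node plugin returns
// from NodeGetCapabilities; using a map here is an illustrative assumption.
type nodeCaps map[string]bool

// mountDevice mimics the shape of the attacher's decision; it is a
// sketch, not the kubelet's csi_attacher implementation.
func mountDevice(caps nodeCaps, volumeID, globalMountPath string) error {
    if !caps["STAGE_UNSTAGE_VOLUME"] {
        // Matches "attacher.MountDevice STAGE_UNSTAGE_VOLUME capability
        // not set. Skipping MountDevice..." followed by a success record.
        fmt.Println("skipping MountDevice for", volumeID)
        return nil
    }
    fmt.Printf("NodeStageVolume(%s) at %s\n", volumeID, globalMountPath)
    return nil
}

func main() {
    caps := nodeCaps{} // kubevirt.io.hostpath-provisioner advertises no staging
    _ = mountDevice(caps, "pvc-9eb619b9-a0b2-4538-b62e-02c1541dc8bf",
        "/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/<hash>/globalmount")
}
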
Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.582861 4789 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9eb619b9-a0b2-4538-b62e-02c1541dc8bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9eb619b9-a0b2-4538-b62e-02c1541dc8bf\") pod \"rabbitmq-server-0\" (UID: \"6492b19f-f809-4667-b7d7-94ee3bfaa669\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/50f3ec5928e7fad23fc11727c84c4bdcaec865a8125c7a7bb074d3bc349e942b/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.597861 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5gn9\" (UniqueName: \"kubernetes.io/projected/6492b19f-f809-4667-b7d7-94ee3bfaa669-kube-api-access-b5gn9\") pod \"rabbitmq-server-0\" (UID: \"6492b19f-f809-4667-b7d7-94ee3bfaa669\") " pod="openstack/rabbitmq-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.612123 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9eb619b9-a0b2-4538-b62e-02c1541dc8bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9eb619b9-a0b2-4538-b62e-02c1541dc8bf\") pod \"rabbitmq-server-0\" (UID: \"6492b19f-f809-4667-b7d7-94ee3bfaa669\") " pod="openstack/rabbitmq-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.661117 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.677920 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.683423 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.684574 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.689932 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.691205 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.691389 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.691491 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.691612 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-m6jzt" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.692836 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.776918 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.779101 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f37562ff-5ea3-4230-9c68-09d330bb64c8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f37562ff-5ea3-4230-9c68-09d330bb64c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.779150 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f37562ff-5ea3-4230-9c68-09d330bb64c8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f37562ff-5ea3-4230-9c68-09d330bb64c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.779174 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsm5g\" (UniqueName: \"kubernetes.io/projected/f37562ff-5ea3-4230-9c68-09d330bb64c8-kube-api-access-bsm5g\") pod \"rabbitmq-cell1-server-0\" (UID: \"f37562ff-5ea3-4230-9c68-09d330bb64c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.779249 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f37562ff-5ea3-4230-9c68-09d330bb64c8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f37562ff-5ea3-4230-9c68-09d330bb64c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.779270 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-01b9bde9-4c97-4e47-bd8b-0247431a5880\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-01b9bde9-4c97-4e47-bd8b-0247431a5880\") pod \"rabbitmq-cell1-server-0\" (UID: \"f37562ff-5ea3-4230-9c68-09d330bb64c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.779288 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f37562ff-5ea3-4230-9c68-09d330bb64c8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f37562ff-5ea3-4230-9c68-09d330bb64c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.779302 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f37562ff-5ea3-4230-9c68-09d330bb64c8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f37562ff-5ea3-4230-9c68-09d330bb64c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.779319 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f37562ff-5ea3-4230-9c68-09d330bb64c8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f37562ff-5ea3-4230-9c68-09d330bb64c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.779335 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/f37562ff-5ea3-4230-9c68-09d330bb64c8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f37562ff-5ea3-4230-9c68-09d330bb64c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.825815 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b7946d7b9-dxg9d" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.882302 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f37562ff-5ea3-4230-9c68-09d330bb64c8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f37562ff-5ea3-4230-9c68-09d330bb64c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.882342 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-01b9bde9-4c97-4e47-bd8b-0247431a5880\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-01b9bde9-4c97-4e47-bd8b-0247431a5880\") pod \"rabbitmq-cell1-server-0\" (UID: \"f37562ff-5ea3-4230-9c68-09d330bb64c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.882365 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f37562ff-5ea3-4230-9c68-09d330bb64c8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f37562ff-5ea3-4230-9c68-09d330bb64c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.882379 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f37562ff-5ea3-4230-9c68-09d330bb64c8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f37562ff-5ea3-4230-9c68-09d330bb64c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.882398 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f37562ff-5ea3-4230-9c68-09d330bb64c8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f37562ff-5ea3-4230-9c68-09d330bb64c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.882417 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f37562ff-5ea3-4230-9c68-09d330bb64c8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f37562ff-5ea3-4230-9c68-09d330bb64c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.882466 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f37562ff-5ea3-4230-9c68-09d330bb64c8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f37562ff-5ea3-4230-9c68-09d330bb64c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.882499 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f37562ff-5ea3-4230-9c68-09d330bb64c8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f37562ff-5ea3-4230-9c68-09d330bb64c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.882514 4789 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsm5g\" (UniqueName: \"kubernetes.io/projected/f37562ff-5ea3-4230-9c68-09d330bb64c8-kube-api-access-bsm5g\") pod \"rabbitmq-cell1-server-0\" (UID: \"f37562ff-5ea3-4230-9c68-09d330bb64c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.886929 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f37562ff-5ea3-4230-9c68-09d330bb64c8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f37562ff-5ea3-4230-9c68-09d330bb64c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.887419 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f37562ff-5ea3-4230-9c68-09d330bb64c8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f37562ff-5ea3-4230-9c68-09d330bb64c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.888870 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f37562ff-5ea3-4230-9c68-09d330bb64c8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f37562ff-5ea3-4230-9c68-09d330bb64c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.890213 4789 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.890243 4789 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-01b9bde9-4c97-4e47-bd8b-0247431a5880\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-01b9bde9-4c97-4e47-bd8b-0247431a5880\") pod \"rabbitmq-cell1-server-0\" (UID: \"f37562ff-5ea3-4230-9c68-09d330bb64c8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3ce274a3d6ff6b0985238182f74ff55317fd054224578c6e2ba90e2dd927c745/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.891705 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f37562ff-5ea3-4230-9c68-09d330bb64c8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f37562ff-5ea3-4230-9c68-09d330bb64c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.892326 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f37562ff-5ea3-4230-9c68-09d330bb64c8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f37562ff-5ea3-4230-9c68-09d330bb64c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.892365 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f37562ff-5ea3-4230-9c68-09d330bb64c8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f37562ff-5ea3-4230-9c68-09d330bb64c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.893420 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/f37562ff-5ea3-4230-9c68-09d330bb64c8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f37562ff-5ea3-4230-9c68-09d330bb64c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.900484 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-74xt8"] Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.901127 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-98ddfc8f-74xt8" podUID="0b99aab4-38cb-404e-8156-39b0491442cc" containerName="dnsmasq-dns" containerID="cri-o://74b699484ed0ab074a613e01dfb47e7f3ec80c5eaadeb3f25b8a6386b1ca1c9b" gracePeriod=10 Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.907018 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsm5g\" (UniqueName: \"kubernetes.io/projected/f37562ff-5ea3-4230-9c68-09d330bb64c8-kube-api-access-bsm5g\") pod \"rabbitmq-cell1-server-0\" (UID: \"f37562ff-5ea3-4230-9c68-09d330bb64c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:39:14 crc kubenswrapper[4789]: I0202 22:39:14.946729 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-01b9bde9-4c97-4e47-bd8b-0247431a5880\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-01b9bde9-4c97-4e47-bd8b-0247431a5880\") pod \"rabbitmq-cell1-server-0\" (UID: \"f37562ff-5ea3-4230-9c68-09d330bb64c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:39:15 crc kubenswrapper[4789]: I0202 22:39:15.000784 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:39:15 crc kubenswrapper[4789]: I0202 22:39:15.023450 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 22:39:15 crc kubenswrapper[4789]: I0202 22:39:15.345681 4789 generic.go:334] "Generic (PLEG): container finished" podID="0b99aab4-38cb-404e-8156-39b0491442cc" containerID="74b699484ed0ab074a613e01dfb47e7f3ec80c5eaadeb3f25b8a6386b1ca1c9b" exitCode=0 Feb 02 22:39:15 crc kubenswrapper[4789]: I0202 22:39:15.345732 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-74xt8" event={"ID":"0b99aab4-38cb-404e-8156-39b0491442cc","Type":"ContainerDied","Data":"74b699484ed0ab074a613e01dfb47e7f3ec80c5eaadeb3f25b8a6386b1ca1c9b"} Feb 02 22:39:15 crc kubenswrapper[4789]: I0202 22:39:15.346210 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-74xt8" event={"ID":"0b99aab4-38cb-404e-8156-39b0491442cc","Type":"ContainerDied","Data":"f37ee109c275c9ca2bd7a76fe38de3f2704eafa09c970f56fe7d5e789fd9fcec"} Feb 02 22:39:15 crc kubenswrapper[4789]: I0202 22:39:15.346225 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f37ee109c275c9ca2bd7a76fe38de3f2704eafa09c970f56fe7d5e789fd9fcec" Feb 02 22:39:15 crc kubenswrapper[4789]: I0202 22:39:15.347983 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6492b19f-f809-4667-b7d7-94ee3bfaa669","Type":"ContainerStarted","Data":"36b596dc8b7163a6bc001dbca88907b4fb8216413fd2b8bc926e99a30aa939e4"} Feb 02 22:39:15 crc kubenswrapper[4789]: I0202 22:39:15.375737 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-74xt8" Feb 02 22:39:15 crc kubenswrapper[4789]: I0202 22:39:15.507309 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jf8s9\" (UniqueName: \"kubernetes.io/projected/0b99aab4-38cb-404e-8156-39b0491442cc-kube-api-access-jf8s9\") pod \"0b99aab4-38cb-404e-8156-39b0491442cc\" (UID: \"0b99aab4-38cb-404e-8156-39b0491442cc\") " Feb 02 22:39:15 crc kubenswrapper[4789]: I0202 22:39:15.507458 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b99aab4-38cb-404e-8156-39b0491442cc-dns-svc\") pod \"0b99aab4-38cb-404e-8156-39b0491442cc\" (UID: \"0b99aab4-38cb-404e-8156-39b0491442cc\") " Feb 02 22:39:15 crc kubenswrapper[4789]: I0202 22:39:15.507612 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b99aab4-38cb-404e-8156-39b0491442cc-config\") pod \"0b99aab4-38cb-404e-8156-39b0491442cc\" (UID: \"0b99aab4-38cb-404e-8156-39b0491442cc\") " Feb 02 22:39:15 crc kubenswrapper[4789]: I0202 22:39:15.544781 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b99aab4-38cb-404e-8156-39b0491442cc-kube-api-access-jf8s9" (OuterVolumeSpecName: "kube-api-access-jf8s9") pod "0b99aab4-38cb-404e-8156-39b0491442cc" (UID: "0b99aab4-38cb-404e-8156-39b0491442cc"). InnerVolumeSpecName "kube-api-access-jf8s9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 22:39:15 crc kubenswrapper[4789]: I0202 22:39:15.560344 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b99aab4-38cb-404e-8156-39b0491442cc-config" (OuterVolumeSpecName: "config") pod "0b99aab4-38cb-404e-8156-39b0491442cc" (UID: "0b99aab4-38cb-404e-8156-39b0491442cc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 22:39:15 crc kubenswrapper[4789]: I0202 22:39:15.563695 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b99aab4-38cb-404e-8156-39b0491442cc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0b99aab4-38cb-404e-8156-39b0491442cc" (UID: "0b99aab4-38cb-404e-8156-39b0491442cc"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 22:39:15 crc kubenswrapper[4789]: I0202 22:39:15.587467 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 22:39:15 crc kubenswrapper[4789]: I0202 22:39:15.609030 4789 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b99aab4-38cb-404e-8156-39b0491442cc-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 22:39:15 crc kubenswrapper[4789]: I0202 22:39:15.609070 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b99aab4-38cb-404e-8156-39b0491442cc-config\") on node \"crc\" DevicePath \"\"" Feb 02 22:39:15 crc kubenswrapper[4789]: I0202 22:39:15.609082 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jf8s9\" (UniqueName: \"kubernetes.io/projected/0b99aab4-38cb-404e-8156-39b0491442cc-kube-api-access-jf8s9\") on node \"crc\" DevicePath \"\"" Feb 02 22:39:16 crc kubenswrapper[4789]: I0202 22:39:16.369460 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f37562ff-5ea3-4230-9c68-09d330bb64c8","Type":"ContainerStarted","Data":"e35dfc3069bb6ca370f6727bfb74e9eea3ce0410b5d5a7802ff1f759cf4dc9be"} Feb 02 22:39:16 crc kubenswrapper[4789]: I0202 22:39:16.373219 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-74xt8" Feb 02 22:39:16 crc kubenswrapper[4789]: I0202 22:39:16.373258 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6492b19f-f809-4667-b7d7-94ee3bfaa669","Type":"ContainerStarted","Data":"b630892fe9d3fcd96ad6a7be7971ae5865d366bcb06aee03729d8872215f72f4"} Feb 02 22:39:16 crc kubenswrapper[4789]: I0202 22:39:16.442156 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f79b555-f224-4e21-8650-5deed8215651" path="/var/lib/kubelet/pods/7f79b555-f224-4e21-8650-5deed8215651/volumes" Feb 02 22:39:16 crc kubenswrapper[4789]: I0202 22:39:16.468666 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-74xt8"] Feb 02 22:39:16 crc kubenswrapper[4789]: I0202 22:39:16.482902 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-74xt8"] Feb 02 22:39:17 crc kubenswrapper[4789]: I0202 22:39:17.387877 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f37562ff-5ea3-4230-9c68-09d330bb64c8","Type":"ContainerStarted","Data":"c59688936c925b4afea24ff97d063d06858ea039c00ece5ce5e0088a6be7f5f3"} Feb 02 22:39:18 crc kubenswrapper[4789]: I0202 22:39:18.433084 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b99aab4-38cb-404e-8156-39b0491442cc" path="/var/lib/kubelet/pods/0b99aab4-38cb-404e-8156-39b0491442cc/volumes" Feb 02 22:39:21 crc kubenswrapper[4789]: I0202 22:39:21.419422 4789 scope.go:117] "RemoveContainer" containerID="e1882e0fff6a0a98d0a3daff7d66a9d268ab6f6c60c38af019ec759180dddf7d" Feb 02 22:39:21 crc kubenswrapper[4789]: E0202 22:39:21.422805 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" 
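pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e"

The machine-config-daemon entries that repeat from here on show crash-loop backoff at its cap: after each crash the kubelet doubles the restart delay from a 10s initial value up to a five-minute maximum, and once pinned there every pod-worker sync is rejected with the "back-off 5m0s" error above until the window passes (the roughly 13-second cadence of these messages is the sync retry, not a restart). A toy sketch of the doubling, assuming the default 10s initial delay and 5m cap; the real kubelet tracks this per container with jitter and a reset window:

package main

import (
    "fmt"
    "time"
)

// Assumed values mirroring the kubelet defaults for container restarts:
// a 10s initial delay doubling up to a 5m ceiling.
const (
    initialBackoff = 10 * time.Second
    maxBackoff     = 5 * time.Minute
)

// nextBackoff returns the delay to apply after one more crash.
func nextBackoff(cur time.Duration) time.Duration {
    if cur == 0 {
        return initialBackoff
    }
    cur *= 2
    if cur > maxBackoff {
        cur = maxBackoff
    }
    return cur
}

func main() {
    var d time.Duration
    for i := 1; i <= 7; i++ {
        d = nextBackoff(d)
        fmt.Printf("crash %d: back-off %s\n", i, d)
    }
    // The delay pins at 5m0s after the sixth crash, matching the
    // CrashLoopBackOff messages in this log.
}
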
Feb 02 22:39:34 crc kubenswrapper[4789]: I0202 22:39:34.420630 4789 scope.go:117] "RemoveContainer" containerID="e1882e0fff6a0a98d0a3daff7d66a9d268ab6f6c60c38af019ec759180dddf7d"
Feb 02 22:39:34 crc kubenswrapper[4789]: E0202 22:39:34.423143 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e"
Feb 02 22:39:47 crc kubenswrapper[4789]: I0202 22:39:47.420120 4789 scope.go:117] "RemoveContainer" containerID="e1882e0fff6a0a98d0a3daff7d66a9d268ab6f6c60c38af019ec759180dddf7d"
Feb 02 22:39:47 crc kubenswrapper[4789]: E0202 22:39:47.421322 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e"
Feb 02 22:39:50 crc kubenswrapper[4789]: I0202 22:39:50.710810 4789 generic.go:334] "Generic (PLEG): container finished" podID="6492b19f-f809-4667-b7d7-94ee3bfaa669" containerID="b630892fe9d3fcd96ad6a7be7971ae5865d366bcb06aee03729d8872215f72f4" exitCode=0
Feb 02 22:39:50 crc kubenswrapper[4789]: I0202 22:39:50.710945 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6492b19f-f809-4667-b7d7-94ee3bfaa669","Type":"ContainerDied","Data":"b630892fe9d3fcd96ad6a7be7971ae5865d366bcb06aee03729d8872215f72f4"}
Feb 02 22:39:50 crc kubenswrapper[4789]: I0202 22:39:50.714387 4789 generic.go:334] "Generic (PLEG): container finished" podID="f37562ff-5ea3-4230-9c68-09d330bb64c8" containerID="c59688936c925b4afea24ff97d063d06858ea039c00ece5ce5e0088a6be7f5f3" exitCode=0
Feb 02 22:39:50 crc kubenswrapper[4789]: I0202 22:39:50.714435 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f37562ff-5ea3-4230-9c68-09d330bb64c8","Type":"ContainerDied","Data":"c59688936c925b4afea24ff97d063d06858ea039c00ece5ce5e0088a6be7f5f3"}
Feb 02 22:39:51 crc kubenswrapper[4789]: I0202 22:39:51.726334 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6492b19f-f809-4667-b7d7-94ee3bfaa669","Type":"ContainerStarted","Data":"cebd41259618077da4cc64537af2993eb481d4e581332a8e60fbcf2b1d663692"}
Feb 02 22:39:51 crc kubenswrapper[4789]: I0202 22:39:51.727319 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Feb 02 22:39:51 crc kubenswrapper[4789]: I0202 22:39:51.729073 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f37562ff-5ea3-4230-9c68-09d330bb64c8","Type":"ContainerStarted","Data":"545f445fd9edfa2e993fd9d5c161b11fcc0b030fb4e6b43cee5d108fefb5d2aa"}
Feb 02 22:39:51 crc kubenswrapper[4789]: I0202 22:39:51.729312 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Feb 02 22:39:51 crc 
kubenswrapper[4789]: I0202 22:39:51.765607 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.765556747 podStartE2EDuration="37.765556747s" podCreationTimestamp="2026-02-02 22:39:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 22:39:51.756183753 +0000 UTC m=+4812.051208792" watchObservedRunningTime="2026-02-02 22:39:51.765556747 +0000 UTC m=+4812.060581776" Feb 02 22:39:51 crc kubenswrapper[4789]: I0202 22:39:51.788087 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.788064612 podStartE2EDuration="37.788064612s" podCreationTimestamp="2026-02-02 22:39:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 22:39:51.782893747 +0000 UTC m=+4812.077918806" watchObservedRunningTime="2026-02-02 22:39:51.788064612 +0000 UTC m=+4812.083089641" Feb 02 22:39:59 crc kubenswrapper[4789]: I0202 22:39:59.420492 4789 scope.go:117] "RemoveContainer" containerID="e1882e0fff6a0a98d0a3daff7d66a9d268ab6f6c60c38af019ec759180dddf7d" Feb 02 22:39:59 crc kubenswrapper[4789]: E0202 22:39:59.423559 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:40:04 crc kubenswrapper[4789]: I0202 22:40:04.781771 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 02 22:40:05 crc kubenswrapper[4789]: I0202 22:40:05.004902 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 02 22:40:11 crc kubenswrapper[4789]: I0202 22:40:11.420491 4789 scope.go:117] "RemoveContainer" containerID="e1882e0fff6a0a98d0a3daff7d66a9d268ab6f6c60c38af019ec759180dddf7d" Feb 02 22:40:11 crc kubenswrapper[4789]: E0202 22:40:11.421211 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:40:13 crc kubenswrapper[4789]: I0202 22:40:13.141069 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Feb 02 22:40:13 crc kubenswrapper[4789]: E0202 22:40:13.141995 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b99aab4-38cb-404e-8156-39b0491442cc" containerName="dnsmasq-dns" Feb 02 22:40:13 crc kubenswrapper[4789]: I0202 22:40:13.142032 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b99aab4-38cb-404e-8156-39b0491442cc" containerName="dnsmasq-dns" Feb 02 22:40:13 crc kubenswrapper[4789]: E0202 22:40:13.142081 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b99aab4-38cb-404e-8156-39b0491442cc" containerName="init" Feb 02 22:40:13 crc 
kubenswrapper[4789]: I0202 22:40:13.142100 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b99aab4-38cb-404e-8156-39b0491442cc" containerName="init" Feb 02 22:40:13 crc kubenswrapper[4789]: I0202 22:40:13.142426 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b99aab4-38cb-404e-8156-39b0491442cc" containerName="dnsmasq-dns" Feb 02 22:40:13 crc kubenswrapper[4789]: I0202 22:40:13.150517 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Feb 02 22:40:13 crc kubenswrapper[4789]: I0202 22:40:13.160635 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Feb 02 22:40:13 crc kubenswrapper[4789]: I0202 22:40:13.163541 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-6vlcn" Feb 02 22:40:13 crc kubenswrapper[4789]: I0202 22:40:13.339287 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bxnj\" (UniqueName: \"kubernetes.io/projected/8a6396d5-2467-4f72-bdc3-d54fb5f740c7-kube-api-access-6bxnj\") pod \"mariadb-client\" (UID: \"8a6396d5-2467-4f72-bdc3-d54fb5f740c7\") " pod="openstack/mariadb-client" Feb 02 22:40:13 crc kubenswrapper[4789]: I0202 22:40:13.441480 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bxnj\" (UniqueName: \"kubernetes.io/projected/8a6396d5-2467-4f72-bdc3-d54fb5f740c7-kube-api-access-6bxnj\") pod \"mariadb-client\" (UID: \"8a6396d5-2467-4f72-bdc3-d54fb5f740c7\") " pod="openstack/mariadb-client" Feb 02 22:40:13 crc kubenswrapper[4789]: I0202 22:40:13.477357 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bxnj\" (UniqueName: \"kubernetes.io/projected/8a6396d5-2467-4f72-bdc3-d54fb5f740c7-kube-api-access-6bxnj\") pod \"mariadb-client\" (UID: \"8a6396d5-2467-4f72-bdc3-d54fb5f740c7\") " pod="openstack/mariadb-client" Feb 02 22:40:13 crc kubenswrapper[4789]: I0202 22:40:13.493450 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Feb 02 22:40:14 crc kubenswrapper[4789]: I0202 22:40:14.081530 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Feb 02 22:40:14 crc kubenswrapper[4789]: I0202 22:40:14.948047 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"8a6396d5-2467-4f72-bdc3-d54fb5f740c7","Type":"ContainerStarted","Data":"9e6cb40d5836d7db053cd1b138d0e9f028ede27f845803712c17ac735219c21f"} Feb 02 22:40:14 crc kubenswrapper[4789]: I0202 22:40:14.948366 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"8a6396d5-2467-4f72-bdc3-d54fb5f740c7","Type":"ContainerStarted","Data":"c3afdc1273c0d4a496c57adeba6b8ea26a4fe31a71a96644521530495e8de33d"} Feb 02 22:40:14 crc kubenswrapper[4789]: I0202 22:40:14.976439 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client" podStartSLOduration=1.9764117319999999 podStartE2EDuration="1.976411732s" podCreationTimestamp="2026-02-02 22:40:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 22:40:14.966750519 +0000 UTC m=+4835.261775578" watchObservedRunningTime="2026-02-02 22:40:14.976411732 +0000 UTC m=+4835.271436791" Feb 02 22:40:26 crc kubenswrapper[4789]: I0202 22:40:26.420546 4789 scope.go:117] "RemoveContainer" containerID="e1882e0fff6a0a98d0a3daff7d66a9d268ab6f6c60c38af019ec759180dddf7d" Feb 02 22:40:26 crc kubenswrapper[4789]: E0202 22:40:26.422070 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:40:30 crc kubenswrapper[4789]: I0202 22:40:30.582291 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Feb 02 22:40:30 crc kubenswrapper[4789]: I0202 22:40:30.583333 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-client" podUID="8a6396d5-2467-4f72-bdc3-d54fb5f740c7" containerName="mariadb-client" containerID="cri-o://9e6cb40d5836d7db053cd1b138d0e9f028ede27f845803712c17ac735219c21f" gracePeriod=30 Feb 02 22:40:31 crc kubenswrapper[4789]: I0202 22:40:31.097108 4789 generic.go:334] "Generic (PLEG): container finished" podID="8a6396d5-2467-4f72-bdc3-d54fb5f740c7" containerID="9e6cb40d5836d7db053cd1b138d0e9f028ede27f845803712c17ac735219c21f" exitCode=143 Feb 02 22:40:31 crc kubenswrapper[4789]: I0202 22:40:31.097161 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"8a6396d5-2467-4f72-bdc3-d54fb5f740c7","Type":"ContainerDied","Data":"9e6cb40d5836d7db053cd1b138d0e9f028ede27f845803712c17ac735219c21f"} Feb 02 22:40:31 crc kubenswrapper[4789]: I0202 22:40:31.185478 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Feb 02 22:40:31 crc kubenswrapper[4789]: I0202 22:40:31.254646 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bxnj\" (UniqueName: \"kubernetes.io/projected/8a6396d5-2467-4f72-bdc3-d54fb5f740c7-kube-api-access-6bxnj\") pod \"8a6396d5-2467-4f72-bdc3-d54fb5f740c7\" (UID: \"8a6396d5-2467-4f72-bdc3-d54fb5f740c7\") " Feb 02 22:40:31 crc kubenswrapper[4789]: I0202 22:40:31.259274 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a6396d5-2467-4f72-bdc3-d54fb5f740c7-kube-api-access-6bxnj" (OuterVolumeSpecName: "kube-api-access-6bxnj") pod "8a6396d5-2467-4f72-bdc3-d54fb5f740c7" (UID: "8a6396d5-2467-4f72-bdc3-d54fb5f740c7"). InnerVolumeSpecName "kube-api-access-6bxnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 22:40:31 crc kubenswrapper[4789]: I0202 22:40:31.356837 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bxnj\" (UniqueName: \"kubernetes.io/projected/8a6396d5-2467-4f72-bdc3-d54fb5f740c7-kube-api-access-6bxnj\") on node \"crc\" DevicePath \"\"" Feb 02 22:40:32 crc kubenswrapper[4789]: I0202 22:40:32.108010 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"8a6396d5-2467-4f72-bdc3-d54fb5f740c7","Type":"ContainerDied","Data":"c3afdc1273c0d4a496c57adeba6b8ea26a4fe31a71a96644521530495e8de33d"} Feb 02 22:40:32 crc kubenswrapper[4789]: I0202 22:40:32.108089 4789 scope.go:117] "RemoveContainer" containerID="9e6cb40d5836d7db053cd1b138d0e9f028ede27f845803712c17ac735219c21f" Feb 02 22:40:32 crc kubenswrapper[4789]: I0202 22:40:32.108117 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Feb 02 22:40:32 crc kubenswrapper[4789]: I0202 22:40:32.169226 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Feb 02 22:40:32 crc kubenswrapper[4789]: I0202 22:40:32.181407 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Feb 02 22:40:32 crc kubenswrapper[4789]: I0202 22:40:32.443989 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a6396d5-2467-4f72-bdc3-d54fb5f740c7" path="/var/lib/kubelet/pods/8a6396d5-2467-4f72-bdc3-d54fb5f740c7/volumes" Feb 02 22:40:38 crc kubenswrapper[4789]: I0202 22:40:38.420339 4789 scope.go:117] "RemoveContainer" containerID="e1882e0fff6a0a98d0a3daff7d66a9d268ab6f6c60c38af019ec759180dddf7d" Feb 02 22:40:38 crc kubenswrapper[4789]: E0202 22:40:38.421616 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:40:47 crc kubenswrapper[4789]: I0202 22:40:47.860419 4789 scope.go:117] "RemoveContainer" containerID="197151257b67b3a9df95a4f148a75a76f2992f304c900f1cfc646fe766a581bc" Feb 02 22:40:51 crc kubenswrapper[4789]: I0202 22:40:51.113351 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w6zdf"] Feb 02 22:40:51 crc kubenswrapper[4789]: E0202 22:40:51.114605 4789 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8a6396d5-2467-4f72-bdc3-d54fb5f740c7" containerName="mariadb-client" Feb 02 22:40:51 crc kubenswrapper[4789]: I0202 22:40:51.114628 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a6396d5-2467-4f72-bdc3-d54fb5f740c7" containerName="mariadb-client" Feb 02 22:40:51 crc kubenswrapper[4789]: I0202 22:40:51.114902 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a6396d5-2467-4f72-bdc3-d54fb5f740c7" containerName="mariadb-client" Feb 02 22:40:51 crc kubenswrapper[4789]: I0202 22:40:51.116695 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w6zdf" Feb 02 22:40:51 crc kubenswrapper[4789]: I0202 22:40:51.141816 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w6zdf"] Feb 02 22:40:51 crc kubenswrapper[4789]: I0202 22:40:51.227005 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwbfr\" (UniqueName: \"kubernetes.io/projected/5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9-kube-api-access-wwbfr\") pod \"certified-operators-w6zdf\" (UID: \"5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9\") " pod="openshift-marketplace/certified-operators-w6zdf" Feb 02 22:40:51 crc kubenswrapper[4789]: I0202 22:40:51.227101 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9-utilities\") pod \"certified-operators-w6zdf\" (UID: \"5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9\") " pod="openshift-marketplace/certified-operators-w6zdf" Feb 02 22:40:51 crc kubenswrapper[4789]: I0202 22:40:51.227193 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9-catalog-content\") pod \"certified-operators-w6zdf\" (UID: \"5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9\") " pod="openshift-marketplace/certified-operators-w6zdf" Feb 02 22:40:51 crc kubenswrapper[4789]: I0202 22:40:51.328388 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9-utilities\") pod \"certified-operators-w6zdf\" (UID: \"5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9\") " pod="openshift-marketplace/certified-operators-w6zdf" Feb 02 22:40:51 crc kubenswrapper[4789]: I0202 22:40:51.328444 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9-catalog-content\") pod \"certified-operators-w6zdf\" (UID: \"5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9\") " pod="openshift-marketplace/certified-operators-w6zdf" Feb 02 22:40:51 crc kubenswrapper[4789]: I0202 22:40:51.328493 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwbfr\" (UniqueName: \"kubernetes.io/projected/5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9-kube-api-access-wwbfr\") pod \"certified-operators-w6zdf\" (UID: \"5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9\") " pod="openshift-marketplace/certified-operators-w6zdf" Feb 02 22:40:51 crc kubenswrapper[4789]: I0202 22:40:51.329140 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9-utilities\") pod \"certified-operators-w6zdf\" (UID: 
\"5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9\") " pod="openshift-marketplace/certified-operators-w6zdf" Feb 02 22:40:51 crc kubenswrapper[4789]: I0202 22:40:51.329189 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9-catalog-content\") pod \"certified-operators-w6zdf\" (UID: \"5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9\") " pod="openshift-marketplace/certified-operators-w6zdf" Feb 02 22:40:51 crc kubenswrapper[4789]: I0202 22:40:51.352628 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwbfr\" (UniqueName: \"kubernetes.io/projected/5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9-kube-api-access-wwbfr\") pod \"certified-operators-w6zdf\" (UID: \"5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9\") " pod="openshift-marketplace/certified-operators-w6zdf" Feb 02 22:40:51 crc kubenswrapper[4789]: I0202 22:40:51.466221 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w6zdf" Feb 02 22:40:52 crc kubenswrapper[4789]: I0202 22:40:52.044634 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w6zdf"] Feb 02 22:40:52 crc kubenswrapper[4789]: I0202 22:40:52.407212 4789 generic.go:334] "Generic (PLEG): container finished" podID="5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9" containerID="da58bdffb0a4495187215aecda239bf21def19b211e3650ad68c1604dc3f0c08" exitCode=0 Feb 02 22:40:52 crc kubenswrapper[4789]: I0202 22:40:52.407286 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6zdf" event={"ID":"5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9","Type":"ContainerDied","Data":"da58bdffb0a4495187215aecda239bf21def19b211e3650ad68c1604dc3f0c08"} Feb 02 22:40:52 crc kubenswrapper[4789]: I0202 22:40:52.407754 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6zdf" event={"ID":"5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9","Type":"ContainerStarted","Data":"ca78077cd0b9468681a81f084cea170db8a62085d2818314fa90daea74697fbe"} Feb 02 22:40:52 crc kubenswrapper[4789]: I0202 22:40:52.411103 4789 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 22:40:53 crc kubenswrapper[4789]: I0202 22:40:53.415695 4789 generic.go:334] "Generic (PLEG): container finished" podID="5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9" containerID="d855af7a157f33ee38d157e05a61e18d2155054a05e70bf072ab31652f86470b" exitCode=0 Feb 02 22:40:53 crc kubenswrapper[4789]: I0202 22:40:53.415812 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6zdf" event={"ID":"5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9","Type":"ContainerDied","Data":"d855af7a157f33ee38d157e05a61e18d2155054a05e70bf072ab31652f86470b"} Feb 02 22:40:53 crc kubenswrapper[4789]: I0202 22:40:53.420430 4789 scope.go:117] "RemoveContainer" containerID="e1882e0fff6a0a98d0a3daff7d66a9d268ab6f6c60c38af019ec759180dddf7d" Feb 02 22:40:53 crc kubenswrapper[4789]: I0202 22:40:53.498280 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fhq27"] Feb 02 22:40:53 crc kubenswrapper[4789]: I0202 22:40:53.499708 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fhq27" Feb 02 22:40:53 crc kubenswrapper[4789]: I0202 22:40:53.515025 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fhq27"] Feb 02 22:40:53 crc kubenswrapper[4789]: I0202 22:40:53.670261 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/982d42c5-33f0-420d-adb9-cc5fb289ba87-catalog-content\") pod \"redhat-marketplace-fhq27\" (UID: \"982d42c5-33f0-420d-adb9-cc5fb289ba87\") " pod="openshift-marketplace/redhat-marketplace-fhq27" Feb 02 22:40:53 crc kubenswrapper[4789]: I0202 22:40:53.670305 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzhw7\" (UniqueName: \"kubernetes.io/projected/982d42c5-33f0-420d-adb9-cc5fb289ba87-kube-api-access-kzhw7\") pod \"redhat-marketplace-fhq27\" (UID: \"982d42c5-33f0-420d-adb9-cc5fb289ba87\") " pod="openshift-marketplace/redhat-marketplace-fhq27" Feb 02 22:40:53 crc kubenswrapper[4789]: I0202 22:40:53.670431 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/982d42c5-33f0-420d-adb9-cc5fb289ba87-utilities\") pod \"redhat-marketplace-fhq27\" (UID: \"982d42c5-33f0-420d-adb9-cc5fb289ba87\") " pod="openshift-marketplace/redhat-marketplace-fhq27" Feb 02 22:40:53 crc kubenswrapper[4789]: I0202 22:40:53.771300 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/982d42c5-33f0-420d-adb9-cc5fb289ba87-catalog-content\") pod \"redhat-marketplace-fhq27\" (UID: \"982d42c5-33f0-420d-adb9-cc5fb289ba87\") " pod="openshift-marketplace/redhat-marketplace-fhq27" Feb 02 22:40:53 crc kubenswrapper[4789]: I0202 22:40:53.771344 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzhw7\" (UniqueName: \"kubernetes.io/projected/982d42c5-33f0-420d-adb9-cc5fb289ba87-kube-api-access-kzhw7\") pod \"redhat-marketplace-fhq27\" (UID: \"982d42c5-33f0-420d-adb9-cc5fb289ba87\") " pod="openshift-marketplace/redhat-marketplace-fhq27" Feb 02 22:40:53 crc kubenswrapper[4789]: I0202 22:40:53.771409 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/982d42c5-33f0-420d-adb9-cc5fb289ba87-utilities\") pod \"redhat-marketplace-fhq27\" (UID: \"982d42c5-33f0-420d-adb9-cc5fb289ba87\") " pod="openshift-marketplace/redhat-marketplace-fhq27" Feb 02 22:40:53 crc kubenswrapper[4789]: I0202 22:40:53.772237 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/982d42c5-33f0-420d-adb9-cc5fb289ba87-utilities\") pod \"redhat-marketplace-fhq27\" (UID: \"982d42c5-33f0-420d-adb9-cc5fb289ba87\") " pod="openshift-marketplace/redhat-marketplace-fhq27" Feb 02 22:40:53 crc kubenswrapper[4789]: I0202 22:40:53.772308 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/982d42c5-33f0-420d-adb9-cc5fb289ba87-catalog-content\") pod \"redhat-marketplace-fhq27\" (UID: \"982d42c5-33f0-420d-adb9-cc5fb289ba87\") " pod="openshift-marketplace/redhat-marketplace-fhq27" Feb 02 22:40:53 crc kubenswrapper[4789]: I0202 22:40:53.794614 4789 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-kzhw7\" (UniqueName: \"kubernetes.io/projected/982d42c5-33f0-420d-adb9-cc5fb289ba87-kube-api-access-kzhw7\") pod \"redhat-marketplace-fhq27\" (UID: \"982d42c5-33f0-420d-adb9-cc5fb289ba87\") " pod="openshift-marketplace/redhat-marketplace-fhq27" Feb 02 22:40:53 crc kubenswrapper[4789]: I0202 22:40:53.836117 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fhq27" Feb 02 22:40:54 crc kubenswrapper[4789]: I0202 22:40:54.066794 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fhq27"] Feb 02 22:40:54 crc kubenswrapper[4789]: I0202 22:40:54.430202 4789 generic.go:334] "Generic (PLEG): container finished" podID="982d42c5-33f0-420d-adb9-cc5fb289ba87" containerID="6cfb8b1e3b543561310ad23075b8ab93eb60fbae152e0bf1a0b8df40336aae00" exitCode=0 Feb 02 22:40:54 crc kubenswrapper[4789]: I0202 22:40:54.444635 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fhq27" event={"ID":"982d42c5-33f0-420d-adb9-cc5fb289ba87","Type":"ContainerDied","Data":"6cfb8b1e3b543561310ad23075b8ab93eb60fbae152e0bf1a0b8df40336aae00"} Feb 02 22:40:54 crc kubenswrapper[4789]: I0202 22:40:54.444688 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fhq27" event={"ID":"982d42c5-33f0-420d-adb9-cc5fb289ba87","Type":"ContainerStarted","Data":"b5f4d54cb9311e750b71b19b17fe58fc9acfdfb7902101ed1da579e55b894213"} Feb 02 22:40:54 crc kubenswrapper[4789]: I0202 22:40:54.446047 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" event={"ID":"bdf018b4-1451-4d37-be6e-05802b67c73e","Type":"ContainerStarted","Data":"7ba14292c23dc4f23f84b7674c42768a164c25b022f823f0ebbe7a54004bb378"} Feb 02 22:40:54 crc kubenswrapper[4789]: I0202 22:40:54.453249 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6zdf" event={"ID":"5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9","Type":"ContainerStarted","Data":"2918d1ab0bc55e1ed0d3fa0827dcc16089c63e8744b4601074b733452a77e960"} Feb 02 22:40:54 crc kubenswrapper[4789]: I0202 22:40:54.500517 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w6zdf" podStartSLOduration=2.093616382 podStartE2EDuration="3.500494952s" podCreationTimestamp="2026-02-02 22:40:51 +0000 UTC" firstStartedPulling="2026-02-02 22:40:52.41034898 +0000 UTC m=+4872.705374039" lastFinishedPulling="2026-02-02 22:40:53.81722759 +0000 UTC m=+4874.112252609" observedRunningTime="2026-02-02 22:40:54.497772025 +0000 UTC m=+4874.792797114" watchObservedRunningTime="2026-02-02 22:40:54.500494952 +0000 UTC m=+4874.795520001" Feb 02 22:40:55 crc kubenswrapper[4789]: I0202 22:40:55.463954 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fhq27" event={"ID":"982d42c5-33f0-420d-adb9-cc5fb289ba87","Type":"ContainerStarted","Data":"21e92baad048e80491571b167dbcc476481d7d9985d78b67e24ad500a631bfaf"} Feb 02 22:40:56 crc kubenswrapper[4789]: I0202 22:40:56.492825 4789 generic.go:334] "Generic (PLEG): container finished" podID="982d42c5-33f0-420d-adb9-cc5fb289ba87" containerID="21e92baad048e80491571b167dbcc476481d7d9985d78b67e24ad500a631bfaf" exitCode=0 Feb 02 22:40:56 crc kubenswrapper[4789]: I0202 22:40:56.492900 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-fhq27" event={"ID":"982d42c5-33f0-420d-adb9-cc5fb289ba87","Type":"ContainerDied","Data":"21e92baad048e80491571b167dbcc476481d7d9985d78b67e24ad500a631bfaf"} Feb 02 22:40:57 crc kubenswrapper[4789]: I0202 22:40:57.506692 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fhq27" event={"ID":"982d42c5-33f0-420d-adb9-cc5fb289ba87","Type":"ContainerStarted","Data":"f63deab2bcfd7b5515274f919b91a7256658dd24ce404ccf34bfc655228750de"} Feb 02 22:40:57 crc kubenswrapper[4789]: I0202 22:40:57.535088 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fhq27" podStartSLOduration=2.009837946 podStartE2EDuration="4.535064565s" podCreationTimestamp="2026-02-02 22:40:53 +0000 UTC" firstStartedPulling="2026-02-02 22:40:54.44087971 +0000 UTC m=+4874.735904769" lastFinishedPulling="2026-02-02 22:40:56.966106329 +0000 UTC m=+4877.261131388" observedRunningTime="2026-02-02 22:40:57.53310814 +0000 UTC m=+4877.828133219" watchObservedRunningTime="2026-02-02 22:40:57.535064565 +0000 UTC m=+4877.830089624" Feb 02 22:41:01 crc kubenswrapper[4789]: I0202 22:41:01.466285 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w6zdf" Feb 02 22:41:01 crc kubenswrapper[4789]: I0202 22:41:01.466946 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w6zdf" Feb 02 22:41:01 crc kubenswrapper[4789]: I0202 22:41:01.548635 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w6zdf" Feb 02 22:41:01 crc kubenswrapper[4789]: I0202 22:41:01.624472 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w6zdf" Feb 02 22:41:01 crc kubenswrapper[4789]: I0202 22:41:01.799489 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w6zdf"] Feb 02 22:41:03 crc kubenswrapper[4789]: I0202 22:41:03.574318 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w6zdf" podUID="5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9" containerName="registry-server" containerID="cri-o://2918d1ab0bc55e1ed0d3fa0827dcc16089c63e8744b4601074b733452a77e960" gracePeriod=2 Feb 02 22:41:03 crc kubenswrapper[4789]: I0202 22:41:03.836856 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fhq27" Feb 02 22:41:03 crc kubenswrapper[4789]: I0202 22:41:03.836904 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fhq27" Feb 02 22:41:03 crc kubenswrapper[4789]: I0202 22:41:03.910726 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fhq27" Feb 02 22:41:04 crc kubenswrapper[4789]: I0202 22:41:04.150038 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w6zdf" Feb 02 22:41:04 crc kubenswrapper[4789]: I0202 22:41:04.248256 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9-utilities\") pod \"5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9\" (UID: \"5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9\") " Feb 02 22:41:04 crc kubenswrapper[4789]: I0202 22:41:04.248517 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9-catalog-content\") pod \"5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9\" (UID: \"5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9\") " Feb 02 22:41:04 crc kubenswrapper[4789]: I0202 22:41:04.248668 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwbfr\" (UniqueName: \"kubernetes.io/projected/5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9-kube-api-access-wwbfr\") pod \"5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9\" (UID: \"5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9\") " Feb 02 22:41:04 crc kubenswrapper[4789]: I0202 22:41:04.249130 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9-utilities" (OuterVolumeSpecName: "utilities") pod "5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9" (UID: "5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 22:41:04 crc kubenswrapper[4789]: I0202 22:41:04.257178 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9-kube-api-access-wwbfr" (OuterVolumeSpecName: "kube-api-access-wwbfr") pod "5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9" (UID: "5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9"). InnerVolumeSpecName "kube-api-access-wwbfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 22:41:04 crc kubenswrapper[4789]: I0202 22:41:04.319092 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9" (UID: "5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 22:41:04 crc kubenswrapper[4789]: I0202 22:41:04.350553 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 22:41:04 crc kubenswrapper[4789]: I0202 22:41:04.350973 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 22:41:04 crc kubenswrapper[4789]: I0202 22:41:04.351061 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwbfr\" (UniqueName: \"kubernetes.io/projected/5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9-kube-api-access-wwbfr\") on node \"crc\" DevicePath \"\"" Feb 02 22:41:04 crc kubenswrapper[4789]: I0202 22:41:04.589033 4789 generic.go:334] "Generic (PLEG): container finished" podID="5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9" containerID="2918d1ab0bc55e1ed0d3fa0827dcc16089c63e8744b4601074b733452a77e960" exitCode=0 Feb 02 22:41:04 crc kubenswrapper[4789]: I0202 22:41:04.589753 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6zdf" event={"ID":"5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9","Type":"ContainerDied","Data":"2918d1ab0bc55e1ed0d3fa0827dcc16089c63e8744b4601074b733452a77e960"} Feb 02 22:41:04 crc kubenswrapper[4789]: I0202 22:41:04.589819 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6zdf" event={"ID":"5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9","Type":"ContainerDied","Data":"ca78077cd0b9468681a81f084cea170db8a62085d2818314fa90daea74697fbe"} Feb 02 22:41:04 crc kubenswrapper[4789]: I0202 22:41:04.589825 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w6zdf" Feb 02 22:41:04 crc kubenswrapper[4789]: I0202 22:41:04.589847 4789 scope.go:117] "RemoveContainer" containerID="2918d1ab0bc55e1ed0d3fa0827dcc16089c63e8744b4601074b733452a77e960" Feb 02 22:41:04 crc kubenswrapper[4789]: I0202 22:41:04.629201 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w6zdf"] Feb 02 22:41:04 crc kubenswrapper[4789]: I0202 22:41:04.632837 4789 scope.go:117] "RemoveContainer" containerID="d855af7a157f33ee38d157e05a61e18d2155054a05e70bf072ab31652f86470b" Feb 02 22:41:04 crc kubenswrapper[4789]: I0202 22:41:04.639779 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w6zdf"] Feb 02 22:41:04 crc kubenswrapper[4789]: I0202 22:41:04.671388 4789 scope.go:117] "RemoveContainer" containerID="da58bdffb0a4495187215aecda239bf21def19b211e3650ad68c1604dc3f0c08" Feb 02 22:41:04 crc kubenswrapper[4789]: I0202 22:41:04.686337 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fhq27" Feb 02 22:41:04 crc kubenswrapper[4789]: I0202 22:41:04.701014 4789 scope.go:117] "RemoveContainer" containerID="2918d1ab0bc55e1ed0d3fa0827dcc16089c63e8744b4601074b733452a77e960" Feb 02 22:41:04 crc kubenswrapper[4789]: E0202 22:41:04.701580 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2918d1ab0bc55e1ed0d3fa0827dcc16089c63e8744b4601074b733452a77e960\": container with ID starting with 2918d1ab0bc55e1ed0d3fa0827dcc16089c63e8744b4601074b733452a77e960 not found: ID does not exist" containerID="2918d1ab0bc55e1ed0d3fa0827dcc16089c63e8744b4601074b733452a77e960" Feb 02 22:41:04 crc kubenswrapper[4789]: I0202 22:41:04.701641 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2918d1ab0bc55e1ed0d3fa0827dcc16089c63e8744b4601074b733452a77e960"} err="failed to get container status \"2918d1ab0bc55e1ed0d3fa0827dcc16089c63e8744b4601074b733452a77e960\": rpc error: code = NotFound desc = could not find container \"2918d1ab0bc55e1ed0d3fa0827dcc16089c63e8744b4601074b733452a77e960\": container with ID starting with 2918d1ab0bc55e1ed0d3fa0827dcc16089c63e8744b4601074b733452a77e960 not found: ID does not exist" Feb 02 22:41:04 crc kubenswrapper[4789]: I0202 22:41:04.701670 4789 scope.go:117] "RemoveContainer" containerID="d855af7a157f33ee38d157e05a61e18d2155054a05e70bf072ab31652f86470b" Feb 02 22:41:04 crc kubenswrapper[4789]: E0202 22:41:04.701942 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d855af7a157f33ee38d157e05a61e18d2155054a05e70bf072ab31652f86470b\": container with ID starting with d855af7a157f33ee38d157e05a61e18d2155054a05e70bf072ab31652f86470b not found: ID does not exist" containerID="d855af7a157f33ee38d157e05a61e18d2155054a05e70bf072ab31652f86470b" Feb 02 22:41:04 crc kubenswrapper[4789]: I0202 22:41:04.701977 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d855af7a157f33ee38d157e05a61e18d2155054a05e70bf072ab31652f86470b"} err="failed to get container status \"d855af7a157f33ee38d157e05a61e18d2155054a05e70bf072ab31652f86470b\": rpc error: code = NotFound desc = could not find container \"d855af7a157f33ee38d157e05a61e18d2155054a05e70bf072ab31652f86470b\": container with ID starting with 
d855af7a157f33ee38d157e05a61e18d2155054a05e70bf072ab31652f86470b not found: ID does not exist" Feb 02 22:41:04 crc kubenswrapper[4789]: I0202 22:41:04.702002 4789 scope.go:117] "RemoveContainer" containerID="da58bdffb0a4495187215aecda239bf21def19b211e3650ad68c1604dc3f0c08" Feb 02 22:41:04 crc kubenswrapper[4789]: E0202 22:41:04.702295 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da58bdffb0a4495187215aecda239bf21def19b211e3650ad68c1604dc3f0c08\": container with ID starting with da58bdffb0a4495187215aecda239bf21def19b211e3650ad68c1604dc3f0c08 not found: ID does not exist" containerID="da58bdffb0a4495187215aecda239bf21def19b211e3650ad68c1604dc3f0c08" Feb 02 22:41:04 crc kubenswrapper[4789]: I0202 22:41:04.702332 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da58bdffb0a4495187215aecda239bf21def19b211e3650ad68c1604dc3f0c08"} err="failed to get container status \"da58bdffb0a4495187215aecda239bf21def19b211e3650ad68c1604dc3f0c08\": rpc error: code = NotFound desc = could not find container \"da58bdffb0a4495187215aecda239bf21def19b211e3650ad68c1604dc3f0c08\": container with ID starting with da58bdffb0a4495187215aecda239bf21def19b211e3650ad68c1604dc3f0c08 not found: ID does not exist" Feb 02 22:41:06 crc kubenswrapper[4789]: I0202 22:41:06.438122 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9" path="/var/lib/kubelet/pods/5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9/volumes" Feb 02 22:41:08 crc kubenswrapper[4789]: I0202 22:41:08.393838 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fhq27"] Feb 02 22:41:08 crc kubenswrapper[4789]: I0202 22:41:08.394389 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fhq27" podUID="982d42c5-33f0-420d-adb9-cc5fb289ba87" containerName="registry-server" containerID="cri-o://f63deab2bcfd7b5515274f919b91a7256658dd24ce404ccf34bfc655228750de" gracePeriod=2 Feb 02 22:41:08 crc kubenswrapper[4789]: I0202 22:41:08.632854 4789 generic.go:334] "Generic (PLEG): container finished" podID="982d42c5-33f0-420d-adb9-cc5fb289ba87" containerID="f63deab2bcfd7b5515274f919b91a7256658dd24ce404ccf34bfc655228750de" exitCode=0 Feb 02 22:41:08 crc kubenswrapper[4789]: I0202 22:41:08.632906 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fhq27" event={"ID":"982d42c5-33f0-420d-adb9-cc5fb289ba87","Type":"ContainerDied","Data":"f63deab2bcfd7b5515274f919b91a7256658dd24ce404ccf34bfc655228750de"} Feb 02 22:41:08 crc kubenswrapper[4789]: I0202 22:41:08.825059 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fhq27" Feb 02 22:41:08 crc kubenswrapper[4789]: I0202 22:41:08.927987 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/982d42c5-33f0-420d-adb9-cc5fb289ba87-catalog-content\") pod \"982d42c5-33f0-420d-adb9-cc5fb289ba87\" (UID: \"982d42c5-33f0-420d-adb9-cc5fb289ba87\") " Feb 02 22:41:08 crc kubenswrapper[4789]: I0202 22:41:08.928086 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzhw7\" (UniqueName: \"kubernetes.io/projected/982d42c5-33f0-420d-adb9-cc5fb289ba87-kube-api-access-kzhw7\") pod \"982d42c5-33f0-420d-adb9-cc5fb289ba87\" (UID: \"982d42c5-33f0-420d-adb9-cc5fb289ba87\") " Feb 02 22:41:08 crc kubenswrapper[4789]: I0202 22:41:08.928260 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/982d42c5-33f0-420d-adb9-cc5fb289ba87-utilities\") pod \"982d42c5-33f0-420d-adb9-cc5fb289ba87\" (UID: \"982d42c5-33f0-420d-adb9-cc5fb289ba87\") " Feb 02 22:41:08 crc kubenswrapper[4789]: I0202 22:41:08.929293 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/982d42c5-33f0-420d-adb9-cc5fb289ba87-utilities" (OuterVolumeSpecName: "utilities") pod "982d42c5-33f0-420d-adb9-cc5fb289ba87" (UID: "982d42c5-33f0-420d-adb9-cc5fb289ba87"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 22:41:08 crc kubenswrapper[4789]: I0202 22:41:08.936970 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/982d42c5-33f0-420d-adb9-cc5fb289ba87-kube-api-access-kzhw7" (OuterVolumeSpecName: "kube-api-access-kzhw7") pod "982d42c5-33f0-420d-adb9-cc5fb289ba87" (UID: "982d42c5-33f0-420d-adb9-cc5fb289ba87"). InnerVolumeSpecName "kube-api-access-kzhw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 22:41:08 crc kubenswrapper[4789]: I0202 22:41:08.956214 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/982d42c5-33f0-420d-adb9-cc5fb289ba87-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "982d42c5-33f0-420d-adb9-cc5fb289ba87" (UID: "982d42c5-33f0-420d-adb9-cc5fb289ba87"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 22:41:09 crc kubenswrapper[4789]: I0202 22:41:09.030203 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/982d42c5-33f0-420d-adb9-cc5fb289ba87-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 22:41:09 crc kubenswrapper[4789]: I0202 22:41:09.030269 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzhw7\" (UniqueName: \"kubernetes.io/projected/982d42c5-33f0-420d-adb9-cc5fb289ba87-kube-api-access-kzhw7\") on node \"crc\" DevicePath \"\"" Feb 02 22:41:09 crc kubenswrapper[4789]: I0202 22:41:09.030289 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/982d42c5-33f0-420d-adb9-cc5fb289ba87-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 22:41:09 crc kubenswrapper[4789]: I0202 22:41:09.646224 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fhq27" event={"ID":"982d42c5-33f0-420d-adb9-cc5fb289ba87","Type":"ContainerDied","Data":"b5f4d54cb9311e750b71b19b17fe58fc9acfdfb7902101ed1da579e55b894213"} Feb 02 22:41:09 crc kubenswrapper[4789]: I0202 22:41:09.646263 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fhq27" Feb 02 22:41:09 crc kubenswrapper[4789]: I0202 22:41:09.646314 4789 scope.go:117] "RemoveContainer" containerID="f63deab2bcfd7b5515274f919b91a7256658dd24ce404ccf34bfc655228750de" Feb 02 22:41:09 crc kubenswrapper[4789]: I0202 22:41:09.677459 4789 scope.go:117] "RemoveContainer" containerID="21e92baad048e80491571b167dbcc476481d7d9985d78b67e24ad500a631bfaf" Feb 02 22:41:09 crc kubenswrapper[4789]: I0202 22:41:09.686224 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fhq27"] Feb 02 22:41:09 crc kubenswrapper[4789]: I0202 22:41:09.690662 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fhq27"] Feb 02 22:41:09 crc kubenswrapper[4789]: I0202 22:41:09.703146 4789 scope.go:117] "RemoveContainer" containerID="6cfb8b1e3b543561310ad23075b8ab93eb60fbae152e0bf1a0b8df40336aae00" Feb 02 22:41:10 crc kubenswrapper[4789]: I0202 22:41:10.434787 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="982d42c5-33f0-420d-adb9-cc5fb289ba87" path="/var/lib/kubelet/pods/982d42c5-33f0-420d-adb9-cc5fb289ba87/volumes" Feb 02 22:43:22 crc kubenswrapper[4789]: I0202 22:43:22.841723 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 22:43:22 crc kubenswrapper[4789]: I0202 22:43:22.842686 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 22:43:42 crc kubenswrapper[4789]: I0202 22:43:42.937496 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fjvt7"] Feb 02 22:43:42 crc kubenswrapper[4789]: E0202 22:43:42.938900 4789 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9" containerName="extract-utilities" Feb 02 22:43:42 crc kubenswrapper[4789]: I0202 22:43:42.938935 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9" containerName="extract-utilities" Feb 02 22:43:42 crc kubenswrapper[4789]: E0202 22:43:42.938964 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9" containerName="extract-content" Feb 02 22:43:42 crc kubenswrapper[4789]: I0202 22:43:42.938980 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9" containerName="extract-content" Feb 02 22:43:42 crc kubenswrapper[4789]: E0202 22:43:42.939012 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9" containerName="registry-server" Feb 02 22:43:42 crc kubenswrapper[4789]: I0202 22:43:42.939030 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9" containerName="registry-server" Feb 02 22:43:42 crc kubenswrapper[4789]: E0202 22:43:42.939059 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="982d42c5-33f0-420d-adb9-cc5fb289ba87" containerName="extract-content" Feb 02 22:43:42 crc kubenswrapper[4789]: I0202 22:43:42.939075 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="982d42c5-33f0-420d-adb9-cc5fb289ba87" containerName="extract-content" Feb 02 22:43:42 crc kubenswrapper[4789]: E0202 22:43:42.939108 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="982d42c5-33f0-420d-adb9-cc5fb289ba87" containerName="extract-utilities" Feb 02 22:43:42 crc kubenswrapper[4789]: I0202 22:43:42.939124 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="982d42c5-33f0-420d-adb9-cc5fb289ba87" containerName="extract-utilities" Feb 02 22:43:42 crc kubenswrapper[4789]: E0202 22:43:42.939165 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="982d42c5-33f0-420d-adb9-cc5fb289ba87" containerName="registry-server" Feb 02 22:43:42 crc kubenswrapper[4789]: I0202 22:43:42.939180 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="982d42c5-33f0-420d-adb9-cc5fb289ba87" containerName="registry-server" Feb 02 22:43:42 crc kubenswrapper[4789]: I0202 22:43:42.939497 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="982d42c5-33f0-420d-adb9-cc5fb289ba87" containerName="registry-server" Feb 02 22:43:42 crc kubenswrapper[4789]: I0202 22:43:42.939533 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a9e321d-f7f4-4abd-bf4d-a79162d2fbd9" containerName="registry-server" Feb 02 22:43:42 crc kubenswrapper[4789]: I0202 22:43:42.941880 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fjvt7" Feb 02 22:43:42 crc kubenswrapper[4789]: I0202 22:43:42.964032 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fjvt7"] Feb 02 22:43:43 crc kubenswrapper[4789]: I0202 22:43:43.056905 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbnpg\" (UniqueName: \"kubernetes.io/projected/ac45e938-e440-4cd6-bee4-d371c4a799dc-kube-api-access-qbnpg\") pod \"community-operators-fjvt7\" (UID: \"ac45e938-e440-4cd6-bee4-d371c4a799dc\") " pod="openshift-marketplace/community-operators-fjvt7" Feb 02 22:43:43 crc kubenswrapper[4789]: I0202 22:43:43.056983 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac45e938-e440-4cd6-bee4-d371c4a799dc-catalog-content\") pod \"community-operators-fjvt7\" (UID: \"ac45e938-e440-4cd6-bee4-d371c4a799dc\") " pod="openshift-marketplace/community-operators-fjvt7" Feb 02 22:43:43 crc kubenswrapper[4789]: I0202 22:43:43.057066 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac45e938-e440-4cd6-bee4-d371c4a799dc-utilities\") pod \"community-operators-fjvt7\" (UID: \"ac45e938-e440-4cd6-bee4-d371c4a799dc\") " pod="openshift-marketplace/community-operators-fjvt7" Feb 02 22:43:43 crc kubenswrapper[4789]: I0202 22:43:43.158746 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbnpg\" (UniqueName: \"kubernetes.io/projected/ac45e938-e440-4cd6-bee4-d371c4a799dc-kube-api-access-qbnpg\") pod \"community-operators-fjvt7\" (UID: \"ac45e938-e440-4cd6-bee4-d371c4a799dc\") " pod="openshift-marketplace/community-operators-fjvt7" Feb 02 22:43:43 crc kubenswrapper[4789]: I0202 22:43:43.158806 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac45e938-e440-4cd6-bee4-d371c4a799dc-catalog-content\") pod \"community-operators-fjvt7\" (UID: \"ac45e938-e440-4cd6-bee4-d371c4a799dc\") " pod="openshift-marketplace/community-operators-fjvt7" Feb 02 22:43:43 crc kubenswrapper[4789]: I0202 22:43:43.158864 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac45e938-e440-4cd6-bee4-d371c4a799dc-utilities\") pod \"community-operators-fjvt7\" (UID: \"ac45e938-e440-4cd6-bee4-d371c4a799dc\") " pod="openshift-marketplace/community-operators-fjvt7" Feb 02 22:43:43 crc kubenswrapper[4789]: I0202 22:43:43.159361 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac45e938-e440-4cd6-bee4-d371c4a799dc-utilities\") pod \"community-operators-fjvt7\" (UID: \"ac45e938-e440-4cd6-bee4-d371c4a799dc\") " pod="openshift-marketplace/community-operators-fjvt7" Feb 02 22:43:43 crc kubenswrapper[4789]: I0202 22:43:43.159843 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac45e938-e440-4cd6-bee4-d371c4a799dc-catalog-content\") pod \"community-operators-fjvt7\" (UID: \"ac45e938-e440-4cd6-bee4-d371c4a799dc\") " pod="openshift-marketplace/community-operators-fjvt7" Feb 02 22:43:43 crc kubenswrapper[4789]: I0202 22:43:43.184495 4789 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qbnpg\" (UniqueName: \"kubernetes.io/projected/ac45e938-e440-4cd6-bee4-d371c4a799dc-kube-api-access-qbnpg\") pod \"community-operators-fjvt7\" (UID: \"ac45e938-e440-4cd6-bee4-d371c4a799dc\") " pod="openshift-marketplace/community-operators-fjvt7" Feb 02 22:43:43 crc kubenswrapper[4789]: I0202 22:43:43.271378 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fjvt7" Feb 02 22:43:43 crc kubenswrapper[4789]: I0202 22:43:43.715286 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fjvt7"] Feb 02 22:43:44 crc kubenswrapper[4789]: I0202 22:43:44.132639 4789 generic.go:334] "Generic (PLEG): container finished" podID="ac45e938-e440-4cd6-bee4-d371c4a799dc" containerID="f331c46fb49f3ff489c0c213d29db57e71c46627861e9e73f1cabc44ccb1b700" exitCode=0 Feb 02 22:43:44 crc kubenswrapper[4789]: I0202 22:43:44.132946 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fjvt7" event={"ID":"ac45e938-e440-4cd6-bee4-d371c4a799dc","Type":"ContainerDied","Data":"f331c46fb49f3ff489c0c213d29db57e71c46627861e9e73f1cabc44ccb1b700"} Feb 02 22:43:44 crc kubenswrapper[4789]: I0202 22:43:44.132987 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fjvt7" event={"ID":"ac45e938-e440-4cd6-bee4-d371c4a799dc","Type":"ContainerStarted","Data":"6be13e248fb554bd1aca54616b130301dc7b98db375d3110c9fbbba13fef3e8d"} Feb 02 22:43:45 crc kubenswrapper[4789]: I0202 22:43:45.143277 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fjvt7" event={"ID":"ac45e938-e440-4cd6-bee4-d371c4a799dc","Type":"ContainerStarted","Data":"a72c562a558fa2bfd91b3a69808459c6f223d3edb83b2643de31182be4a844d4"} Feb 02 22:43:46 crc kubenswrapper[4789]: I0202 22:43:46.155419 4789 generic.go:334] "Generic (PLEG): container finished" podID="ac45e938-e440-4cd6-bee4-d371c4a799dc" containerID="a72c562a558fa2bfd91b3a69808459c6f223d3edb83b2643de31182be4a844d4" exitCode=0 Feb 02 22:43:46 crc kubenswrapper[4789]: I0202 22:43:46.155485 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fjvt7" event={"ID":"ac45e938-e440-4cd6-bee4-d371c4a799dc","Type":"ContainerDied","Data":"a72c562a558fa2bfd91b3a69808459c6f223d3edb83b2643de31182be4a844d4"} Feb 02 22:43:47 crc kubenswrapper[4789]: I0202 22:43:47.166994 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fjvt7" event={"ID":"ac45e938-e440-4cd6-bee4-d371c4a799dc","Type":"ContainerStarted","Data":"1b9698e0cb890d80097551ecf1bed1905075eca669687cc6dcfd138233cc0af6"} Feb 02 22:43:47 crc kubenswrapper[4789]: I0202 22:43:47.200313 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fjvt7" podStartSLOduration=2.514952295 podStartE2EDuration="5.200285244s" podCreationTimestamp="2026-02-02 22:43:42 +0000 UTC" firstStartedPulling="2026-02-02 22:43:44.135122736 +0000 UTC m=+5044.430147785" lastFinishedPulling="2026-02-02 22:43:46.820455705 +0000 UTC m=+5047.115480734" observedRunningTime="2026-02-02 22:43:47.198083701 +0000 UTC m=+5047.493108740" watchObservedRunningTime="2026-02-02 22:43:47.200285244 +0000 UTC m=+5047.495310283" Feb 02 22:43:52 crc kubenswrapper[4789]: I0202 22:43:52.842377 4789 patch_prober.go:28] interesting 
pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 22:43:52 crc kubenswrapper[4789]: I0202 22:43:52.843311 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 22:43:53 crc kubenswrapper[4789]: I0202 22:43:53.271570 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fjvt7" Feb 02 22:43:53 crc kubenswrapper[4789]: I0202 22:43:53.271705 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fjvt7" Feb 02 22:43:53 crc kubenswrapper[4789]: I0202 22:43:53.349135 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fjvt7" Feb 02 22:43:54 crc kubenswrapper[4789]: I0202 22:43:54.289872 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fjvt7" Feb 02 22:43:54 crc kubenswrapper[4789]: I0202 22:43:54.346088 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fjvt7"] Feb 02 22:43:56 crc kubenswrapper[4789]: I0202 22:43:56.248979 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fjvt7" podUID="ac45e938-e440-4cd6-bee4-d371c4a799dc" containerName="registry-server" containerID="cri-o://1b9698e0cb890d80097551ecf1bed1905075eca669687cc6dcfd138233cc0af6" gracePeriod=2 Feb 02 22:43:56 crc kubenswrapper[4789]: I0202 22:43:56.755113 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fjvt7" Feb 02 22:43:56 crc kubenswrapper[4789]: I0202 22:43:56.903174 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac45e938-e440-4cd6-bee4-d371c4a799dc-catalog-content\") pod \"ac45e938-e440-4cd6-bee4-d371c4a799dc\" (UID: \"ac45e938-e440-4cd6-bee4-d371c4a799dc\") " Feb 02 22:43:56 crc kubenswrapper[4789]: I0202 22:43:56.903375 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac45e938-e440-4cd6-bee4-d371c4a799dc-utilities\") pod \"ac45e938-e440-4cd6-bee4-d371c4a799dc\" (UID: \"ac45e938-e440-4cd6-bee4-d371c4a799dc\") " Feb 02 22:43:56 crc kubenswrapper[4789]: I0202 22:43:56.903444 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbnpg\" (UniqueName: \"kubernetes.io/projected/ac45e938-e440-4cd6-bee4-d371c4a799dc-kube-api-access-qbnpg\") pod \"ac45e938-e440-4cd6-bee4-d371c4a799dc\" (UID: \"ac45e938-e440-4cd6-bee4-d371c4a799dc\") " Feb 02 22:43:56 crc kubenswrapper[4789]: I0202 22:43:56.904950 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac45e938-e440-4cd6-bee4-d371c4a799dc-utilities" (OuterVolumeSpecName: "utilities") pod "ac45e938-e440-4cd6-bee4-d371c4a799dc" (UID: "ac45e938-e440-4cd6-bee4-d371c4a799dc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 22:43:56 crc kubenswrapper[4789]: I0202 22:43:56.910057 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac45e938-e440-4cd6-bee4-d371c4a799dc-kube-api-access-qbnpg" (OuterVolumeSpecName: "kube-api-access-qbnpg") pod "ac45e938-e440-4cd6-bee4-d371c4a799dc" (UID: "ac45e938-e440-4cd6-bee4-d371c4a799dc"). InnerVolumeSpecName "kube-api-access-qbnpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 22:43:56 crc kubenswrapper[4789]: I0202 22:43:56.964822 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac45e938-e440-4cd6-bee4-d371c4a799dc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac45e938-e440-4cd6-bee4-d371c4a799dc" (UID: "ac45e938-e440-4cd6-bee4-d371c4a799dc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 22:43:57 crc kubenswrapper[4789]: I0202 22:43:57.006322 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac45e938-e440-4cd6-bee4-d371c4a799dc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 22:43:57 crc kubenswrapper[4789]: I0202 22:43:57.006394 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac45e938-e440-4cd6-bee4-d371c4a799dc-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 22:43:57 crc kubenswrapper[4789]: I0202 22:43:57.006416 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbnpg\" (UniqueName: \"kubernetes.io/projected/ac45e938-e440-4cd6-bee4-d371c4a799dc-kube-api-access-qbnpg\") on node \"crc\" DevicePath \"\"" Feb 02 22:43:57 crc kubenswrapper[4789]: I0202 22:43:57.260411 4789 generic.go:334] "Generic (PLEG): container finished" podID="ac45e938-e440-4cd6-bee4-d371c4a799dc" containerID="1b9698e0cb890d80097551ecf1bed1905075eca669687cc6dcfd138233cc0af6" exitCode=0 Feb 02 22:43:57 crc kubenswrapper[4789]: I0202 22:43:57.260490 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fjvt7" Feb 02 22:43:57 crc kubenswrapper[4789]: I0202 22:43:57.260514 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fjvt7" event={"ID":"ac45e938-e440-4cd6-bee4-d371c4a799dc","Type":"ContainerDied","Data":"1b9698e0cb890d80097551ecf1bed1905075eca669687cc6dcfd138233cc0af6"} Feb 02 22:43:57 crc kubenswrapper[4789]: I0202 22:43:57.260988 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fjvt7" event={"ID":"ac45e938-e440-4cd6-bee4-d371c4a799dc","Type":"ContainerDied","Data":"6be13e248fb554bd1aca54616b130301dc7b98db375d3110c9fbbba13fef3e8d"} Feb 02 22:43:57 crc kubenswrapper[4789]: I0202 22:43:57.261049 4789 scope.go:117] "RemoveContainer" containerID="1b9698e0cb890d80097551ecf1bed1905075eca669687cc6dcfd138233cc0af6" Feb 02 22:43:57 crc kubenswrapper[4789]: I0202 22:43:57.293182 4789 scope.go:117] "RemoveContainer" containerID="a72c562a558fa2bfd91b3a69808459c6f223d3edb83b2643de31182be4a844d4" Feb 02 22:43:57 crc kubenswrapper[4789]: I0202 22:43:57.311020 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fjvt7"] Feb 02 22:43:57 crc kubenswrapper[4789]: I0202 22:43:57.320602 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fjvt7"] Feb 02 22:43:57 crc kubenswrapper[4789]: I0202 22:43:57.338725 4789 scope.go:117] "RemoveContainer" containerID="f331c46fb49f3ff489c0c213d29db57e71c46627861e9e73f1cabc44ccb1b700" Feb 02 22:43:57 crc kubenswrapper[4789]: I0202 22:43:57.365272 4789 scope.go:117] "RemoveContainer" containerID="1b9698e0cb890d80097551ecf1bed1905075eca669687cc6dcfd138233cc0af6" Feb 02 22:43:57 crc kubenswrapper[4789]: E0202 22:43:57.365837 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b9698e0cb890d80097551ecf1bed1905075eca669687cc6dcfd138233cc0af6\": container with ID starting with 1b9698e0cb890d80097551ecf1bed1905075eca669687cc6dcfd138233cc0af6 not found: ID does not exist" containerID="1b9698e0cb890d80097551ecf1bed1905075eca669687cc6dcfd138233cc0af6" Feb 02 22:43:57 crc kubenswrapper[4789]: I0202 22:43:57.365881 
Feb 02 22:43:57 crc kubenswrapper[4789]: I0202 22:43:57.365913 4789 scope.go:117] "RemoveContainer" containerID="a72c562a558fa2bfd91b3a69808459c6f223d3edb83b2643de31182be4a844d4"
Feb 02 22:43:57 crc kubenswrapper[4789]: E0202 22:43:57.366318 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a72c562a558fa2bfd91b3a69808459c6f223d3edb83b2643de31182be4a844d4\": container with ID starting with a72c562a558fa2bfd91b3a69808459c6f223d3edb83b2643de31182be4a844d4 not found: ID does not exist" containerID="a72c562a558fa2bfd91b3a69808459c6f223d3edb83b2643de31182be4a844d4"
Feb 02 22:43:57 crc kubenswrapper[4789]: I0202 22:43:57.366344 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a72c562a558fa2bfd91b3a69808459c6f223d3edb83b2643de31182be4a844d4"} err="failed to get container status \"a72c562a558fa2bfd91b3a69808459c6f223d3edb83b2643de31182be4a844d4\": rpc error: code = NotFound desc = could not find container \"a72c562a558fa2bfd91b3a69808459c6f223d3edb83b2643de31182be4a844d4\": container with ID starting with a72c562a558fa2bfd91b3a69808459c6f223d3edb83b2643de31182be4a844d4 not found: ID does not exist"
Feb 02 22:43:57 crc kubenswrapper[4789]: I0202 22:43:57.366361 4789 scope.go:117] "RemoveContainer" containerID="f331c46fb49f3ff489c0c213d29db57e71c46627861e9e73f1cabc44ccb1b700"
Feb 02 22:43:57 crc kubenswrapper[4789]: E0202 22:43:57.366782 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f331c46fb49f3ff489c0c213d29db57e71c46627861e9e73f1cabc44ccb1b700\": container with ID starting with f331c46fb49f3ff489c0c213d29db57e71c46627861e9e73f1cabc44ccb1b700 not found: ID does not exist" containerID="f331c46fb49f3ff489c0c213d29db57e71c46627861e9e73f1cabc44ccb1b700"
Feb 02 22:43:57 crc kubenswrapper[4789]: I0202 22:43:57.366819 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f331c46fb49f3ff489c0c213d29db57e71c46627861e9e73f1cabc44ccb1b700"} err="failed to get container status \"f331c46fb49f3ff489c0c213d29db57e71c46627861e9e73f1cabc44ccb1b700\": rpc error: code = NotFound desc = could not find container \"f331c46fb49f3ff489c0c213d29db57e71c46627861e9e73f1cabc44ccb1b700\": container with ID starting with f331c46fb49f3ff489c0c213d29db57e71c46627861e9e73f1cabc44ccb1b700 not found: ID does not exist"
Feb 02 22:43:58 crc kubenswrapper[4789]: I0202 22:43:58.449850 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac45e938-e440-4cd6-bee4-d371c4a799dc" path="/var/lib/kubelet/pods/ac45e938-e440-4cd6-bee4-d371c4a799dc/volumes"
Feb 02 22:44:22 crc kubenswrapper[4789]: I0202 22:44:22.841370 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
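[Editor's note] The three RemoveContainer attempts above each fail with gRPC NotFound because the PLEG had already observed the containers reaped; the kubelet logs the error and carries on, keeping cleanup idempotent. A minimal stand-alone sketch of that pattern follows; removeContainer and the simulated runtime are hypothetical illustrations, not kubelet code, though the NotFound convention matches the CRI errors logged above.

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer deletes a container by ID, treating "already gone"
// (gRPC NotFound) as success so that repeated cleanup attempts are safe.
func removeContainer(deleteFn func(id string) error, id string) error {
	if err := deleteFn(id); err != nil {
		if status.Code(err) == codes.NotFound {
			// Same situation as in the log: the container was reaped
			// before we got here; nothing left to do.
			fmt.Printf("container %s already removed: %v\n", id, err)
			return nil
		}
		return fmt.Errorf("failed to remove container %s: %w", id, err)
	}
	return nil
}

func main() {
	// Simulated runtime that no longer knows the container
	// (ID shortened from the log above for readability).
	gone := func(id string) error {
		return status.Error(codes.NotFound, "could not find container "+id)
	}
	if err := removeContainer(gone, "1b9698e0"); err != nil {
		fmt.Println("unexpected:", err)
	}
}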
Feb 02 22:44:22 crc kubenswrapper[4789]: I0202 22:44:22.842075 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 22:44:22 crc kubenswrapper[4789]: I0202 22:44:22.842146 4789 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn"
Feb 02 22:44:22 crc kubenswrapper[4789]: I0202 22:44:22.842909 4789 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7ba14292c23dc4f23f84b7674c42768a164c25b022f823f0ebbe7a54004bb378"} pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 22:44:22 crc kubenswrapper[4789]: I0202 22:44:22.843004 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" containerID="cri-o://7ba14292c23dc4f23f84b7674c42768a164c25b022f823f0ebbe7a54004bb378" gracePeriod=600
Feb 02 22:44:23 crc kubenswrapper[4789]: I0202 22:44:23.505978 4789 generic.go:334] "Generic (PLEG): container finished" podID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerID="7ba14292c23dc4f23f84b7674c42768a164c25b022f823f0ebbe7a54004bb378" exitCode=0
Feb 02 22:44:23 crc kubenswrapper[4789]: I0202 22:44:23.506093 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" event={"ID":"bdf018b4-1451-4d37-be6e-05802b67c73e","Type":"ContainerDied","Data":"7ba14292c23dc4f23f84b7674c42768a164c25b022f823f0ebbe7a54004bb378"}
Feb 02 22:44:23 crc kubenswrapper[4789]: I0202 22:44:23.506434 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" event={"ID":"bdf018b4-1451-4d37-be6e-05802b67c73e","Type":"ContainerStarted","Data":"16b35fede0307e2768edeae9bc17688f5a8ff6427f2ae9305826087a6effbd74"}
Feb 02 22:44:23 crc kubenswrapper[4789]: I0202 22:44:23.506471 4789 scope.go:117] "RemoveContainer" containerID="e1882e0fff6a0a98d0a3daff7d66a9d268ab6f6c60c38af019ec759180dddf7d"
Feb 02 22:44:48 crc kubenswrapper[4789]: I0202 22:44:48.050798 4789 scope.go:117] "RemoveContainer" containerID="c1723ac67f75880368c6e8f7880fc6a3d80854529520f88e42d2c5b9ff6b3a4a"
Feb 02 22:44:48 crc kubenswrapper[4789]: I0202 22:44:48.091350 4789 scope.go:117] "RemoveContainer" containerID="ef77e0d66218310d7c09fa212a256af117992e1d62fd914ab84f166bd5d6bdcb"
Feb 02 22:44:48 crc kubenswrapper[4789]: I0202 22:44:48.120602 4789 scope.go:117] "RemoveContainer" containerID="74b699484ed0ab074a613e01dfb47e7f3ec80c5eaadeb3f25b8a6386b1ca1c9b"
Feb 02 22:45:00 crc kubenswrapper[4789]: I0202 22:45:00.146364 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501205-pxd62"]
Feb 02 22:45:00 crc kubenswrapper[4789]: E0202 22:45:00.147317 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac45e938-e440-4cd6-bee4-d371c4a799dc" containerName="extract-utilities"
Feb 02 22:45:00 crc kubenswrapper[4789]: I0202 22:45:00.147341 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac45e938-e440-4cd6-bee4-d371c4a799dc" containerName="extract-utilities"
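[Editor's note] The liveness failure above is an ordinary HTTP GET that could not connect; once the failure threshold is crossed, the kubelet kills the container with the configured grace period (600s here) and restarts it. A rough illustrative probe against the same endpoint, assuming only the URL from the log; this is not the kubelet's prober implementation.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe performs one HTTP liveness check: a GET with a short timeout.
// Any transport error (e.g. "connection refused") counts as a failure,
// as do non-2xx/3xx status codes.
func probe(url string) error {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("probe failed: %w", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("probe failed: status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probe("http://127.0.0.1:8798/health"); err != nil {
		// After repeated failures the kubelet marks the container
		// unhealthy and kills it with the configured grace period.
		fmt.Println("unhealthy:", err)
	}
}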
Feb 02 22:45:00 crc kubenswrapper[4789]: E0202 22:45:00.147358 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac45e938-e440-4cd6-bee4-d371c4a799dc" containerName="registry-server"
Feb 02 22:45:00 crc kubenswrapper[4789]: I0202 22:45:00.147371 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac45e938-e440-4cd6-bee4-d371c4a799dc" containerName="registry-server"
Feb 02 22:45:00 crc kubenswrapper[4789]: E0202 22:45:00.147396 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac45e938-e440-4cd6-bee4-d371c4a799dc" containerName="extract-content"
Feb 02 22:45:00 crc kubenswrapper[4789]: I0202 22:45:00.147409 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac45e938-e440-4cd6-bee4-d371c4a799dc" containerName="extract-content"
Feb 02 22:45:00 crc kubenswrapper[4789]: I0202 22:45:00.147700 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac45e938-e440-4cd6-bee4-d371c4a799dc" containerName="registry-server"
Feb 02 22:45:00 crc kubenswrapper[4789]: I0202 22:45:00.148534 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501205-pxd62"
Feb 02 22:45:00 crc kubenswrapper[4789]: I0202 22:45:00.150141 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 02 22:45:00 crc kubenswrapper[4789]: I0202 22:45:00.151371 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 02 22:45:00 crc kubenswrapper[4789]: I0202 22:45:00.157645 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501205-pxd62"]
Feb 02 22:45:00 crc kubenswrapper[4789]: I0202 22:45:00.178829 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg4nc\" (UniqueName: \"kubernetes.io/projected/3592f1a3-06b5-4bb3-a8bb-3a765665f02a-kube-api-access-hg4nc\") pod \"collect-profiles-29501205-pxd62\" (UID: \"3592f1a3-06b5-4bb3-a8bb-3a765665f02a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501205-pxd62"
Feb 02 22:45:00 crc kubenswrapper[4789]: I0202 22:45:00.178956 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3592f1a3-06b5-4bb3-a8bb-3a765665f02a-config-volume\") pod \"collect-profiles-29501205-pxd62\" (UID: \"3592f1a3-06b5-4bb3-a8bb-3a765665f02a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501205-pxd62"
Feb 02 22:45:00 crc kubenswrapper[4789]: I0202 22:45:00.179043 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3592f1a3-06b5-4bb3-a8bb-3a765665f02a-secret-volume\") pod \"collect-profiles-29501205-pxd62\" (UID: \"3592f1a3-06b5-4bb3-a8bb-3a765665f02a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501205-pxd62"
Feb 02 22:45:00 crc kubenswrapper[4789]: I0202 22:45:00.280226 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg4nc\" (UniqueName: \"kubernetes.io/projected/3592f1a3-06b5-4bb3-a8bb-3a765665f02a-kube-api-access-hg4nc\") pod \"collect-profiles-29501205-pxd62\" (UID: \"3592f1a3-06b5-4bb3-a8bb-3a765665f02a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501205-pxd62"
Feb 02 22:45:00 crc kubenswrapper[4789]: I0202 22:45:00.280335 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3592f1a3-06b5-4bb3-a8bb-3a765665f02a-config-volume\") pod \"collect-profiles-29501205-pxd62\" (UID: \"3592f1a3-06b5-4bb3-a8bb-3a765665f02a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501205-pxd62"
Feb 02 22:45:00 crc kubenswrapper[4789]: I0202 22:45:00.280470 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3592f1a3-06b5-4bb3-a8bb-3a765665f02a-secret-volume\") pod \"collect-profiles-29501205-pxd62\" (UID: \"3592f1a3-06b5-4bb3-a8bb-3a765665f02a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501205-pxd62"
Feb 02 22:45:00 crc kubenswrapper[4789]: I0202 22:45:00.281231 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3592f1a3-06b5-4bb3-a8bb-3a765665f02a-config-volume\") pod \"collect-profiles-29501205-pxd62\" (UID: \"3592f1a3-06b5-4bb3-a8bb-3a765665f02a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501205-pxd62"
Feb 02 22:45:00 crc kubenswrapper[4789]: I0202 22:45:00.288251 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3592f1a3-06b5-4bb3-a8bb-3a765665f02a-secret-volume\") pod \"collect-profiles-29501205-pxd62\" (UID: \"3592f1a3-06b5-4bb3-a8bb-3a765665f02a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501205-pxd62"
Feb 02 22:45:00 crc kubenswrapper[4789]: I0202 22:45:00.300721 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg4nc\" (UniqueName: \"kubernetes.io/projected/3592f1a3-06b5-4bb3-a8bb-3a765665f02a-kube-api-access-hg4nc\") pod \"collect-profiles-29501205-pxd62\" (UID: \"3592f1a3-06b5-4bb3-a8bb-3a765665f02a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501205-pxd62"
Feb 02 22:45:00 crc kubenswrapper[4789]: I0202 22:45:00.471124 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501205-pxd62"
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501205-pxd62" Feb 02 22:45:00 crc kubenswrapper[4789]: W0202 22:45:00.760808 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3592f1a3_06b5_4bb3_a8bb_3a765665f02a.slice/crio-d48e83989f4f53835aa662e4a8fad14e5f35b576cb99214434d38a20b6a38b51 WatchSource:0}: Error finding container d48e83989f4f53835aa662e4a8fad14e5f35b576cb99214434d38a20b6a38b51: Status 404 returned error can't find the container with id d48e83989f4f53835aa662e4a8fad14e5f35b576cb99214434d38a20b6a38b51 Feb 02 22:45:00 crc kubenswrapper[4789]: I0202 22:45:00.761574 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501205-pxd62"] Feb 02 22:45:00 crc kubenswrapper[4789]: I0202 22:45:00.977471 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501205-pxd62" event={"ID":"3592f1a3-06b5-4bb3-a8bb-3a765665f02a","Type":"ContainerStarted","Data":"9b38b698bb13c0a33555a2d879819d44831e623837e788a18858f78e14ba672f"} Feb 02 22:45:00 crc kubenswrapper[4789]: I0202 22:45:00.977843 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501205-pxd62" event={"ID":"3592f1a3-06b5-4bb3-a8bb-3a765665f02a","Type":"ContainerStarted","Data":"d48e83989f4f53835aa662e4a8fad14e5f35b576cb99214434d38a20b6a38b51"} Feb 02 22:45:00 crc kubenswrapper[4789]: I0202 22:45:00.994293 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29501205-pxd62" podStartSLOduration=0.994265144 podStartE2EDuration="994.265144ms" podCreationTimestamp="2026-02-02 22:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 22:45:00.991790814 +0000 UTC m=+5121.286815873" watchObservedRunningTime="2026-02-02 22:45:00.994265144 +0000 UTC m=+5121.289290203" Feb 02 22:45:01 crc kubenswrapper[4789]: I0202 22:45:01.991395 4789 generic.go:334] "Generic (PLEG): container finished" podID="3592f1a3-06b5-4bb3-a8bb-3a765665f02a" containerID="9b38b698bb13c0a33555a2d879819d44831e623837e788a18858f78e14ba672f" exitCode=0 Feb 02 22:45:01 crc kubenswrapper[4789]: I0202 22:45:01.991477 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501205-pxd62" event={"ID":"3592f1a3-06b5-4bb3-a8bb-3a765665f02a","Type":"ContainerDied","Data":"9b38b698bb13c0a33555a2d879819d44831e623837e788a18858f78e14ba672f"} Feb 02 22:45:03 crc kubenswrapper[4789]: I0202 22:45:03.404765 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501205-pxd62" Feb 02 22:45:03 crc kubenswrapper[4789]: I0202 22:45:03.434711 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3592f1a3-06b5-4bb3-a8bb-3a765665f02a-secret-volume\") pod \"3592f1a3-06b5-4bb3-a8bb-3a765665f02a\" (UID: \"3592f1a3-06b5-4bb3-a8bb-3a765665f02a\") " Feb 02 22:45:03 crc kubenswrapper[4789]: I0202 22:45:03.434782 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hg4nc\" (UniqueName: \"kubernetes.io/projected/3592f1a3-06b5-4bb3-a8bb-3a765665f02a-kube-api-access-hg4nc\") pod \"3592f1a3-06b5-4bb3-a8bb-3a765665f02a\" (UID: \"3592f1a3-06b5-4bb3-a8bb-3a765665f02a\") " Feb 02 22:45:03 crc kubenswrapper[4789]: I0202 22:45:03.434897 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3592f1a3-06b5-4bb3-a8bb-3a765665f02a-config-volume\") pod \"3592f1a3-06b5-4bb3-a8bb-3a765665f02a\" (UID: \"3592f1a3-06b5-4bb3-a8bb-3a765665f02a\") " Feb 02 22:45:03 crc kubenswrapper[4789]: I0202 22:45:03.437178 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3592f1a3-06b5-4bb3-a8bb-3a765665f02a-config-volume" (OuterVolumeSpecName: "config-volume") pod "3592f1a3-06b5-4bb3-a8bb-3a765665f02a" (UID: "3592f1a3-06b5-4bb3-a8bb-3a765665f02a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 22:45:03 crc kubenswrapper[4789]: I0202 22:45:03.473869 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3592f1a3-06b5-4bb3-a8bb-3a765665f02a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3592f1a3-06b5-4bb3-a8bb-3a765665f02a" (UID: "3592f1a3-06b5-4bb3-a8bb-3a765665f02a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 22:45:03 crc kubenswrapper[4789]: I0202 22:45:03.474305 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3592f1a3-06b5-4bb3-a8bb-3a765665f02a-kube-api-access-hg4nc" (OuterVolumeSpecName: "kube-api-access-hg4nc") pod "3592f1a3-06b5-4bb3-a8bb-3a765665f02a" (UID: "3592f1a3-06b5-4bb3-a8bb-3a765665f02a"). InnerVolumeSpecName "kube-api-access-hg4nc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 22:45:03 crc kubenswrapper[4789]: I0202 22:45:03.536847 4789 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3592f1a3-06b5-4bb3-a8bb-3a765665f02a-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 22:45:03 crc kubenswrapper[4789]: I0202 22:45:03.536882 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hg4nc\" (UniqueName: \"kubernetes.io/projected/3592f1a3-06b5-4bb3-a8bb-3a765665f02a-kube-api-access-hg4nc\") on node \"crc\" DevicePath \"\"" Feb 02 22:45:03 crc kubenswrapper[4789]: I0202 22:45:03.536891 4789 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3592f1a3-06b5-4bb3-a8bb-3a765665f02a-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 22:45:04 crc kubenswrapper[4789]: I0202 22:45:04.027673 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501205-pxd62" event={"ID":"3592f1a3-06b5-4bb3-a8bb-3a765665f02a","Type":"ContainerDied","Data":"d48e83989f4f53835aa662e4a8fad14e5f35b576cb99214434d38a20b6a38b51"} Feb 02 22:45:04 crc kubenswrapper[4789]: I0202 22:45:04.027718 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d48e83989f4f53835aa662e4a8fad14e5f35b576cb99214434d38a20b6a38b51" Feb 02 22:45:04 crc kubenswrapper[4789]: I0202 22:45:04.027772 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501205-pxd62" Feb 02 22:45:04 crc kubenswrapper[4789]: I0202 22:45:04.505107 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501160-5s9sq"] Feb 02 22:45:04 crc kubenswrapper[4789]: I0202 22:45:04.515406 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501160-5s9sq"] Feb 02 22:45:04 crc kubenswrapper[4789]: I0202 22:45:04.557721 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Feb 02 22:45:04 crc kubenswrapper[4789]: E0202 22:45:04.558169 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3592f1a3-06b5-4bb3-a8bb-3a765665f02a" containerName="collect-profiles" Feb 02 22:45:04 crc kubenswrapper[4789]: I0202 22:45:04.558199 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="3592f1a3-06b5-4bb3-a8bb-3a765665f02a" containerName="collect-profiles" Feb 02 22:45:04 crc kubenswrapper[4789]: I0202 22:45:04.558498 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="3592f1a3-06b5-4bb3-a8bb-3a765665f02a" containerName="collect-profiles" Feb 02 22:45:04 crc kubenswrapper[4789]: I0202 22:45:04.559310 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Feb 02 22:45:04 crc kubenswrapper[4789]: I0202 22:45:04.562192 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-6vlcn" Feb 02 22:45:04 crc kubenswrapper[4789]: I0202 22:45:04.569755 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Feb 02 22:45:04 crc kubenswrapper[4789]: I0202 22:45:04.755428 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6q6l\" (UniqueName: \"kubernetes.io/projected/0b49c7ba-8420-4c1c-9b1a-68242591b0c8-kube-api-access-m6q6l\") pod \"mariadb-copy-data\" (UID: \"0b49c7ba-8420-4c1c-9b1a-68242591b0c8\") " pod="openstack/mariadb-copy-data" Feb 02 22:45:04 crc kubenswrapper[4789]: I0202 22:45:04.755847 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6796c13e-c51a-4de4-9e0a-9f72d6792964\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6796c13e-c51a-4de4-9e0a-9f72d6792964\") pod \"mariadb-copy-data\" (UID: \"0b49c7ba-8420-4c1c-9b1a-68242591b0c8\") " pod="openstack/mariadb-copy-data" Feb 02 22:45:04 crc kubenswrapper[4789]: I0202 22:45:04.858075 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6q6l\" (UniqueName: \"kubernetes.io/projected/0b49c7ba-8420-4c1c-9b1a-68242591b0c8-kube-api-access-m6q6l\") pod \"mariadb-copy-data\" (UID: \"0b49c7ba-8420-4c1c-9b1a-68242591b0c8\") " pod="openstack/mariadb-copy-data" Feb 02 22:45:04 crc kubenswrapper[4789]: I0202 22:45:04.858270 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6796c13e-c51a-4de4-9e0a-9f72d6792964\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6796c13e-c51a-4de4-9e0a-9f72d6792964\") pod \"mariadb-copy-data\" (UID: \"0b49c7ba-8420-4c1c-9b1a-68242591b0c8\") " pod="openstack/mariadb-copy-data" Feb 02 22:45:04 crc kubenswrapper[4789]: I0202 22:45:04.864712 4789 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 02 22:45:04 crc kubenswrapper[4789]: I0202 22:45:04.864769 4789 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6796c13e-c51a-4de4-9e0a-9f72d6792964\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6796c13e-c51a-4de4-9e0a-9f72d6792964\") pod \"mariadb-copy-data\" (UID: \"0b49c7ba-8420-4c1c-9b1a-68242591b0c8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3318a9fa5c115fd688a0651835a16f22997bf505fd7ab4d3250261ed2ab9ce96/globalmount\"" pod="openstack/mariadb-copy-data"
Feb 02 22:45:04 crc kubenswrapper[4789]: I0202 22:45:04.882498 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6q6l\" (UniqueName: \"kubernetes.io/projected/0b49c7ba-8420-4c1c-9b1a-68242591b0c8-kube-api-access-m6q6l\") pod \"mariadb-copy-data\" (UID: \"0b49c7ba-8420-4c1c-9b1a-68242591b0c8\") " pod="openstack/mariadb-copy-data"
Feb 02 22:45:04 crc kubenswrapper[4789]: I0202 22:45:04.918196 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6796c13e-c51a-4de4-9e0a-9f72d6792964\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6796c13e-c51a-4de4-9e0a-9f72d6792964\") pod \"mariadb-copy-data\" (UID: \"0b49c7ba-8420-4c1c-9b1a-68242591b0c8\") " pod="openstack/mariadb-copy-data"
Feb 02 22:45:05 crc kubenswrapper[4789]: I0202 22:45:05.182633 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data"
Feb 02 22:45:05 crc kubenswrapper[4789]: I0202 22:45:05.767604 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"]
Feb 02 22:45:06 crc kubenswrapper[4789]: I0202 22:45:06.043808 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"0b49c7ba-8420-4c1c-9b1a-68242591b0c8","Type":"ContainerStarted","Data":"8ee0879cf848d16924b8489b8c9fb99fb1680237719bc5a27392044fa62b8655"}
Feb 02 22:45:06 crc kubenswrapper[4789]: I0202 22:45:06.044148 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"0b49c7ba-8420-4c1c-9b1a-68242591b0c8","Type":"ContainerStarted","Data":"ae0f577f7bd04c34e5e76c9ebe5374ec2cbc57504f37327c08254f63efd4c085"}
Feb 02 22:45:06 crc kubenswrapper[4789]: I0202 22:45:06.068694 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=3.068676671 podStartE2EDuration="3.068676671s" podCreationTimestamp="2026-02-02 22:45:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 22:45:06.059376568 +0000 UTC m=+5126.354401607" watchObservedRunningTime="2026-02-02 22:45:06.068676671 +0000 UTC m=+5126.363701690"
Feb 02 22:45:06 crc kubenswrapper[4789]: I0202 22:45:06.437676 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069" path="/var/lib/kubelet/pods/a1bb7e7b-5bfd-49b8-8c42-e2a0d0d5a069/volumes"
Feb 02 22:45:09 crc kubenswrapper[4789]: I0202 22:45:09.024183 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"]
Feb 02 22:45:09 crc kubenswrapper[4789]: I0202 22:45:09.025744 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Feb 02 22:45:09 crc kubenswrapper[4789]: I0202 22:45:09.040230 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79xx9\" (UniqueName: \"kubernetes.io/projected/28b591f5-b15d-442a-b2df-7df1d199bbea-kube-api-access-79xx9\") pod \"mariadb-client\" (UID: \"28b591f5-b15d-442a-b2df-7df1d199bbea\") " pod="openstack/mariadb-client"
Feb 02 22:45:09 crc kubenswrapper[4789]: I0202 22:45:09.042769 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Feb 02 22:45:09 crc kubenswrapper[4789]: I0202 22:45:09.142333 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79xx9\" (UniqueName: \"kubernetes.io/projected/28b591f5-b15d-442a-b2df-7df1d199bbea-kube-api-access-79xx9\") pod \"mariadb-client\" (UID: \"28b591f5-b15d-442a-b2df-7df1d199bbea\") " pod="openstack/mariadb-client"
Feb 02 22:45:09 crc kubenswrapper[4789]: I0202 22:45:09.177867 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79xx9\" (UniqueName: \"kubernetes.io/projected/28b591f5-b15d-442a-b2df-7df1d199bbea-kube-api-access-79xx9\") pod \"mariadb-client\" (UID: \"28b591f5-b15d-442a-b2df-7df1d199bbea\") " pod="openstack/mariadb-client"
Feb 02 22:45:09 crc kubenswrapper[4789]: I0202 22:45:09.363199 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Feb 02 22:45:09 crc kubenswrapper[4789]: I0202 22:45:09.941129 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Feb 02 22:45:09 crc kubenswrapper[4789]: W0202 22:45:09.948325 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28b591f5_b15d_442a_b2df_7df1d199bbea.slice/crio-8d2cb7ce76665929cf6e1c65578c1bbda82636a4d886885a657091699c0485c5 WatchSource:0}: Error finding container 8d2cb7ce76665929cf6e1c65578c1bbda82636a4d886885a657091699c0485c5: Status 404 returned error can't find the container with id 8d2cb7ce76665929cf6e1c65578c1bbda82636a4d886885a657091699c0485c5
Feb 02 22:45:10 crc kubenswrapper[4789]: I0202 22:45:10.085624 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"28b591f5-b15d-442a-b2df-7df1d199bbea","Type":"ContainerStarted","Data":"8d2cb7ce76665929cf6e1c65578c1bbda82636a4d886885a657091699c0485c5"}
Feb 02 22:45:11 crc kubenswrapper[4789]: I0202 22:45:11.098419 4789 generic.go:334] "Generic (PLEG): container finished" podID="28b591f5-b15d-442a-b2df-7df1d199bbea" containerID="e9f746d134672815c4df79d3dd0d65de5f260d45c52a21592a75a61e46d029c2" exitCode=0
Feb 02 22:45:11 crc kubenswrapper[4789]: I0202 22:45:11.098503 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"28b591f5-b15d-442a-b2df-7df1d199bbea","Type":"ContainerDied","Data":"e9f746d134672815c4df79d3dd0d65de5f260d45c52a21592a75a61e46d029c2"}
Feb 02 22:45:12 crc kubenswrapper[4789]: I0202 22:45:12.534338 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Need to start a new one" pod="openstack/mariadb-client" Feb 02 22:45:12 crc kubenswrapper[4789]: I0202 22:45:12.562816 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_28b591f5-b15d-442a-b2df-7df1d199bbea/mariadb-client/0.log" Feb 02 22:45:12 crc kubenswrapper[4789]: I0202 22:45:12.593544 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Feb 02 22:45:12 crc kubenswrapper[4789]: I0202 22:45:12.601095 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Feb 02 22:45:12 crc kubenswrapper[4789]: I0202 22:45:12.702750 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79xx9\" (UniqueName: \"kubernetes.io/projected/28b591f5-b15d-442a-b2df-7df1d199bbea-kube-api-access-79xx9\") pod \"28b591f5-b15d-442a-b2df-7df1d199bbea\" (UID: \"28b591f5-b15d-442a-b2df-7df1d199bbea\") " Feb 02 22:45:12 crc kubenswrapper[4789]: I0202 22:45:12.724072 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28b591f5-b15d-442a-b2df-7df1d199bbea-kube-api-access-79xx9" (OuterVolumeSpecName: "kube-api-access-79xx9") pod "28b591f5-b15d-442a-b2df-7df1d199bbea" (UID: "28b591f5-b15d-442a-b2df-7df1d199bbea"). InnerVolumeSpecName "kube-api-access-79xx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 22:45:12 crc kubenswrapper[4789]: I0202 22:45:12.757863 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Feb 02 22:45:12 crc kubenswrapper[4789]: E0202 22:45:12.758157 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28b591f5-b15d-442a-b2df-7df1d199bbea" containerName="mariadb-client" Feb 02 22:45:12 crc kubenswrapper[4789]: I0202 22:45:12.758172 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="28b591f5-b15d-442a-b2df-7df1d199bbea" containerName="mariadb-client" Feb 02 22:45:12 crc kubenswrapper[4789]: I0202 22:45:12.758310 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="28b591f5-b15d-442a-b2df-7df1d199bbea" containerName="mariadb-client" Feb 02 22:45:12 crc kubenswrapper[4789]: I0202 22:45:12.758781 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Feb 02 22:45:12 crc kubenswrapper[4789]: I0202 22:45:12.771839 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Feb 02 22:45:12 crc kubenswrapper[4789]: I0202 22:45:12.829275 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzgg7\" (UniqueName: \"kubernetes.io/projected/aa527155-592c-44fb-a786-c1fb757957c0-kube-api-access-vzgg7\") pod \"mariadb-client\" (UID: \"aa527155-592c-44fb-a786-c1fb757957c0\") " pod="openstack/mariadb-client" Feb 02 22:45:12 crc kubenswrapper[4789]: I0202 22:45:12.829784 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79xx9\" (UniqueName: \"kubernetes.io/projected/28b591f5-b15d-442a-b2df-7df1d199bbea-kube-api-access-79xx9\") on node \"crc\" DevicePath \"\"" Feb 02 22:45:12 crc kubenswrapper[4789]: I0202 22:45:12.931566 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzgg7\" (UniqueName: \"kubernetes.io/projected/aa527155-592c-44fb-a786-c1fb757957c0-kube-api-access-vzgg7\") pod \"mariadb-client\" (UID: \"aa527155-592c-44fb-a786-c1fb757957c0\") " pod="openstack/mariadb-client" Feb 02 22:45:12 crc kubenswrapper[4789]: I0202 22:45:12.962888 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzgg7\" (UniqueName: \"kubernetes.io/projected/aa527155-592c-44fb-a786-c1fb757957c0-kube-api-access-vzgg7\") pod \"mariadb-client\" (UID: \"aa527155-592c-44fb-a786-c1fb757957c0\") " pod="openstack/mariadb-client" Feb 02 22:45:13 crc kubenswrapper[4789]: I0202 22:45:13.122213 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d2cb7ce76665929cf6e1c65578c1bbda82636a4d886885a657091699c0485c5" Feb 02 22:45:13 crc kubenswrapper[4789]: I0202 22:45:13.122265 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Feb 02 22:45:13 crc kubenswrapper[4789]: I0202 22:45:13.131843 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Feb 02 22:45:13 crc kubenswrapper[4789]: I0202 22:45:13.150127 4789 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="28b591f5-b15d-442a-b2df-7df1d199bbea" podUID="aa527155-592c-44fb-a786-c1fb757957c0" Feb 02 22:45:13 crc kubenswrapper[4789]: I0202 22:45:13.652465 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Feb 02 22:45:13 crc kubenswrapper[4789]: W0202 22:45:13.661045 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa527155_592c_44fb_a786_c1fb757957c0.slice/crio-a0f4ae019bb8df6a7671d21faf55ce67eb0726dba00c4bd2a0a3a81b12cde786 WatchSource:0}: Error finding container a0f4ae019bb8df6a7671d21faf55ce67eb0726dba00c4bd2a0a3a81b12cde786: Status 404 returned error can't find the container with id a0f4ae019bb8df6a7671d21faf55ce67eb0726dba00c4bd2a0a3a81b12cde786 Feb 02 22:45:14 crc kubenswrapper[4789]: I0202 22:45:14.148765 4789 generic.go:334] "Generic (PLEG): container finished" podID="aa527155-592c-44fb-a786-c1fb757957c0" containerID="01a5109c2df719e318faf34cbe82f07df79cbc4a3416be11d623cd60a502f6e2" exitCode=0 Feb 02 22:45:14 crc kubenswrapper[4789]: I0202 22:45:14.148838 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"aa527155-592c-44fb-a786-c1fb757957c0","Type":"ContainerDied","Data":"01a5109c2df719e318faf34cbe82f07df79cbc4a3416be11d623cd60a502f6e2"} Feb 02 22:45:14 crc kubenswrapper[4789]: I0202 22:45:14.148879 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"aa527155-592c-44fb-a786-c1fb757957c0","Type":"ContainerStarted","Data":"a0f4ae019bb8df6a7671d21faf55ce67eb0726dba00c4bd2a0a3a81b12cde786"} Feb 02 22:45:14 crc kubenswrapper[4789]: I0202 22:45:14.437278 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28b591f5-b15d-442a-b2df-7df1d199bbea" path="/var/lib/kubelet/pods/28b591f5-b15d-442a-b2df-7df1d199bbea/volumes" Feb 02 22:45:15 crc kubenswrapper[4789]: I0202 22:45:15.557161 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Feb 02 22:45:15 crc kubenswrapper[4789]: I0202 22:45:15.578697 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_aa527155-592c-44fb-a786-c1fb757957c0/mariadb-client/0.log" Feb 02 22:45:15 crc kubenswrapper[4789]: I0202 22:45:15.591384 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzgg7\" (UniqueName: \"kubernetes.io/projected/aa527155-592c-44fb-a786-c1fb757957c0-kube-api-access-vzgg7\") pod \"aa527155-592c-44fb-a786-c1fb757957c0\" (UID: \"aa527155-592c-44fb-a786-c1fb757957c0\") " Feb 02 22:45:15 crc kubenswrapper[4789]: I0202 22:45:15.603892 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa527155-592c-44fb-a786-c1fb757957c0-kube-api-access-vzgg7" (OuterVolumeSpecName: "kube-api-access-vzgg7") pod "aa527155-592c-44fb-a786-c1fb757957c0" (UID: "aa527155-592c-44fb-a786-c1fb757957c0"). InnerVolumeSpecName "kube-api-access-vzgg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 22:45:15 crc kubenswrapper[4789]: I0202 22:45:15.609438 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Feb 02 22:45:15 crc kubenswrapper[4789]: I0202 22:45:15.616606 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Feb 02 22:45:15 crc kubenswrapper[4789]: I0202 22:45:15.694284 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzgg7\" (UniqueName: \"kubernetes.io/projected/aa527155-592c-44fb-a786-c1fb757957c0-kube-api-access-vzgg7\") on node \"crc\" DevicePath \"\"" Feb 02 22:45:16 crc kubenswrapper[4789]: I0202 22:45:16.170123 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0f4ae019bb8df6a7671d21faf55ce67eb0726dba00c4bd2a0a3a81b12cde786" Feb 02 22:45:16 crc kubenswrapper[4789]: I0202 22:45:16.170269 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Feb 02 22:45:16 crc kubenswrapper[4789]: I0202 22:45:16.436888 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa527155-592c-44fb-a786-c1fb757957c0" path="/var/lib/kubelet/pods/aa527155-592c-44fb-a786-c1fb757957c0/volumes" Feb 02 22:45:48 crc kubenswrapper[4789]: I0202 22:45:48.223735 4789 scope.go:117] "RemoveContainer" containerID="d0c70e18c8fe13219cea8a79460607b7ea51e6640a89448d9df0fe5e6f3bc98d" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.263854 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 02 22:45:49 crc kubenswrapper[4789]: E0202 22:45:49.264339 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa527155-592c-44fb-a786-c1fb757957c0" containerName="mariadb-client" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.264374 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa527155-592c-44fb-a786-c1fb757957c0" containerName="mariadb-client" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.264591 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa527155-592c-44fb-a786-c1fb757957c0" containerName="mariadb-client" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.265337 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.268913 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.269387 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-kmh6b" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.269667 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.277561 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.314401 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.315881 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-2" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.331237 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.332807 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.341619 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.349949 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.414518 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f5e652c9-762f-4067-9590-b345432d20d2-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"f5e652c9-762f-4067-9590-b345432d20d2\") " pod="openstack/ovsdbserver-nb-2" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.414606 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6d2bab01-a5a2-4a89-b999-dcd363a40b2e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6d2bab01-a5a2-4a89-b999-dcd363a40b2e\") pod \"ovsdbserver-nb-2\" (UID: \"f5e652c9-762f-4067-9590-b345432d20d2\") " pod="openstack/ovsdbserver-nb-2" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.414671 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-799b5\" (UniqueName: \"kubernetes.io/projected/3010e3cf-9c6a-4ace-aae1-e12d804aa800-kube-api-access-799b5\") pod \"ovsdbserver-nb-1\" (UID: \"3010e3cf-9c6a-4ace-aae1-e12d804aa800\") " pod="openstack/ovsdbserver-nb-1" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.414700 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3010e3cf-9c6a-4ace-aae1-e12d804aa800-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"3010e3cf-9c6a-4ace-aae1-e12d804aa800\") " pod="openstack/ovsdbserver-nb-1" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.414770 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts6tv\" (UniqueName: \"kubernetes.io/projected/990c1a60-2c79-469a-9c85-4850c0865450-kube-api-access-ts6tv\") pod \"ovsdbserver-nb-0\" (UID: \"990c1a60-2c79-469a-9c85-4850c0865450\") " pod="openstack/ovsdbserver-nb-0" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.414936 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3010e3cf-9c6a-4ace-aae1-e12d804aa800-config\") pod \"ovsdbserver-nb-1\" (UID: \"3010e3cf-9c6a-4ace-aae1-e12d804aa800\") " pod="openstack/ovsdbserver-nb-1" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.415049 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3010e3cf-9c6a-4ace-aae1-e12d804aa800-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"3010e3cf-9c6a-4ace-aae1-e12d804aa800\") " pod="openstack/ovsdbserver-nb-1" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.415117 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5e652c9-762f-4067-9590-b345432d20d2-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"f5e652c9-762f-4067-9590-b345432d20d2\") " pod="openstack/ovsdbserver-nb-2" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.415309 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3010e3cf-9c6a-4ace-aae1-e12d804aa800-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"3010e3cf-9c6a-4ace-aae1-e12d804aa800\") " pod="openstack/ovsdbserver-nb-1" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.415393 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3275483c-644a-46c4-80ab-f3aae6d0fd95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3275483c-644a-46c4-80ab-f3aae6d0fd95\") pod \"ovsdbserver-nb-0\" (UID: \"990c1a60-2c79-469a-9c85-4850c0865450\") " pod="openstack/ovsdbserver-nb-0" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.415442 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/990c1a60-2c79-469a-9c85-4850c0865450-config\") pod \"ovsdbserver-nb-0\" (UID: \"990c1a60-2c79-469a-9c85-4850c0865450\") " pod="openstack/ovsdbserver-nb-0" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.415486 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-28aca9c6-aad4-4d1c-a073-e08b43091842\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28aca9c6-aad4-4d1c-a073-e08b43091842\") pod \"ovsdbserver-nb-1\" (UID: \"3010e3cf-9c6a-4ace-aae1-e12d804aa800\") " pod="openstack/ovsdbserver-nb-1" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.415535 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5e652c9-762f-4067-9590-b345432d20d2-config\") pod \"ovsdbserver-nb-2\" (UID: \"f5e652c9-762f-4067-9590-b345432d20d2\") " pod="openstack/ovsdbserver-nb-2" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.415570 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/990c1a60-2c79-469a-9c85-4850c0865450-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"990c1a60-2c79-469a-9c85-4850c0865450\") " pod="openstack/ovsdbserver-nb-0" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.415624 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/990c1a60-2c79-469a-9c85-4850c0865450-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"990c1a60-2c79-469a-9c85-4850c0865450\") " pod="openstack/ovsdbserver-nb-0" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.415685 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/990c1a60-2c79-469a-9c85-4850c0865450-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"990c1a60-2c79-469a-9c85-4850c0865450\") " pod="openstack/ovsdbserver-nb-0" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.415732 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/f5e652c9-762f-4067-9590-b345432d20d2-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"f5e652c9-762f-4067-9590-b345432d20d2\") " pod="openstack/ovsdbserver-nb-2" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.415887 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khlmc\" (UniqueName: \"kubernetes.io/projected/f5e652c9-762f-4067-9590-b345432d20d2-kube-api-access-khlmc\") pod \"ovsdbserver-nb-2\" (UID: \"f5e652c9-762f-4067-9590-b345432d20d2\") " pod="openstack/ovsdbserver-nb-2" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.478434 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.480471 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.483232 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.483859 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.488020 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-9mf65" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.510953 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.517870 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f5e652c9-762f-4067-9590-b345432d20d2-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"f5e652c9-762f-4067-9590-b345432d20d2\") " pod="openstack/ovsdbserver-nb-2" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.517912 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6d2bab01-a5a2-4a89-b999-dcd363a40b2e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6d2bab01-a5a2-4a89-b999-dcd363a40b2e\") pod \"ovsdbserver-nb-2\" (UID: \"f5e652c9-762f-4067-9590-b345432d20d2\") " pod="openstack/ovsdbserver-nb-2" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.517954 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-799b5\" (UniqueName: \"kubernetes.io/projected/3010e3cf-9c6a-4ace-aae1-e12d804aa800-kube-api-access-799b5\") pod \"ovsdbserver-nb-1\" (UID: \"3010e3cf-9c6a-4ace-aae1-e12d804aa800\") " pod="openstack/ovsdbserver-nb-1" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.517975 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3010e3cf-9c6a-4ace-aae1-e12d804aa800-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"3010e3cf-9c6a-4ace-aae1-e12d804aa800\") " pod="openstack/ovsdbserver-nb-1" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.517991 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts6tv\" (UniqueName: \"kubernetes.io/projected/990c1a60-2c79-469a-9c85-4850c0865450-kube-api-access-ts6tv\") pod \"ovsdbserver-nb-0\" (UID: \"990c1a60-2c79-469a-9c85-4850c0865450\") " pod="openstack/ovsdbserver-nb-0" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.518013 4789 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3010e3cf-9c6a-4ace-aae1-e12d804aa800-config\") pod \"ovsdbserver-nb-1\" (UID: \"3010e3cf-9c6a-4ace-aae1-e12d804aa800\") " pod="openstack/ovsdbserver-nb-1" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.518036 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5e652c9-762f-4067-9590-b345432d20d2-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"f5e652c9-762f-4067-9590-b345432d20d2\") " pod="openstack/ovsdbserver-nb-2" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.518052 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3010e3cf-9c6a-4ace-aae1-e12d804aa800-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"3010e3cf-9c6a-4ace-aae1-e12d804aa800\") " pod="openstack/ovsdbserver-nb-1" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.518104 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3010e3cf-9c6a-4ace-aae1-e12d804aa800-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"3010e3cf-9c6a-4ace-aae1-e12d804aa800\") " pod="openstack/ovsdbserver-nb-1" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.518133 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3275483c-644a-46c4-80ab-f3aae6d0fd95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3275483c-644a-46c4-80ab-f3aae6d0fd95\") pod \"ovsdbserver-nb-0\" (UID: \"990c1a60-2c79-469a-9c85-4850c0865450\") " pod="openstack/ovsdbserver-nb-0" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.518151 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/990c1a60-2c79-469a-9c85-4850c0865450-config\") pod \"ovsdbserver-nb-0\" (UID: \"990c1a60-2c79-469a-9c85-4850c0865450\") " pod="openstack/ovsdbserver-nb-0" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.518170 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-28aca9c6-aad4-4d1c-a073-e08b43091842\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28aca9c6-aad4-4d1c-a073-e08b43091842\") pod \"ovsdbserver-nb-1\" (UID: \"3010e3cf-9c6a-4ace-aae1-e12d804aa800\") " pod="openstack/ovsdbserver-nb-1" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.518188 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5e652c9-762f-4067-9590-b345432d20d2-config\") pod \"ovsdbserver-nb-2\" (UID: \"f5e652c9-762f-4067-9590-b345432d20d2\") " pod="openstack/ovsdbserver-nb-2" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.518204 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/990c1a60-2c79-469a-9c85-4850c0865450-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"990c1a60-2c79-469a-9c85-4850c0865450\") " pod="openstack/ovsdbserver-nb-0" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.518219 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/990c1a60-2c79-469a-9c85-4850c0865450-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: 
\"990c1a60-2c79-469a-9c85-4850c0865450\") " pod="openstack/ovsdbserver-nb-0" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.518250 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/990c1a60-2c79-469a-9c85-4850c0865450-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"990c1a60-2c79-469a-9c85-4850c0865450\") " pod="openstack/ovsdbserver-nb-0" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.518269 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5e652c9-762f-4067-9590-b345432d20d2-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"f5e652c9-762f-4067-9590-b345432d20d2\") " pod="openstack/ovsdbserver-nb-2" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.518296 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khlmc\" (UniqueName: \"kubernetes.io/projected/f5e652c9-762f-4067-9590-b345432d20d2-kube-api-access-khlmc\") pod \"ovsdbserver-nb-2\" (UID: \"f5e652c9-762f-4067-9590-b345432d20d2\") " pod="openstack/ovsdbserver-nb-2" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.518964 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f5e652c9-762f-4067-9590-b345432d20d2-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"f5e652c9-762f-4067-9590-b345432d20d2\") " pod="openstack/ovsdbserver-nb-2" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.520451 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3010e3cf-9c6a-4ace-aae1-e12d804aa800-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"3010e3cf-9c6a-4ace-aae1-e12d804aa800\") " pod="openstack/ovsdbserver-nb-1" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.520668 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5e652c9-762f-4067-9590-b345432d20d2-config\") pod \"ovsdbserver-nb-2\" (UID: \"f5e652c9-762f-4067-9590-b345432d20d2\") " pod="openstack/ovsdbserver-nb-2" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.520925 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3010e3cf-9c6a-4ace-aae1-e12d804aa800-config\") pod \"ovsdbserver-nb-1\" (UID: \"3010e3cf-9c6a-4ace-aae1-e12d804aa800\") " pod="openstack/ovsdbserver-nb-1" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.521503 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3010e3cf-9c6a-4ace-aae1-e12d804aa800-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"3010e3cf-9c6a-4ace-aae1-e12d804aa800\") " pod="openstack/ovsdbserver-nb-1" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.521517 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5e652c9-762f-4067-9590-b345432d20d2-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"f5e652c9-762f-4067-9590-b345432d20d2\") " pod="openstack/ovsdbserver-nb-2" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.522542 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/990c1a60-2c79-469a-9c85-4850c0865450-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"990c1a60-2c79-469a-9c85-4850c0865450\") " 
pod="openstack/ovsdbserver-nb-0" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.523372 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/990c1a60-2c79-469a-9c85-4850c0865450-config\") pod \"ovsdbserver-nb-0\" (UID: \"990c1a60-2c79-469a-9c85-4850c0865450\") " pod="openstack/ovsdbserver-nb-0" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.523528 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/990c1a60-2c79-469a-9c85-4850c0865450-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"990c1a60-2c79-469a-9c85-4850c0865450\") " pod="openstack/ovsdbserver-nb-0" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.524618 4789 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.524650 4789 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6d2bab01-a5a2-4a89-b999-dcd363a40b2e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6d2bab01-a5a2-4a89-b999-dcd363a40b2e\") pod \"ovsdbserver-nb-2\" (UID: \"f5e652c9-762f-4067-9590-b345432d20d2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0277c7e789c89aa8ac6e0d53bf680574ac7cb78c9714c87028c63e3b09ab9bed/globalmount\"" pod="openstack/ovsdbserver-nb-2" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.524697 4789 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.524720 4789 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-28aca9c6-aad4-4d1c-a073-e08b43091842\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28aca9c6-aad4-4d1c-a073-e08b43091842\") pod \"ovsdbserver-nb-1\" (UID: \"3010e3cf-9c6a-4ace-aae1-e12d804aa800\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b99d6c4ec7156f6ac6f4ef30c2a00fdd7a8b8a355ff0753fee440754c4e4a98b/globalmount\"" pod="openstack/ovsdbserver-nb-1" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.525279 4789 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.525314 4789 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3275483c-644a-46c4-80ab-f3aae6d0fd95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3275483c-644a-46c4-80ab-f3aae6d0fd95\") pod \"ovsdbserver-nb-0\" (UID: \"990c1a60-2c79-469a-9c85-4850c0865450\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ba46d932ab98851d3df8f74cf5a0ddeeb28a60a76df2208faf624ac0a34c0836/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.527178 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.532668 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3010e3cf-9c6a-4ace-aae1-e12d804aa800-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"3010e3cf-9c6a-4ace-aae1-e12d804aa800\") " pod="openstack/ovsdbserver-nb-1" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.532851 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/990c1a60-2c79-469a-9c85-4850c0865450-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"990c1a60-2c79-469a-9c85-4850c0865450\") " pod="openstack/ovsdbserver-nb-0" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.533678 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.535017 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5e652c9-762f-4067-9590-b345432d20d2-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"f5e652c9-762f-4067-9590-b345432d20d2\") " pod="openstack/ovsdbserver-nb-2" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.538417 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-799b5\" (UniqueName: \"kubernetes.io/projected/3010e3cf-9c6a-4ace-aae1-e12d804aa800-kube-api-access-799b5\") pod \"ovsdbserver-nb-1\" (UID: \"3010e3cf-9c6a-4ace-aae1-e12d804aa800\") " pod="openstack/ovsdbserver-nb-1" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.538486 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.539835 4789 util.go:30] "No sandbox for pod can be found. 
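The attacher.MountDevice lines above fire because, before staging a CSI volume, the kubelet asks the node plugin which optional RPCs it implements; when NodeGetCapabilities does not advertise STAGE_UNSTAGE_VOLUME, the NodeStageVolume step (what the kubelet calls MountDevice) is skipped and it proceeds straight to NodePublishVolume, as it does here for every kubevirt.io.hostpath-provisioner volume. A sketch of how a node plugin reports that, using the container-storage-interface Go bindings; the handler is illustrative, not the hostpath provisioner's actual code:

// Package driver: a CSI node server that does not advertise staging.
package driver

import (
	"context"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
)

type nodeServer struct {
	// Embedding the generated stub (present in recent spec bindings)
	// keeps unimplemented RPCs returning codes.Unimplemented.
	csi.UnimplementedNodeServer
}

// NodeGetCapabilities is how the kubelet learns which optional node RPCs
// exist. Returning no STAGE_UNSTAGE_VOLUME capability yields exactly the
// "capability not set. Skipping MountDevice..." lines in this log.
func (ns *nodeServer) NodeGetCapabilities(ctx context.Context, req *csi.NodeGetCapabilitiesRequest) (*csi.NodeGetCapabilitiesResponse, error) {
	return &csi.NodeGetCapabilitiesResponse{
		Capabilities: []*csi.NodeServiceCapability{
			// A driver that wants a per-node staging step would instead list:
			// {Type: &csi.NodeServiceCapability_Rpc{Rpc: &csi.NodeServiceCapability_RPC{
			//     Type: csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME,
			// }}},
		},
	}, nil
}

Note that the kubelet still records a "MountVolume.MountDevice succeeded" line with the computed globalmount path even when staging is skipped; the path is reserved, just never staged into by the driver.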
Need to start a new one" pod="openstack/ovsdbserver-sb-2" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.543955 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khlmc\" (UniqueName: \"kubernetes.io/projected/f5e652c9-762f-4067-9590-b345432d20d2-kube-api-access-khlmc\") pod \"ovsdbserver-nb-2\" (UID: \"f5e652c9-762f-4067-9590-b345432d20d2\") " pod="openstack/ovsdbserver-nb-2" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.552723 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts6tv\" (UniqueName: \"kubernetes.io/projected/990c1a60-2c79-469a-9c85-4850c0865450-kube-api-access-ts6tv\") pod \"ovsdbserver-nb-0\" (UID: \"990c1a60-2c79-469a-9c85-4850c0865450\") " pod="openstack/ovsdbserver-nb-0" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.552872 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.572568 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.575209 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3275483c-644a-46c4-80ab-f3aae6d0fd95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3275483c-644a-46c4-80ab-f3aae6d0fd95\") pod \"ovsdbserver-nb-0\" (UID: \"990c1a60-2c79-469a-9c85-4850c0865450\") " pod="openstack/ovsdbserver-nb-0" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.578349 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-28aca9c6-aad4-4d1c-a073-e08b43091842\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28aca9c6-aad4-4d1c-a073-e08b43091842\") pod \"ovsdbserver-nb-1\" (UID: \"3010e3cf-9c6a-4ace-aae1-e12d804aa800\") " pod="openstack/ovsdbserver-nb-1" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.580029 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6d2bab01-a5a2-4a89-b999-dcd363a40b2e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6d2bab01-a5a2-4a89-b999-dcd363a40b2e\") pod \"ovsdbserver-nb-2\" (UID: \"f5e652c9-762f-4067-9590-b345432d20d2\") " pod="openstack/ovsdbserver-nb-2" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.615279 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.619513 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs7qj\" (UniqueName: \"kubernetes.io/projected/d0419330-97f4-4531-beab-99be8c5f9599-kube-api-access-vs7qj\") pod \"ovsdbserver-sb-1\" (UID: \"d0419330-97f4-4531-beab-99be8c5f9599\") " pod="openstack/ovsdbserver-sb-1" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.619546 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d0419330-97f4-4531-beab-99be8c5f9599-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"d0419330-97f4-4531-beab-99be8c5f9599\") " pod="openstack/ovsdbserver-sb-1" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.619569 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/232b785e-a4d7-4069-9575-e85094f4a10a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"232b785e-a4d7-4069-9575-e85094f4a10a\") " pod="openstack/ovsdbserver-sb-0" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.619604 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d0419330-97f4-4531-beab-99be8c5f9599-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"d0419330-97f4-4531-beab-99be8c5f9599\") " pod="openstack/ovsdbserver-sb-1" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.619624 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/232b785e-a4d7-4069-9575-e85094f4a10a-config\") pod \"ovsdbserver-sb-0\" (UID: \"232b785e-a4d7-4069-9575-e85094f4a10a\") " pod="openstack/ovsdbserver-sb-0" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.619693 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f393a68-69bb-4a09-ba42-4577f15b4ef4-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"8f393a68-69bb-4a09-ba42-4577f15b4ef4\") " pod="openstack/ovsdbserver-sb-2" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.619729 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8f393a68-69bb-4a09-ba42-4577f15b4ef4-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"8f393a68-69bb-4a09-ba42-4577f15b4ef4\") " pod="openstack/ovsdbserver-sb-2" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.619812 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2a51fa9a-ff57-4f5c-b22f-5b1ebe4f4415\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a51fa9a-ff57-4f5c-b22f-5b1ebe4f4415\") pod \"ovsdbserver-sb-1\" (UID: \"d0419330-97f4-4531-beab-99be8c5f9599\") " pod="openstack/ovsdbserver-sb-1" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.619844 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7bkz\" (UniqueName: \"kubernetes.io/projected/232b785e-a4d7-4069-9575-e85094f4a10a-kube-api-access-s7bkz\") pod \"ovsdbserver-sb-0\" (UID: \"232b785e-a4d7-4069-9575-e85094f4a10a\") " pod="openstack/ovsdbserver-sb-0" Feb 02 22:45:49 
crc kubenswrapper[4789]: I0202 22:45:49.619894 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-35558807-cb27-4f82-ad0e-83365ec55ca1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35558807-cb27-4f82-ad0e-83365ec55ca1\") pod \"ovsdbserver-sb-0\" (UID: \"232b785e-a4d7-4069-9575-e85094f4a10a\") " pod="openstack/ovsdbserver-sb-0" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.619928 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8f393a68-69bb-4a09-ba42-4577f15b4ef4-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"8f393a68-69bb-4a09-ba42-4577f15b4ef4\") " pod="openstack/ovsdbserver-sb-2" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.619950 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0419330-97f4-4531-beab-99be8c5f9599-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"d0419330-97f4-4531-beab-99be8c5f9599\") " pod="openstack/ovsdbserver-sb-1" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.619978 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/232b785e-a4d7-4069-9575-e85094f4a10a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"232b785e-a4d7-4069-9575-e85094f4a10a\") " pod="openstack/ovsdbserver-sb-0" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.620001 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0419330-97f4-4531-beab-99be8c5f9599-config\") pod \"ovsdbserver-sb-1\" (UID: \"d0419330-97f4-4531-beab-99be8c5f9599\") " pod="openstack/ovsdbserver-sb-1" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.620034 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1a3e9369-e724-4ecd-8118-021a47043734\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1a3e9369-e724-4ecd-8118-021a47043734\") pod \"ovsdbserver-sb-2\" (UID: \"8f393a68-69bb-4a09-ba42-4577f15b4ef4\") " pod="openstack/ovsdbserver-sb-2" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.620079 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f393a68-69bb-4a09-ba42-4577f15b4ef4-config\") pod \"ovsdbserver-sb-2\" (UID: \"8f393a68-69bb-4a09-ba42-4577f15b4ef4\") " pod="openstack/ovsdbserver-sb-2" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.620337 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/232b785e-a4d7-4069-9575-e85094f4a10a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"232b785e-a4d7-4069-9575-e85094f4a10a\") " pod="openstack/ovsdbserver-sb-0" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.620357 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc6f2\" (UniqueName: \"kubernetes.io/projected/8f393a68-69bb-4a09-ba42-4577f15b4ef4-kube-api-access-jc6f2\") pod \"ovsdbserver-sb-2\" (UID: \"8f393a68-69bb-4a09-ba42-4577f15b4ef4\") " pod="openstack/ovsdbserver-sb-2" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 
22:45:49.637476 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.647693 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.722333 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f393a68-69bb-4a09-ba42-4577f15b4ef4-config\") pod \"ovsdbserver-sb-2\" (UID: \"8f393a68-69bb-4a09-ba42-4577f15b4ef4\") " pod="openstack/ovsdbserver-sb-2" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.722380 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/232b785e-a4d7-4069-9575-e85094f4a10a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"232b785e-a4d7-4069-9575-e85094f4a10a\") " pod="openstack/ovsdbserver-sb-0" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.722412 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc6f2\" (UniqueName: \"kubernetes.io/projected/8f393a68-69bb-4a09-ba42-4577f15b4ef4-kube-api-access-jc6f2\") pod \"ovsdbserver-sb-2\" (UID: \"8f393a68-69bb-4a09-ba42-4577f15b4ef4\") " pod="openstack/ovsdbserver-sb-2" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.722443 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs7qj\" (UniqueName: \"kubernetes.io/projected/d0419330-97f4-4531-beab-99be8c5f9599-kube-api-access-vs7qj\") pod \"ovsdbserver-sb-1\" (UID: \"d0419330-97f4-4531-beab-99be8c5f9599\") " pod="openstack/ovsdbserver-sb-1" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.722467 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d0419330-97f4-4531-beab-99be8c5f9599-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"d0419330-97f4-4531-beab-99be8c5f9599\") " pod="openstack/ovsdbserver-sb-1" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.722497 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/232b785e-a4d7-4069-9575-e85094f4a10a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"232b785e-a4d7-4069-9575-e85094f4a10a\") " pod="openstack/ovsdbserver-sb-0" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.722518 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d0419330-97f4-4531-beab-99be8c5f9599-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"d0419330-97f4-4531-beab-99be8c5f9599\") " pod="openstack/ovsdbserver-sb-1" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.722541 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/232b785e-a4d7-4069-9575-e85094f4a10a-config\") pod \"ovsdbserver-sb-0\" (UID: \"232b785e-a4d7-4069-9575-e85094f4a10a\") " pod="openstack/ovsdbserver-sb-0" Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.722567 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f393a68-69bb-4a09-ba42-4577f15b4ef4-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"8f393a68-69bb-4a09-ba42-4577f15b4ef4\") " pod="openstack/ovsdbserver-sb-2" Feb 02 
Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.722668 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8f393a68-69bb-4a09-ba42-4577f15b4ef4-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"8f393a68-69bb-4a09-ba42-4577f15b4ef4\") " pod="openstack/ovsdbserver-sb-2"
Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.722726 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2a51fa9a-ff57-4f5c-b22f-5b1ebe4f4415\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a51fa9a-ff57-4f5c-b22f-5b1ebe4f4415\") pod \"ovsdbserver-sb-1\" (UID: \"d0419330-97f4-4531-beab-99be8c5f9599\") " pod="openstack/ovsdbserver-sb-1"
Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.722751 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7bkz\" (UniqueName: \"kubernetes.io/projected/232b785e-a4d7-4069-9575-e85094f4a10a-kube-api-access-s7bkz\") pod \"ovsdbserver-sb-0\" (UID: \"232b785e-a4d7-4069-9575-e85094f4a10a\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.722815 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-35558807-cb27-4f82-ad0e-83365ec55ca1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35558807-cb27-4f82-ad0e-83365ec55ca1\") pod \"ovsdbserver-sb-0\" (UID: \"232b785e-a4d7-4069-9575-e85094f4a10a\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.722851 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8f393a68-69bb-4a09-ba42-4577f15b4ef4-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"8f393a68-69bb-4a09-ba42-4577f15b4ef4\") " pod="openstack/ovsdbserver-sb-2"
Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.722873 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0419330-97f4-4531-beab-99be8c5f9599-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"d0419330-97f4-4531-beab-99be8c5f9599\") " pod="openstack/ovsdbserver-sb-1"
Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.722901 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/232b785e-a4d7-4069-9575-e85094f4a10a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"232b785e-a4d7-4069-9575-e85094f4a10a\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.723689 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0419330-97f4-4531-beab-99be8c5f9599-config\") pod \"ovsdbserver-sb-1\" (UID: \"d0419330-97f4-4531-beab-99be8c5f9599\") " pod="openstack/ovsdbserver-sb-1"
Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.723754 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1a3e9369-e724-4ecd-8118-021a47043734\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1a3e9369-e724-4ecd-8118-021a47043734\") pod \"ovsdbserver-sb-2\" (UID: \"8f393a68-69bb-4a09-ba42-4577f15b4ef4\") " pod="openstack/ovsdbserver-sb-2"
Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.725666 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f393a68-69bb-4a09-ba42-4577f15b4ef4-config\") pod \"ovsdbserver-sb-2\" (UID: \"8f393a68-69bb-4a09-ba42-4577f15b4ef4\") " pod="openstack/ovsdbserver-sb-2"
Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.728932 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d0419330-97f4-4531-beab-99be8c5f9599-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"d0419330-97f4-4531-beab-99be8c5f9599\") " pod="openstack/ovsdbserver-sb-1"
Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.729549 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8f393a68-69bb-4a09-ba42-4577f15b4ef4-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"8f393a68-69bb-4a09-ba42-4577f15b4ef4\") " pod="openstack/ovsdbserver-sb-2"
Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.729991 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8f393a68-69bb-4a09-ba42-4577f15b4ef4-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"8f393a68-69bb-4a09-ba42-4577f15b4ef4\") " pod="openstack/ovsdbserver-sb-2"
Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.730453 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d0419330-97f4-4531-beab-99be8c5f9599-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"d0419330-97f4-4531-beab-99be8c5f9599\") " pod="openstack/ovsdbserver-sb-1"
Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.731017 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/232b785e-a4d7-4069-9575-e85094f4a10a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"232b785e-a4d7-4069-9575-e85094f4a10a\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.732378 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/232b785e-a4d7-4069-9575-e85094f4a10a-config\") pod \"ovsdbserver-sb-0\" (UID: \"232b785e-a4d7-4069-9575-e85094f4a10a\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.732960 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f393a68-69bb-4a09-ba42-4577f15b4ef4-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"8f393a68-69bb-4a09-ba42-4577f15b4ef4\") " pod="openstack/ovsdbserver-sb-2"
Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.733682 4789 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.733729 4789 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2a51fa9a-ff57-4f5c-b22f-5b1ebe4f4415\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a51fa9a-ff57-4f5c-b22f-5b1ebe4f4415\") pod \"ovsdbserver-sb-1\" (UID: \"d0419330-97f4-4531-beab-99be8c5f9599\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/47990286ebd7e50d131b45bf5c4083985442d044fc49b66864b60ded8b005eb5/globalmount\"" pod="openstack/ovsdbserver-sb-1"
Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.733751 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0419330-97f4-4531-beab-99be8c5f9599-config\") pod \"ovsdbserver-sb-1\" (UID: \"d0419330-97f4-4531-beab-99be8c5f9599\") " pod="openstack/ovsdbserver-sb-1"
Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.734254 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/232b785e-a4d7-4069-9575-e85094f4a10a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"232b785e-a4d7-4069-9575-e85094f4a10a\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.734878 4789 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.734917 4789 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1a3e9369-e724-4ecd-8118-021a47043734\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1a3e9369-e724-4ecd-8118-021a47043734\") pod \"ovsdbserver-sb-2\" (UID: \"8f393a68-69bb-4a09-ba42-4577f15b4ef4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c64e14737c0f7d08690748a6258d2aec559c983d083c5bf18b87e79d407ad6db/globalmount\"" pod="openstack/ovsdbserver-sb-2"
Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.735612 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0419330-97f4-4531-beab-99be8c5f9599-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"d0419330-97f4-4531-beab-99be8c5f9599\") " pod="openstack/ovsdbserver-sb-1"
Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.737127 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/232b785e-a4d7-4069-9575-e85094f4a10a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"232b785e-a4d7-4069-9575-e85094f4a10a\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.737928 4789 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.738035 4789 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-35558807-cb27-4f82-ad0e-83365ec55ca1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35558807-cb27-4f82-ad0e-83365ec55ca1\") pod \"ovsdbserver-sb-0\" (UID: \"232b785e-a4d7-4069-9575-e85094f4a10a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ddf77e534abe36198e9d2580afaed2cc3014e1ecd182a1d238dfc884572c5444/globalmount\"" pod="openstack/ovsdbserver-sb-0"
Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.761902 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs7qj\" (UniqueName: \"kubernetes.io/projected/d0419330-97f4-4531-beab-99be8c5f9599-kube-api-access-vs7qj\") pod \"ovsdbserver-sb-1\" (UID: \"d0419330-97f4-4531-beab-99be8c5f9599\") " pod="openstack/ovsdbserver-sb-1"
Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.762956 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc6f2\" (UniqueName: \"kubernetes.io/projected/8f393a68-69bb-4a09-ba42-4577f15b4ef4-kube-api-access-jc6f2\") pod \"ovsdbserver-sb-2\" (UID: \"8f393a68-69bb-4a09-ba42-4577f15b4ef4\") " pod="openstack/ovsdbserver-sb-2"
Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.771836 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7bkz\" (UniqueName: \"kubernetes.io/projected/232b785e-a4d7-4069-9575-e85094f4a10a-kube-api-access-s7bkz\") pod \"ovsdbserver-sb-0\" (UID: \"232b785e-a4d7-4069-9575-e85094f4a10a\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.811935 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2a51fa9a-ff57-4f5c-b22f-5b1ebe4f4415\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a51fa9a-ff57-4f5c-b22f-5b1ebe4f4415\") pod \"ovsdbserver-sb-1\" (UID: \"d0419330-97f4-4531-beab-99be8c5f9599\") " pod="openstack/ovsdbserver-sb-1"
Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.822557 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-35558807-cb27-4f82-ad0e-83365ec55ca1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35558807-cb27-4f82-ad0e-83365ec55ca1\") pod \"ovsdbserver-sb-0\" (UID: \"232b785e-a4d7-4069-9575-e85094f4a10a\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.837636 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1a3e9369-e724-4ecd-8118-021a47043734\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1a3e9369-e724-4ecd-8118-021a47043734\") pod \"ovsdbserver-sb-2\" (UID: \"8f393a68-69bb-4a09-ba42-4577f15b4ef4\") " pod="openstack/ovsdbserver-sb-2"
Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.919171 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1"
Feb 02 22:45:49 crc kubenswrapper[4789]: I0202 22:45:49.929431 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2"
Feb 02 22:45:50 crc kubenswrapper[4789]: I0202 22:45:50.111450 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Feb 02 22:45:50 crc kubenswrapper[4789]: I0202 22:45:50.191817 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 02 22:45:50 crc kubenswrapper[4789]: I0202 22:45:50.295630 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"]
Feb 02 22:45:50 crc kubenswrapper[4789]: I0202 22:45:50.439137 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"]
Feb 02 22:45:50 crc kubenswrapper[4789]: W0202 22:45:50.447568 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0419330_97f4_4531_beab_99be8c5f9599.slice/crio-8cfea76092ce1c910f4e35c73fc3b18b68ead16590368fd3e28399be9503bd68 WatchSource:0}: Error finding container 8cfea76092ce1c910f4e35c73fc3b18b68ead16590368fd3e28399be9503bd68: Status 404 returned error can't find the container with id 8cfea76092ce1c910f4e35c73fc3b18b68ead16590368fd3e28399be9503bd68
Feb 02 22:45:50 crc kubenswrapper[4789]: I0202 22:45:50.489334 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"f5e652c9-762f-4067-9590-b345432d20d2","Type":"ContainerStarted","Data":"c078f36daf0c6656176957417d2b1babd067452c35ebb41d0f43c4afcf41070f"}
Feb 02 22:45:50 crc kubenswrapper[4789]: I0202 22:45:50.489698 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"f5e652c9-762f-4067-9590-b345432d20d2","Type":"ContainerStarted","Data":"1973cf3ffc088d2deab2ca5cad2a2dec160da47879233493defff4f53eafc108"}
Feb 02 22:45:50 crc kubenswrapper[4789]: I0202 22:45:50.491120 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"990c1a60-2c79-469a-9c85-4850c0865450","Type":"ContainerStarted","Data":"1ba9f81c454a8c48731a81826493290a3373c70fd664107d1cc5a243e5745c83"}
Feb 02 22:45:50 crc kubenswrapper[4789]: I0202 22:45:50.491163 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"990c1a60-2c79-469a-9c85-4850c0865450","Type":"ContainerStarted","Data":"914b5b9ccb83fd6d92a4a95886e59afcc8391ddae95daca3f3101b65cec1bb65"}
Feb 02 22:45:50 crc kubenswrapper[4789]: I0202 22:45:50.492570 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"d0419330-97f4-4531-beab-99be8c5f9599","Type":"ContainerStarted","Data":"8cfea76092ce1c910f4e35c73fc3b18b68ead16590368fd3e28399be9503bd68"}
Feb 02 22:45:50 crc kubenswrapper[4789]: I0202 22:45:50.538189 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"]
Feb 02 22:45:50 crc kubenswrapper[4789]: W0202 22:45:50.549175 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f393a68_69bb_4a09_ba42_4577f15b4ef4.slice/crio-f2a343dad34e533f16ae78aed292586e97b95d99a2829a09742f4090174d5cc9 WatchSource:0}: Error finding container f2a343dad34e533f16ae78aed292586e97b95d99a2829a09742f4090174d5cc9: Status 404 returned error can't find the container with id f2a343dad34e533f16ae78aed292586e97b95d99a2829a09742f4090174d5cc9
Feb 02 22:45:50 crc kubenswrapper[4789]: I0202 22:45:50.653417 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 02 22:45:51 crc kubenswrapper[4789]: I0202 22:45:51.308218 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"]
Feb 02 22:45:51 crc kubenswrapper[4789]: I0202 22:45:51.506266 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"f5e652c9-762f-4067-9590-b345432d20d2","Type":"ContainerStarted","Data":"465b562f12427bf2fcf3292c6666b79ced3b21c5e980299c198566406c241056"}
Feb 02 22:45:51 crc kubenswrapper[4789]: I0202 22:45:51.509301 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"232b785e-a4d7-4069-9575-e85094f4a10a","Type":"ContainerStarted","Data":"d3d9fdf5c8e41550034c01f4f1385708b047b3c3a883522511c263c71f34f705"}
Feb 02 22:45:51 crc kubenswrapper[4789]: I0202 22:45:51.509351 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"232b785e-a4d7-4069-9575-e85094f4a10a","Type":"ContainerStarted","Data":"27f28440b3653c96de88aec714e2bd67bcecb7411087c046b80469b32dcfbbf9"}
Feb 02 22:45:51 crc kubenswrapper[4789]: I0202 22:45:51.509361 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"232b785e-a4d7-4069-9575-e85094f4a10a","Type":"ContainerStarted","Data":"e9701e806131f6e37ab43e0dc444550ffac562d8953ac3487712ef5068dc6dce"}
Feb 02 22:45:51 crc kubenswrapper[4789]: I0202 22:45:51.512366 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"990c1a60-2c79-469a-9c85-4850c0865450","Type":"ContainerStarted","Data":"f0a730895100a1329ddf52e5441fb32fb6ff3a470460594ec4281e059e11a648"}
Feb 02 22:45:51 crc kubenswrapper[4789]: I0202 22:45:51.513967 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"d0419330-97f4-4531-beab-99be8c5f9599","Type":"ContainerStarted","Data":"f94347335ea7a1154615234f5aa380ed604dc3553040921b8deaed7ea9cdeb29"}
Feb 02 22:45:51 crc kubenswrapper[4789]: I0202 22:45:51.513992 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"d0419330-97f4-4531-beab-99be8c5f9599","Type":"ContainerStarted","Data":"8d5e5dc46052ad6e62a594d7598e91c85e06c04ecf59508509fb59071795c93b"}
Feb 02 22:45:51 crc kubenswrapper[4789]: I0202 22:45:51.516458 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"8f393a68-69bb-4a09-ba42-4577f15b4ef4","Type":"ContainerStarted","Data":"2727851c3e522956cbf0bbb06f34675ec9426cdaa759bc80d8cf5b524a905cd7"}
Feb 02 22:45:51 crc kubenswrapper[4789]: I0202 22:45:51.516486 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"8f393a68-69bb-4a09-ba42-4577f15b4ef4","Type":"ContainerStarted","Data":"be79d25783ae57fbf7fba3536d551b5d0874fbedf2bd937a3a985c044a990fda"}
Feb 02 22:45:51 crc kubenswrapper[4789]: I0202 22:45:51.516495 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"8f393a68-69bb-4a09-ba42-4577f15b4ef4","Type":"ContainerStarted","Data":"f2a343dad34e533f16ae78aed292586e97b95d99a2829a09742f4090174d5cc9"}
Feb 02 22:45:51 crc kubenswrapper[4789]: I0202 22:45:51.524200 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"3010e3cf-9c6a-4ace-aae1-e12d804aa800","Type":"ContainerStarted","Data":"fb0d956a089fba2cd41ab902d4da0de1bcfda0c792653f88d37a06a5469edc39"}
Feb 02 22:45:51 crc kubenswrapper[4789]: I0202 22:45:51.524242 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"3010e3cf-9c6a-4ace-aae1-e12d804aa800","Type":"ContainerStarted","Data":"51719841eec3f4d5a2b79695a68ef4d81bcc10b22b8ca54f20e92adb5b520453"}
event={"ID":"3010e3cf-9c6a-4ace-aae1-e12d804aa800","Type":"ContainerStarted","Data":"51719841eec3f4d5a2b79695a68ef4d81bcc10b22b8ca54f20e92adb5b520453"} Feb 02 22:45:51 crc kubenswrapper[4789]: I0202 22:45:51.537477 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=3.537458628 podStartE2EDuration="3.537458628s" podCreationTimestamp="2026-02-02 22:45:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 22:45:51.529999798 +0000 UTC m=+5171.825024817" watchObservedRunningTime="2026-02-02 22:45:51.537458628 +0000 UTC m=+5171.832483647" Feb 02 22:45:51 crc kubenswrapper[4789]: I0202 22:45:51.569325 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=3.569301277 podStartE2EDuration="3.569301277s" podCreationTimestamp="2026-02-02 22:45:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 22:45:51.560874639 +0000 UTC m=+5171.855899658" watchObservedRunningTime="2026-02-02 22:45:51.569301277 +0000 UTC m=+5171.864326336" Feb 02 22:45:51 crc kubenswrapper[4789]: I0202 22:45:51.628546 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=3.628520688 podStartE2EDuration="3.628520688s" podCreationTimestamp="2026-02-02 22:45:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 22:45:51.5917592 +0000 UTC m=+5171.886784219" watchObservedRunningTime="2026-02-02 22:45:51.628520688 +0000 UTC m=+5171.923545727" Feb 02 22:45:51 crc kubenswrapper[4789]: I0202 22:45:51.640448 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=3.640428144 podStartE2EDuration="3.640428144s" podCreationTimestamp="2026-02-02 22:45:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 22:45:51.613917346 +0000 UTC m=+5171.908942375" watchObservedRunningTime="2026-02-02 22:45:51.640428144 +0000 UTC m=+5171.935453173" Feb 02 22:45:51 crc kubenswrapper[4789]: I0202 22:45:51.642120 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.642108921 podStartE2EDuration="3.642108921s" podCreationTimestamp="2026-02-02 22:45:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 22:45:51.638199821 +0000 UTC m=+5171.933224840" watchObservedRunningTime="2026-02-02 22:45:51.642108921 +0000 UTC m=+5171.937133950" Feb 02 22:45:52 crc kubenswrapper[4789]: I0202 22:45:52.539873 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"3010e3cf-9c6a-4ace-aae1-e12d804aa800","Type":"ContainerStarted","Data":"f82b04b58f5ddaea21b2bf2c52129e4ecd86ae29eeae695e4e7d4a79eaa18372"} Feb 02 22:45:52 crc kubenswrapper[4789]: I0202 22:45:52.597826 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=4.59778797 podStartE2EDuration="4.59778797s" podCreationTimestamp="2026-02-02 22:45:48 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 22:45:52.583995281 +0000 UTC m=+5172.879020320" watchObservedRunningTime="2026-02-02 22:45:52.59778797 +0000 UTC m=+5172.892813039" Feb 02 22:45:52 crc kubenswrapper[4789]: I0202 22:45:52.616106 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 02 22:45:52 crc kubenswrapper[4789]: I0202 22:45:52.638668 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Feb 02 22:45:52 crc kubenswrapper[4789]: I0202 22:45:52.648364 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Feb 02 22:45:52 crc kubenswrapper[4789]: I0202 22:45:52.919874 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Feb 02 22:45:52 crc kubenswrapper[4789]: I0202 22:45:52.930536 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Feb 02 22:45:53 crc kubenswrapper[4789]: I0202 22:45:53.112513 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 02 22:45:53 crc kubenswrapper[4789]: I0202 22:45:53.178346 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 02 22:45:53 crc kubenswrapper[4789]: I0202 22:45:53.550915 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 02 22:45:54 crc kubenswrapper[4789]: I0202 22:45:54.616502 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 02 22:45:54 crc kubenswrapper[4789]: I0202 22:45:54.637713 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Feb 02 22:45:54 crc kubenswrapper[4789]: I0202 22:45:54.648342 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Feb 02 22:45:54 crc kubenswrapper[4789]: I0202 22:45:54.920465 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Feb 02 22:45:54 crc kubenswrapper[4789]: I0202 22:45:54.930727 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Feb 02 22:45:55 crc kubenswrapper[4789]: I0202 22:45:55.178998 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 02 22:45:55 crc kubenswrapper[4789]: I0202 22:45:55.548132 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cd49575f7-p97s5"] Feb 02 22:45:55 crc kubenswrapper[4789]: I0202 22:45:55.549293 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cd49575f7-p97s5" Feb 02 22:45:55 crc kubenswrapper[4789]: I0202 22:45:55.555811 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 02 22:45:55 crc kubenswrapper[4789]: I0202 22:45:55.589491 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cd49575f7-p97s5"] Feb 02 22:45:55 crc kubenswrapper[4789]: I0202 22:45:55.653875 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 02 22:45:55 crc kubenswrapper[4789]: I0202 22:45:55.681492 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Feb 02 22:45:55 crc kubenswrapper[4789]: I0202 22:45:55.690148 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Feb 02 22:45:55 crc kubenswrapper[4789]: I0202 22:45:55.690796 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4998aa3c-a550-4732-a545-25b1a93d0fa6-ovsdbserver-sb\") pod \"dnsmasq-dns-7cd49575f7-p97s5\" (UID: \"4998aa3c-a550-4732-a545-25b1a93d0fa6\") " pod="openstack/dnsmasq-dns-7cd49575f7-p97s5" Feb 02 22:45:55 crc kubenswrapper[4789]: I0202 22:45:55.690950 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjh2l\" (UniqueName: \"kubernetes.io/projected/4998aa3c-a550-4732-a545-25b1a93d0fa6-kube-api-access-fjh2l\") pod \"dnsmasq-dns-7cd49575f7-p97s5\" (UID: \"4998aa3c-a550-4732-a545-25b1a93d0fa6\") " pod="openstack/dnsmasq-dns-7cd49575f7-p97s5" Feb 02 22:45:55 crc kubenswrapper[4789]: I0202 22:45:55.691016 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4998aa3c-a550-4732-a545-25b1a93d0fa6-config\") pod \"dnsmasq-dns-7cd49575f7-p97s5\" (UID: \"4998aa3c-a550-4732-a545-25b1a93d0fa6\") " pod="openstack/dnsmasq-dns-7cd49575f7-p97s5" Feb 02 22:45:55 crc kubenswrapper[4789]: I0202 22:45:55.691071 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4998aa3c-a550-4732-a545-25b1a93d0fa6-dns-svc\") pod \"dnsmasq-dns-7cd49575f7-p97s5\" (UID: \"4998aa3c-a550-4732-a545-25b1a93d0fa6\") " pod="openstack/dnsmasq-dns-7cd49575f7-p97s5" Feb 02 22:45:55 crc kubenswrapper[4789]: I0202 22:45:55.700821 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 02 22:45:55 crc kubenswrapper[4789]: I0202 22:45:55.720004 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Feb 02 22:45:55 crc kubenswrapper[4789]: I0202 22:45:55.793220 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4998aa3c-a550-4732-a545-25b1a93d0fa6-ovsdbserver-sb\") pod \"dnsmasq-dns-7cd49575f7-p97s5\" (UID: \"4998aa3c-a550-4732-a545-25b1a93d0fa6\") " pod="openstack/dnsmasq-dns-7cd49575f7-p97s5" Feb 02 22:45:55 crc kubenswrapper[4789]: I0202 22:45:55.793352 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjh2l\" (UniqueName: \"kubernetes.io/projected/4998aa3c-a550-4732-a545-25b1a93d0fa6-kube-api-access-fjh2l\") pod 
\"dnsmasq-dns-7cd49575f7-p97s5\" (UID: \"4998aa3c-a550-4732-a545-25b1a93d0fa6\") " pod="openstack/dnsmasq-dns-7cd49575f7-p97s5" Feb 02 22:45:55 crc kubenswrapper[4789]: I0202 22:45:55.793413 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4998aa3c-a550-4732-a545-25b1a93d0fa6-config\") pod \"dnsmasq-dns-7cd49575f7-p97s5\" (UID: \"4998aa3c-a550-4732-a545-25b1a93d0fa6\") " pod="openstack/dnsmasq-dns-7cd49575f7-p97s5" Feb 02 22:45:55 crc kubenswrapper[4789]: I0202 22:45:55.793445 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4998aa3c-a550-4732-a545-25b1a93d0fa6-dns-svc\") pod \"dnsmasq-dns-7cd49575f7-p97s5\" (UID: \"4998aa3c-a550-4732-a545-25b1a93d0fa6\") " pod="openstack/dnsmasq-dns-7cd49575f7-p97s5" Feb 02 22:45:55 crc kubenswrapper[4789]: I0202 22:45:55.794310 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4998aa3c-a550-4732-a545-25b1a93d0fa6-config\") pod \"dnsmasq-dns-7cd49575f7-p97s5\" (UID: \"4998aa3c-a550-4732-a545-25b1a93d0fa6\") " pod="openstack/dnsmasq-dns-7cd49575f7-p97s5" Feb 02 22:45:55 crc kubenswrapper[4789]: I0202 22:45:55.794352 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4998aa3c-a550-4732-a545-25b1a93d0fa6-dns-svc\") pod \"dnsmasq-dns-7cd49575f7-p97s5\" (UID: \"4998aa3c-a550-4732-a545-25b1a93d0fa6\") " pod="openstack/dnsmasq-dns-7cd49575f7-p97s5" Feb 02 22:45:55 crc kubenswrapper[4789]: I0202 22:45:55.794785 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4998aa3c-a550-4732-a545-25b1a93d0fa6-ovsdbserver-sb\") pod \"dnsmasq-dns-7cd49575f7-p97s5\" (UID: \"4998aa3c-a550-4732-a545-25b1a93d0fa6\") " pod="openstack/dnsmasq-dns-7cd49575f7-p97s5" Feb 02 22:45:55 crc kubenswrapper[4789]: I0202 22:45:55.813396 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjh2l\" (UniqueName: \"kubernetes.io/projected/4998aa3c-a550-4732-a545-25b1a93d0fa6-kube-api-access-fjh2l\") pod \"dnsmasq-dns-7cd49575f7-p97s5\" (UID: \"4998aa3c-a550-4732-a545-25b1a93d0fa6\") " pod="openstack/dnsmasq-dns-7cd49575f7-p97s5" Feb 02 22:45:55 crc kubenswrapper[4789]: I0202 22:45:55.884742 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cd49575f7-p97s5" Feb 02 22:45:55 crc kubenswrapper[4789]: I0202 22:45:55.979337 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Feb 02 22:45:55 crc kubenswrapper[4789]: I0202 22:45:55.981074 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1" Feb 02 22:45:56 crc kubenswrapper[4789]: I0202 22:45:56.035512 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cd49575f7-p97s5"] Feb 02 22:45:56 crc kubenswrapper[4789]: I0202 22:45:56.064073 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Feb 02 22:45:56 crc kubenswrapper[4789]: I0202 22:45:56.064126 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Feb 02 22:45:56 crc kubenswrapper[4789]: I0202 22:45:56.066703 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-667d7cd957-rgfp8"] Feb 02 22:45:56 crc kubenswrapper[4789]: I0202 22:45:56.071632 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-667d7cd957-rgfp8" Feb 02 22:45:56 crc kubenswrapper[4789]: I0202 22:45:56.075493 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 02 22:45:56 crc kubenswrapper[4789]: I0202 22:45:56.079608 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-667d7cd957-rgfp8"] Feb 02 22:45:56 crc kubenswrapper[4789]: I0202 22:45:56.201888 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40f1a92c-8d4d-43ec-b1a6-ad811716cb87-config\") pod \"dnsmasq-dns-667d7cd957-rgfp8\" (UID: \"40f1a92c-8d4d-43ec-b1a6-ad811716cb87\") " pod="openstack/dnsmasq-dns-667d7cd957-rgfp8" Feb 02 22:45:56 crc kubenswrapper[4789]: I0202 22:45:56.201969 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40f1a92c-8d4d-43ec-b1a6-ad811716cb87-dns-svc\") pod \"dnsmasq-dns-667d7cd957-rgfp8\" (UID: \"40f1a92c-8d4d-43ec-b1a6-ad811716cb87\") " pod="openstack/dnsmasq-dns-667d7cd957-rgfp8" Feb 02 22:45:56 crc kubenswrapper[4789]: I0202 22:45:56.202110 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjr7n\" (UniqueName: \"kubernetes.io/projected/40f1a92c-8d4d-43ec-b1a6-ad811716cb87-kube-api-access-hjr7n\") pod \"dnsmasq-dns-667d7cd957-rgfp8\" (UID: \"40f1a92c-8d4d-43ec-b1a6-ad811716cb87\") " pod="openstack/dnsmasq-dns-667d7cd957-rgfp8" Feb 02 22:45:56 crc kubenswrapper[4789]: I0202 22:45:56.202157 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40f1a92c-8d4d-43ec-b1a6-ad811716cb87-ovsdbserver-sb\") pod \"dnsmasq-dns-667d7cd957-rgfp8\" (UID: \"40f1a92c-8d4d-43ec-b1a6-ad811716cb87\") " pod="openstack/dnsmasq-dns-667d7cd957-rgfp8" Feb 02 22:45:56 crc kubenswrapper[4789]: I0202 22:45:56.202309 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40f1a92c-8d4d-43ec-b1a6-ad811716cb87-ovsdbserver-nb\") pod \"dnsmasq-dns-667d7cd957-rgfp8\" (UID: 
\"40f1a92c-8d4d-43ec-b1a6-ad811716cb87\") " pod="openstack/dnsmasq-dns-667d7cd957-rgfp8" Feb 02 22:45:56 crc kubenswrapper[4789]: I0202 22:45:56.303383 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40f1a92c-8d4d-43ec-b1a6-ad811716cb87-dns-svc\") pod \"dnsmasq-dns-667d7cd957-rgfp8\" (UID: \"40f1a92c-8d4d-43ec-b1a6-ad811716cb87\") " pod="openstack/dnsmasq-dns-667d7cd957-rgfp8" Feb 02 22:45:56 crc kubenswrapper[4789]: I0202 22:45:56.303452 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjr7n\" (UniqueName: \"kubernetes.io/projected/40f1a92c-8d4d-43ec-b1a6-ad811716cb87-kube-api-access-hjr7n\") pod \"dnsmasq-dns-667d7cd957-rgfp8\" (UID: \"40f1a92c-8d4d-43ec-b1a6-ad811716cb87\") " pod="openstack/dnsmasq-dns-667d7cd957-rgfp8" Feb 02 22:45:56 crc kubenswrapper[4789]: I0202 22:45:56.303483 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40f1a92c-8d4d-43ec-b1a6-ad811716cb87-ovsdbserver-sb\") pod \"dnsmasq-dns-667d7cd957-rgfp8\" (UID: \"40f1a92c-8d4d-43ec-b1a6-ad811716cb87\") " pod="openstack/dnsmasq-dns-667d7cd957-rgfp8" Feb 02 22:45:56 crc kubenswrapper[4789]: I0202 22:45:56.303534 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40f1a92c-8d4d-43ec-b1a6-ad811716cb87-ovsdbserver-nb\") pod \"dnsmasq-dns-667d7cd957-rgfp8\" (UID: \"40f1a92c-8d4d-43ec-b1a6-ad811716cb87\") " pod="openstack/dnsmasq-dns-667d7cd957-rgfp8" Feb 02 22:45:56 crc kubenswrapper[4789]: I0202 22:45:56.303615 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40f1a92c-8d4d-43ec-b1a6-ad811716cb87-config\") pod \"dnsmasq-dns-667d7cd957-rgfp8\" (UID: \"40f1a92c-8d4d-43ec-b1a6-ad811716cb87\") " pod="openstack/dnsmasq-dns-667d7cd957-rgfp8" Feb 02 22:45:56 crc kubenswrapper[4789]: I0202 22:45:56.304520 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40f1a92c-8d4d-43ec-b1a6-ad811716cb87-config\") pod \"dnsmasq-dns-667d7cd957-rgfp8\" (UID: \"40f1a92c-8d4d-43ec-b1a6-ad811716cb87\") " pod="openstack/dnsmasq-dns-667d7cd957-rgfp8" Feb 02 22:45:56 crc kubenswrapper[4789]: I0202 22:45:56.304616 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40f1a92c-8d4d-43ec-b1a6-ad811716cb87-dns-svc\") pod \"dnsmasq-dns-667d7cd957-rgfp8\" (UID: \"40f1a92c-8d4d-43ec-b1a6-ad811716cb87\") " pod="openstack/dnsmasq-dns-667d7cd957-rgfp8" Feb 02 22:45:56 crc kubenswrapper[4789]: I0202 22:45:56.305385 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40f1a92c-8d4d-43ec-b1a6-ad811716cb87-ovsdbserver-nb\") pod \"dnsmasq-dns-667d7cd957-rgfp8\" (UID: \"40f1a92c-8d4d-43ec-b1a6-ad811716cb87\") " pod="openstack/dnsmasq-dns-667d7cd957-rgfp8" Feb 02 22:45:56 crc kubenswrapper[4789]: I0202 22:45:56.305397 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40f1a92c-8d4d-43ec-b1a6-ad811716cb87-ovsdbserver-sb\") pod \"dnsmasq-dns-667d7cd957-rgfp8\" (UID: \"40f1a92c-8d4d-43ec-b1a6-ad811716cb87\") " pod="openstack/dnsmasq-dns-667d7cd957-rgfp8" Feb 02 22:45:56 crc kubenswrapper[4789]: 
I0202 22:45:56.326869 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjr7n\" (UniqueName: \"kubernetes.io/projected/40f1a92c-8d4d-43ec-b1a6-ad811716cb87-kube-api-access-hjr7n\") pod \"dnsmasq-dns-667d7cd957-rgfp8\" (UID: \"40f1a92c-8d4d-43ec-b1a6-ad811716cb87\") " pod="openstack/dnsmasq-dns-667d7cd957-rgfp8" Feb 02 22:45:56 crc kubenswrapper[4789]: I0202 22:45:56.408057 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-667d7cd957-rgfp8" Feb 02 22:45:56 crc kubenswrapper[4789]: I0202 22:45:56.465206 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cd49575f7-p97s5"] Feb 02 22:45:56 crc kubenswrapper[4789]: I0202 22:45:56.582929 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cd49575f7-p97s5" event={"ID":"4998aa3c-a550-4732-a545-25b1a93d0fa6","Type":"ContainerStarted","Data":"d59a628f25966dfa60828ebe8195413d00fbbcf97b849914f4aca39b945712f8"} Feb 02 22:45:56 crc kubenswrapper[4789]: I0202 22:45:56.641636 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Feb 02 22:45:56 crc kubenswrapper[4789]: I0202 22:45:56.843725 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-667d7cd957-rgfp8"] Feb 02 22:45:56 crc kubenswrapper[4789]: W0202 22:45:56.849561 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40f1a92c_8d4d_43ec_b1a6_ad811716cb87.slice/crio-580e311474fd6d470666a1ec061e2da3276e8a36938d92acb87ee19ea9f28a6d WatchSource:0}: Error finding container 580e311474fd6d470666a1ec061e2da3276e8a36938d92acb87ee19ea9f28a6d: Status 404 returned error can't find the container with id 580e311474fd6d470666a1ec061e2da3276e8a36938d92acb87ee19ea9f28a6d Feb 02 22:45:57 crc kubenswrapper[4789]: I0202 22:45:57.592682 4789 generic.go:334] "Generic (PLEG): container finished" podID="40f1a92c-8d4d-43ec-b1a6-ad811716cb87" containerID="b10f7a8e05776b8c5a044e0120a0ee30102dd615192c882814be8ec81d3d49e1" exitCode=0 Feb 02 22:45:57 crc kubenswrapper[4789]: I0202 22:45:57.593055 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-667d7cd957-rgfp8" event={"ID":"40f1a92c-8d4d-43ec-b1a6-ad811716cb87","Type":"ContainerDied","Data":"b10f7a8e05776b8c5a044e0120a0ee30102dd615192c882814be8ec81d3d49e1"} Feb 02 22:45:57 crc kubenswrapper[4789]: I0202 22:45:57.593097 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-667d7cd957-rgfp8" event={"ID":"40f1a92c-8d4d-43ec-b1a6-ad811716cb87","Type":"ContainerStarted","Data":"580e311474fd6d470666a1ec061e2da3276e8a36938d92acb87ee19ea9f28a6d"} Feb 02 22:45:57 crc kubenswrapper[4789]: I0202 22:45:57.598449 4789 generic.go:334] "Generic (PLEG): container finished" podID="4998aa3c-a550-4732-a545-25b1a93d0fa6" containerID="e26779803f9cddcd33e1c07c009aaac1e10d1c089974b65bf2702bf0a3a05f44" exitCode=0 Feb 02 22:45:57 crc kubenswrapper[4789]: I0202 22:45:57.598687 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cd49575f7-p97s5" event={"ID":"4998aa3c-a550-4732-a545-25b1a93d0fa6","Type":"ContainerDied","Data":"e26779803f9cddcd33e1c07c009aaac1e10d1c089974b65bf2702bf0a3a05f44"} Feb 02 22:45:57 crc kubenswrapper[4789]: I0202 22:45:57.959537 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cd49575f7-p97s5" Feb 02 22:45:58 crc kubenswrapper[4789]: I0202 22:45:58.135947 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4998aa3c-a550-4732-a545-25b1a93d0fa6-config\") pod \"4998aa3c-a550-4732-a545-25b1a93d0fa6\" (UID: \"4998aa3c-a550-4732-a545-25b1a93d0fa6\") " Feb 02 22:45:58 crc kubenswrapper[4789]: I0202 22:45:58.136034 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjh2l\" (UniqueName: \"kubernetes.io/projected/4998aa3c-a550-4732-a545-25b1a93d0fa6-kube-api-access-fjh2l\") pod \"4998aa3c-a550-4732-a545-25b1a93d0fa6\" (UID: \"4998aa3c-a550-4732-a545-25b1a93d0fa6\") " Feb 02 22:45:58 crc kubenswrapper[4789]: I0202 22:45:58.136078 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4998aa3c-a550-4732-a545-25b1a93d0fa6-dns-svc\") pod \"4998aa3c-a550-4732-a545-25b1a93d0fa6\" (UID: \"4998aa3c-a550-4732-a545-25b1a93d0fa6\") " Feb 02 22:45:58 crc kubenswrapper[4789]: I0202 22:45:58.136131 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4998aa3c-a550-4732-a545-25b1a93d0fa6-ovsdbserver-sb\") pod \"4998aa3c-a550-4732-a545-25b1a93d0fa6\" (UID: \"4998aa3c-a550-4732-a545-25b1a93d0fa6\") " Feb 02 22:45:58 crc kubenswrapper[4789]: I0202 22:45:58.142950 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4998aa3c-a550-4732-a545-25b1a93d0fa6-kube-api-access-fjh2l" (OuterVolumeSpecName: "kube-api-access-fjh2l") pod "4998aa3c-a550-4732-a545-25b1a93d0fa6" (UID: "4998aa3c-a550-4732-a545-25b1a93d0fa6"). InnerVolumeSpecName "kube-api-access-fjh2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 22:45:58 crc kubenswrapper[4789]: I0202 22:45:58.168303 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4998aa3c-a550-4732-a545-25b1a93d0fa6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4998aa3c-a550-4732-a545-25b1a93d0fa6" (UID: "4998aa3c-a550-4732-a545-25b1a93d0fa6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 22:45:58 crc kubenswrapper[4789]: I0202 22:45:58.171371 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4998aa3c-a550-4732-a545-25b1a93d0fa6-config" (OuterVolumeSpecName: "config") pod "4998aa3c-a550-4732-a545-25b1a93d0fa6" (UID: "4998aa3c-a550-4732-a545-25b1a93d0fa6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 22:45:58 crc kubenswrapper[4789]: I0202 22:45:58.173757 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4998aa3c-a550-4732-a545-25b1a93d0fa6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4998aa3c-a550-4732-a545-25b1a93d0fa6" (UID: "4998aa3c-a550-4732-a545-25b1a93d0fa6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 22:45:58 crc kubenswrapper[4789]: I0202 22:45:58.238307 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjh2l\" (UniqueName: \"kubernetes.io/projected/4998aa3c-a550-4732-a545-25b1a93d0fa6-kube-api-access-fjh2l\") on node \"crc\" DevicePath \"\"" Feb 02 22:45:58 crc kubenswrapper[4789]: I0202 22:45:58.238340 4789 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4998aa3c-a550-4732-a545-25b1a93d0fa6-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 22:45:58 crc kubenswrapper[4789]: I0202 22:45:58.238352 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4998aa3c-a550-4732-a545-25b1a93d0fa6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 22:45:58 crc kubenswrapper[4789]: I0202 22:45:58.238362 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4998aa3c-a550-4732-a545-25b1a93d0fa6-config\") on node \"crc\" DevicePath \"\"" Feb 02 22:45:58 crc kubenswrapper[4789]: I0202 22:45:58.612151 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-667d7cd957-rgfp8" event={"ID":"40f1a92c-8d4d-43ec-b1a6-ad811716cb87","Type":"ContainerStarted","Data":"9c5e1970b7c142ebcc7f4e9e9dde7ae7fc93e4afcce825534defc74fc73b0361"} Feb 02 22:45:58 crc kubenswrapper[4789]: I0202 22:45:58.612397 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-667d7cd957-rgfp8" Feb 02 22:45:58 crc kubenswrapper[4789]: I0202 22:45:58.615060 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cd49575f7-p97s5" event={"ID":"4998aa3c-a550-4732-a545-25b1a93d0fa6","Type":"ContainerDied","Data":"d59a628f25966dfa60828ebe8195413d00fbbcf97b849914f4aca39b945712f8"} Feb 02 22:45:58 crc kubenswrapper[4789]: I0202 22:45:58.615118 4789 scope.go:117] "RemoveContainer" containerID="e26779803f9cddcd33e1c07c009aaac1e10d1c089974b65bf2702bf0a3a05f44" Feb 02 22:45:58 crc kubenswrapper[4789]: I0202 22:45:58.615201 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cd49575f7-p97s5" Feb 02 22:45:58 crc kubenswrapper[4789]: I0202 22:45:58.654098 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-667d7cd957-rgfp8" podStartSLOduration=2.654072214 podStartE2EDuration="2.654072214s" podCreationTimestamp="2026-02-02 22:45:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 22:45:58.642218189 +0000 UTC m=+5178.937243288" watchObservedRunningTime="2026-02-02 22:45:58.654072214 +0000 UTC m=+5178.949097263" Feb 02 22:45:58 crc kubenswrapper[4789]: I0202 22:45:58.712288 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cd49575f7-p97s5"] Feb 02 22:45:58 crc kubenswrapper[4789]: I0202 22:45:58.723510 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cd49575f7-p97s5"] Feb 02 22:45:59 crc kubenswrapper[4789]: I0202 22:45:59.732265 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Feb 02 22:45:59 crc kubenswrapper[4789]: E0202 22:45:59.732760 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4998aa3c-a550-4732-a545-25b1a93d0fa6" containerName="init" Feb 02 22:45:59 crc kubenswrapper[4789]: I0202 22:45:59.732776 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="4998aa3c-a550-4732-a545-25b1a93d0fa6" containerName="init" Feb 02 22:45:59 crc kubenswrapper[4789]: I0202 22:45:59.732982 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="4998aa3c-a550-4732-a545-25b1a93d0fa6" containerName="init" Feb 02 22:45:59 crc kubenswrapper[4789]: I0202 22:45:59.733642 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Feb 02 22:45:59 crc kubenswrapper[4789]: I0202 22:45:59.738181 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Feb 02 22:45:59 crc kubenswrapper[4789]: I0202 22:45:59.746331 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Feb 02 22:45:59 crc kubenswrapper[4789]: I0202 22:45:59.866500 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vqkf\" (UniqueName: \"kubernetes.io/projected/c7c9b895-669c-426e-b9ad-7efd519878e4-kube-api-access-2vqkf\") pod \"ovn-copy-data\" (UID: \"c7c9b895-669c-426e-b9ad-7efd519878e4\") " pod="openstack/ovn-copy-data" Feb 02 22:45:59 crc kubenswrapper[4789]: I0202 22:45:59.866629 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7563998a-53aa-41b9-a71f-7c115a170305\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7563998a-53aa-41b9-a71f-7c115a170305\") pod \"ovn-copy-data\" (UID: \"c7c9b895-669c-426e-b9ad-7efd519878e4\") " pod="openstack/ovn-copy-data" Feb 02 22:45:59 crc kubenswrapper[4789]: I0202 22:45:59.866666 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/c7c9b895-669c-426e-b9ad-7efd519878e4-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"c7c9b895-669c-426e-b9ad-7efd519878e4\") " pod="openstack/ovn-copy-data" Feb 02 22:45:59 crc kubenswrapper[4789]: I0202 22:45:59.967829 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vqkf\" (UniqueName: \"kubernetes.io/projected/c7c9b895-669c-426e-b9ad-7efd519878e4-kube-api-access-2vqkf\") pod \"ovn-copy-data\" (UID: \"c7c9b895-669c-426e-b9ad-7efd519878e4\") " pod="openstack/ovn-copy-data" Feb 02 22:45:59 crc kubenswrapper[4789]: I0202 22:45:59.967900 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7563998a-53aa-41b9-a71f-7c115a170305\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7563998a-53aa-41b9-a71f-7c115a170305\") pod \"ovn-copy-data\" (UID: \"c7c9b895-669c-426e-b9ad-7efd519878e4\") " pod="openstack/ovn-copy-data" Feb 02 22:45:59 crc kubenswrapper[4789]: I0202 22:45:59.967931 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/c7c9b895-669c-426e-b9ad-7efd519878e4-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"c7c9b895-669c-426e-b9ad-7efd519878e4\") " pod="openstack/ovn-copy-data" Feb 02 22:45:59 crc kubenswrapper[4789]: I0202 22:45:59.970292 4789 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 02 22:45:59 crc kubenswrapper[4789]: I0202 22:45:59.970349 4789 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7563998a-53aa-41b9-a71f-7c115a170305\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7563998a-53aa-41b9-a71f-7c115a170305\") pod \"ovn-copy-data\" (UID: \"c7c9b895-669c-426e-b9ad-7efd519878e4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8d8a73e3851135c9600af484d844e45c7eec794c99314105989add1c4ca58b90/globalmount\"" pod="openstack/ovn-copy-data"
Feb 02 22:45:59 crc kubenswrapper[4789]: I0202 22:45:59.972507 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/c7c9b895-669c-426e-b9ad-7efd519878e4-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"c7c9b895-669c-426e-b9ad-7efd519878e4\") " pod="openstack/ovn-copy-data"
Feb 02 22:45:59 crc kubenswrapper[4789]: I0202 22:45:59.991772 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vqkf\" (UniqueName: \"kubernetes.io/projected/c7c9b895-669c-426e-b9ad-7efd519878e4-kube-api-access-2vqkf\") pod \"ovn-copy-data\" (UID: \"c7c9b895-669c-426e-b9ad-7efd519878e4\") " pod="openstack/ovn-copy-data"
Feb 02 22:46:00 crc kubenswrapper[4789]: I0202 22:46:00.001550 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7563998a-53aa-41b9-a71f-7c115a170305\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7563998a-53aa-41b9-a71f-7c115a170305\") pod \"ovn-copy-data\" (UID: \"c7c9b895-669c-426e-b9ad-7efd519878e4\") " pod="openstack/ovn-copy-data"
Feb 02 22:46:00 crc kubenswrapper[4789]: I0202 22:46:00.061026 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data"
Feb 02 22:46:00 crc kubenswrapper[4789]: I0202 22:46:00.433844 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4998aa3c-a550-4732-a545-25b1a93d0fa6" path="/var/lib/kubelet/pods/4998aa3c-a550-4732-a545-25b1a93d0fa6/volumes"
Feb 02 22:46:00 crc kubenswrapper[4789]: I0202 22:46:00.672527 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"]
Feb 02 22:46:00 crc kubenswrapper[4789]: W0202 22:46:00.675878 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7c9b895_669c_426e_b9ad_7efd519878e4.slice/crio-61b9e70d90212088fc53e6767a7de180689075189c09a44a1f151413dc7cfe1a WatchSource:0}: Error finding container 61b9e70d90212088fc53e6767a7de180689075189c09a44a1f151413dc7cfe1a: Status 404 returned error can't find the container with id 61b9e70d90212088fc53e6767a7de180689075189c09a44a1f151413dc7cfe1a
Feb 02 22:46:01 crc kubenswrapper[4789]: I0202 22:46:01.659715 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"c7c9b895-669c-426e-b9ad-7efd519878e4","Type":"ContainerStarted","Data":"69201f2a1d73db5ba3ee7193128f130c3e86cc33c8743de59c4e877cdacee2fc"}
Feb 02 22:46:01 crc kubenswrapper[4789]: I0202 22:46:01.659819 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"c7c9b895-669c-426e-b9ad-7efd519878e4","Type":"ContainerStarted","Data":"61b9e70d90212088fc53e6767a7de180689075189c09a44a1f151413dc7cfe1a"}
Feb 02 22:46:01 crc kubenswrapper[4789]: I0202 22:46:01.686515 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=3.686489837 podStartE2EDuration="3.686489837s" podCreationTimestamp="2026-02-02 22:45:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 22:46:01.684032717 +0000 UTC m=+5181.979057776" watchObservedRunningTime="2026-02-02 22:46:01.686489837 +0000 UTC m=+5181.981514886"
Feb 02 22:46:05 crc kubenswrapper[4789]: E0202 22:46:05.923740 4789 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.189:39872->38.102.83.189:36729: write tcp 38.102.83.189:39872->38.102.83.189:36729: write: broken pipe
Feb 02 22:46:06 crc kubenswrapper[4789]: I0202 22:46:06.410882 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-667d7cd957-rgfp8"
Feb 02 22:46:06 crc kubenswrapper[4789]: I0202 22:46:06.507850 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-dxg9d"]
Feb 02 22:46:06 crc kubenswrapper[4789]: I0202 22:46:06.508225 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b7946d7b9-dxg9d" podUID="52bd3963-7c35-4c60-ba08-d537d3f8ec1d" containerName="dnsmasq-dns" containerID="cri-o://e71533d78d9a4947cc4bff7b06f77378bf709625db20ae14a3ebec3b8277a16f" gracePeriod=10
Feb 02 22:46:06 crc kubenswrapper[4789]: I0202 22:46:06.709270 4789 generic.go:334] "Generic (PLEG): container finished" podID="52bd3963-7c35-4c60-ba08-d537d3f8ec1d" containerID="e71533d78d9a4947cc4bff7b06f77378bf709625db20ae14a3ebec3b8277a16f" exitCode=0
Feb 02 22:46:06 crc kubenswrapper[4789]: I0202 22:46:06.709443 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-dxg9d" event={"ID":"52bd3963-7c35-4c60-ba08-d537d3f8ec1d","Type":"ContainerDied","Data":"e71533d78d9a4947cc4bff7b06f77378bf709625db20ae14a3ebec3b8277a16f"}
Feb 02 22:46:06 crc kubenswrapper[4789]: I0202 22:46:06.953287 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-dxg9d"
Feb 02 22:46:07 crc kubenswrapper[4789]: I0202 22:46:07.109801 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52bd3963-7c35-4c60-ba08-d537d3f8ec1d-dns-svc\") pod \"52bd3963-7c35-4c60-ba08-d537d3f8ec1d\" (UID: \"52bd3963-7c35-4c60-ba08-d537d3f8ec1d\") "
Feb 02 22:46:07 crc kubenswrapper[4789]: I0202 22:46:07.109961 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52bd3963-7c35-4c60-ba08-d537d3f8ec1d-config\") pod \"52bd3963-7c35-4c60-ba08-d537d3f8ec1d\" (UID: \"52bd3963-7c35-4c60-ba08-d537d3f8ec1d\") "
Feb 02 22:46:07 crc kubenswrapper[4789]: I0202 22:46:07.109999 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sz2xv\" (UniqueName: \"kubernetes.io/projected/52bd3963-7c35-4c60-ba08-d537d3f8ec1d-kube-api-access-sz2xv\") pod \"52bd3963-7c35-4c60-ba08-d537d3f8ec1d\" (UID: \"52bd3963-7c35-4c60-ba08-d537d3f8ec1d\") "
Feb 02 22:46:07 crc kubenswrapper[4789]: I0202 22:46:07.122971 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52bd3963-7c35-4c60-ba08-d537d3f8ec1d-kube-api-access-sz2xv" (OuterVolumeSpecName: "kube-api-access-sz2xv") pod "52bd3963-7c35-4c60-ba08-d537d3f8ec1d" (UID: "52bd3963-7c35-4c60-ba08-d537d3f8ec1d"). InnerVolumeSpecName "kube-api-access-sz2xv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 22:46:07 crc kubenswrapper[4789]: I0202 22:46:07.161379 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52bd3963-7c35-4c60-ba08-d537d3f8ec1d-config" (OuterVolumeSpecName: "config") pod "52bd3963-7c35-4c60-ba08-d537d3f8ec1d" (UID: "52bd3963-7c35-4c60-ba08-d537d3f8ec1d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 22:46:07 crc kubenswrapper[4789]: I0202 22:46:07.165508 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52bd3963-7c35-4c60-ba08-d537d3f8ec1d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "52bd3963-7c35-4c60-ba08-d537d3f8ec1d" (UID: "52bd3963-7c35-4c60-ba08-d537d3f8ec1d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 22:46:07 crc kubenswrapper[4789]: I0202 22:46:07.212129 4789 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52bd3963-7c35-4c60-ba08-d537d3f8ec1d-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 02 22:46:07 crc kubenswrapper[4789]: I0202 22:46:07.212160 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52bd3963-7c35-4c60-ba08-d537d3f8ec1d-config\") on node \"crc\" DevicePath \"\""
Feb 02 22:46:07 crc kubenswrapper[4789]: I0202 22:46:07.212173 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sz2xv\" (UniqueName: \"kubernetes.io/projected/52bd3963-7c35-4c60-ba08-d537d3f8ec1d-kube-api-access-sz2xv\") on node \"crc\" DevicePath \"\""
Feb 02 22:46:07 crc kubenswrapper[4789]: I0202 22:46:07.325265 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Feb 02 22:46:07 crc kubenswrapper[4789]: E0202 22:46:07.325769 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52bd3963-7c35-4c60-ba08-d537d3f8ec1d" containerName="init"
Feb 02 22:46:07 crc kubenswrapper[4789]: I0202 22:46:07.325821 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="52bd3963-7c35-4c60-ba08-d537d3f8ec1d" containerName="init"
Feb 02 22:46:07 crc kubenswrapper[4789]: E0202 22:46:07.325844 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52bd3963-7c35-4c60-ba08-d537d3f8ec1d" containerName="dnsmasq-dns"
Feb 02 22:46:07 crc kubenswrapper[4789]: I0202 22:46:07.325852 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="52bd3963-7c35-4c60-ba08-d537d3f8ec1d" containerName="dnsmasq-dns"
Feb 02 22:46:07 crc kubenswrapper[4789]: I0202 22:46:07.326034 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="52bd3963-7c35-4c60-ba08-d537d3f8ec1d" containerName="dnsmasq-dns"
Feb 02 22:46:07 crc kubenswrapper[4789]: I0202 22:46:07.327002 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 02 22:46:07 crc kubenswrapper[4789]: I0202 22:46:07.337279 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Feb 02 22:46:07 crc kubenswrapper[4789]: I0202 22:46:07.337396 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Feb 02 22:46:07 crc kubenswrapper[4789]: I0202 22:46:07.337299 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-hxkl5"
Feb 02 22:46:07 crc kubenswrapper[4789]: I0202 22:46:07.356924 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 02 22:46:07 crc kubenswrapper[4789]: I0202 22:46:07.416124 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8581696e-1d96-46c1-8969-ab62ab7f296e-config\") pod \"ovn-northd-0\" (UID: \"8581696e-1d96-46c1-8969-ab62ab7f296e\") " pod="openstack/ovn-northd-0"
Feb 02 22:46:07 crc kubenswrapper[4789]: I0202 22:46:07.416361 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zp6c\" (UniqueName: \"kubernetes.io/projected/8581696e-1d96-46c1-8969-ab62ab7f296e-kube-api-access-2zp6c\") pod \"ovn-northd-0\" (UID: \"8581696e-1d96-46c1-8969-ab62ab7f296e\") " pod="openstack/ovn-northd-0"
Feb 02 22:46:07 crc kubenswrapper[4789]: I0202 22:46:07.416400 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8581696e-1d96-46c1-8969-ab62ab7f296e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8581696e-1d96-46c1-8969-ab62ab7f296e\") " pod="openstack/ovn-northd-0"
Feb 02 22:46:07 crc kubenswrapper[4789]: I0202 22:46:07.416427 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8581696e-1d96-46c1-8969-ab62ab7f296e-scripts\") pod \"ovn-northd-0\" (UID: \"8581696e-1d96-46c1-8969-ab62ab7f296e\") " pod="openstack/ovn-northd-0"
Feb 02 22:46:07 crc kubenswrapper[4789]: I0202 22:46:07.416708 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8581696e-1d96-46c1-8969-ab62ab7f296e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8581696e-1d96-46c1-8969-ab62ab7f296e\") " pod="openstack/ovn-northd-0"
Feb 02 22:46:07 crc kubenswrapper[4789]: I0202 22:46:07.518506 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8581696e-1d96-46c1-8969-ab62ab7f296e-config\") pod \"ovn-northd-0\" (UID: \"8581696e-1d96-46c1-8969-ab62ab7f296e\") " pod="openstack/ovn-northd-0"
Feb 02 22:46:07 crc kubenswrapper[4789]: I0202 22:46:07.518651 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zp6c\" (UniqueName: \"kubernetes.io/projected/8581696e-1d96-46c1-8969-ab62ab7f296e-kube-api-access-2zp6c\") pod \"ovn-northd-0\" (UID: \"8581696e-1d96-46c1-8969-ab62ab7f296e\") " pod="openstack/ovn-northd-0"
Feb 02 22:46:07 crc kubenswrapper[4789]: I0202 22:46:07.518689 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8581696e-1d96-46c1-8969-ab62ab7f296e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8581696e-1d96-46c1-8969-ab62ab7f296e\") " pod="openstack/ovn-northd-0"
Feb 02 22:46:07 crc kubenswrapper[4789]: I0202 22:46:07.518706 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8581696e-1d96-46c1-8969-ab62ab7f296e-scripts\") pod \"ovn-northd-0\" (UID: \"8581696e-1d96-46c1-8969-ab62ab7f296e\") " pod="openstack/ovn-northd-0"
Feb 02 22:46:07 crc kubenswrapper[4789]: I0202 22:46:07.518737 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8581696e-1d96-46c1-8969-ab62ab7f296e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8581696e-1d96-46c1-8969-ab62ab7f296e\") " pod="openstack/ovn-northd-0"
Feb 02 22:46:07 crc kubenswrapper[4789]: I0202 22:46:07.519832 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8581696e-1d96-46c1-8969-ab62ab7f296e-config\") pod \"ovn-northd-0\" (UID: \"8581696e-1d96-46c1-8969-ab62ab7f296e\") " pod="openstack/ovn-northd-0"
Feb 02 22:46:07 crc kubenswrapper[4789]: I0202 22:46:07.520533 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8581696e-1d96-46c1-8969-ab62ab7f296e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8581696e-1d96-46c1-8969-ab62ab7f296e\") " pod="openstack/ovn-northd-0"
Feb 02 22:46:07 crc kubenswrapper[4789]: I0202 22:46:07.521032 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8581696e-1d96-46c1-8969-ab62ab7f296e-scripts\") pod \"ovn-northd-0\" (UID: \"8581696e-1d96-46c1-8969-ab62ab7f296e\") " pod="openstack/ovn-northd-0"
Feb 02 22:46:07 crc kubenswrapper[4789]: I0202 22:46:07.524209 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8581696e-1d96-46c1-8969-ab62ab7f296e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8581696e-1d96-46c1-8969-ab62ab7f296e\") " pod="openstack/ovn-northd-0"
Feb 02 22:46:07 crc kubenswrapper[4789]: I0202 22:46:07.571765 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zp6c\" (UniqueName: \"kubernetes.io/projected/8581696e-1d96-46c1-8969-ab62ab7f296e-kube-api-access-2zp6c\") pod \"ovn-northd-0\" (UID: \"8581696e-1d96-46c1-8969-ab62ab7f296e\") " pod="openstack/ovn-northd-0"
Feb 02 22:46:07 crc kubenswrapper[4789]: I0202 22:46:07.658320 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 02 22:46:07 crc kubenswrapper[4789]: I0202 22:46:07.720918 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-dxg9d" event={"ID":"52bd3963-7c35-4c60-ba08-d537d3f8ec1d","Type":"ContainerDied","Data":"5b98a338f23db5ad0201e2477413a690d5bc49f78a8837ca09c36c6f6c893b6a"}
Feb 02 22:46:07 crc kubenswrapper[4789]: I0202 22:46:07.721164 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-dxg9d"
Feb 02 22:46:07 crc kubenswrapper[4789]: I0202 22:46:07.721182 4789 scope.go:117] "RemoveContainer" containerID="e71533d78d9a4947cc4bff7b06f77378bf709625db20ae14a3ebec3b8277a16f"
Feb 02 22:46:07 crc kubenswrapper[4789]: I0202 22:46:07.773382 4789 scope.go:117] "RemoveContainer" containerID="60b0bebdff37c0e9bec28f7e0746b96cbe025f5262c8de1d069e7ba8595d6bc4"
Feb 02 22:46:07 crc kubenswrapper[4789]: I0202 22:46:07.774861 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-dxg9d"]
Feb 02 22:46:07 crc kubenswrapper[4789]: I0202 22:46:07.781213 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-dxg9d"]
Feb 02 22:46:08 crc kubenswrapper[4789]: I0202 22:46:08.201154 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 02 22:46:08 crc kubenswrapper[4789]: W0202 22:46:08.206838 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8581696e_1d96_46c1_8969_ab62ab7f296e.slice/crio-c3dd7622a4984c85920834380755485ad17b9deaa47eab2ed0211b1d06786ee6 WatchSource:0}: Error finding container c3dd7622a4984c85920834380755485ad17b9deaa47eab2ed0211b1d06786ee6: Status 404 returned error can't find the container with id c3dd7622a4984c85920834380755485ad17b9deaa47eab2ed0211b1d06786ee6
Feb 02 22:46:08 crc kubenswrapper[4789]: I0202 22:46:08.446604 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52bd3963-7c35-4c60-ba08-d537d3f8ec1d" path="/var/lib/kubelet/pods/52bd3963-7c35-4c60-ba08-d537d3f8ec1d/volumes"
Feb 02 22:46:08 crc kubenswrapper[4789]: I0202 22:46:08.735754 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8581696e-1d96-46c1-8969-ab62ab7f296e","Type":"ContainerStarted","Data":"e7c130ac9cfd6c28c6b26b7c77d90185ca8dbd91ffff64184dd2de29423e2977"}
Feb 02 22:46:08 crc kubenswrapper[4789]: I0202 22:46:08.735862 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8581696e-1d96-46c1-8969-ab62ab7f296e","Type":"ContainerStarted","Data":"2c0655fdb302cc053b38d84de3785b3906c2fa119759230158e9eb4fd01c3696"}
Feb 02 22:46:08 crc kubenswrapper[4789]: I0202 22:46:08.735893 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8581696e-1d96-46c1-8969-ab62ab7f296e","Type":"ContainerStarted","Data":"c3dd7622a4984c85920834380755485ad17b9deaa47eab2ed0211b1d06786ee6"}
Feb 02 22:46:08 crc kubenswrapper[4789]: I0202 22:46:08.736194 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Feb 02 22:46:08 crc kubenswrapper[4789]: I0202 22:46:08.764315 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.764285948 podStartE2EDuration="1.764285948s" podCreationTimestamp="2026-02-02 22:46:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 22:46:08.763258839 +0000 UTC m=+5189.058283938" watchObservedRunningTime="2026-02-02 22:46:08.764285948 +0000 UTC m=+5189.059310997"
Feb 02 22:46:12 crc kubenswrapper[4789]: I0202 22:46:12.730389 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-r4kr2"]
Feb 02 22:46:12 crc kubenswrapper[4789]: I0202 22:46:12.732117 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-r4kr2"
Feb 02 22:46:12 crc kubenswrapper[4789]: I0202 22:46:12.742788 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-r4kr2"]
Feb 02 22:46:12 crc kubenswrapper[4789]: I0202 22:46:12.827765 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5c92-account-create-update-4shd8"]
Feb 02 22:46:12 crc kubenswrapper[4789]: I0202 22:46:12.828788 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5c92-account-create-update-4shd8"
Feb 02 22:46:12 crc kubenswrapper[4789]: I0202 22:46:12.830460 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Feb 02 22:46:12 crc kubenswrapper[4789]: I0202 22:46:12.839730 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a93cc873-6b0b-4eb8-be87-acda79c160af-operator-scripts\") pod \"keystone-db-create-r4kr2\" (UID: \"a93cc873-6b0b-4eb8-be87-acda79c160af\") " pod="openstack/keystone-db-create-r4kr2"
Feb 02 22:46:12 crc kubenswrapper[4789]: I0202 22:46:12.839887 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7lq9\" (UniqueName: \"kubernetes.io/projected/a93cc873-6b0b-4eb8-be87-acda79c160af-kube-api-access-n7lq9\") pod \"keystone-db-create-r4kr2\" (UID: \"a93cc873-6b0b-4eb8-be87-acda79c160af\") " pod="openstack/keystone-db-create-r4kr2"
Feb 02 22:46:12 crc kubenswrapper[4789]: I0202 22:46:12.840834 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5c92-account-create-update-4shd8"]
Feb 02 22:46:12 crc kubenswrapper[4789]: I0202 22:46:12.941485 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7308363-74f5-4ae7-8695-3017babea57c-operator-scripts\") pod \"keystone-5c92-account-create-update-4shd8\" (UID: \"c7308363-74f5-4ae7-8695-3017babea57c\") " pod="openstack/keystone-5c92-account-create-update-4shd8"
Feb 02 22:46:12 crc kubenswrapper[4789]: I0202 22:46:12.941553 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7lq9\" (UniqueName: \"kubernetes.io/projected/a93cc873-6b0b-4eb8-be87-acda79c160af-kube-api-access-n7lq9\") pod \"keystone-db-create-r4kr2\" (UID: \"a93cc873-6b0b-4eb8-be87-acda79c160af\") " pod="openstack/keystone-db-create-r4kr2"
Feb 02 22:46:12 crc kubenswrapper[4789]: I0202 22:46:12.941700 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a93cc873-6b0b-4eb8-be87-acda79c160af-operator-scripts\") pod \"keystone-db-create-r4kr2\" (UID: \"a93cc873-6b0b-4eb8-be87-acda79c160af\") " pod="openstack/keystone-db-create-r4kr2"
Feb 02 22:46:12 crc kubenswrapper[4789]: I0202 22:46:12.941737 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2x9f\" (UniqueName: \"kubernetes.io/projected/c7308363-74f5-4ae7-8695-3017babea57c-kube-api-access-k2x9f\") pod \"keystone-5c92-account-create-update-4shd8\" (UID: \"c7308363-74f5-4ae7-8695-3017babea57c\") " pod="openstack/keystone-5c92-account-create-update-4shd8"
Feb 02 22:46:12 crc kubenswrapper[4789]: I0202 22:46:12.942403 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a93cc873-6b0b-4eb8-be87-acda79c160af-operator-scripts\") pod \"keystone-db-create-r4kr2\" (UID: \"a93cc873-6b0b-4eb8-be87-acda79c160af\") " pod="openstack/keystone-db-create-r4kr2"
Feb 02 22:46:12 crc kubenswrapper[4789]: I0202 22:46:12.965033 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7lq9\" (UniqueName: \"kubernetes.io/projected/a93cc873-6b0b-4eb8-be87-acda79c160af-kube-api-access-n7lq9\") pod \"keystone-db-create-r4kr2\" (UID: \"a93cc873-6b0b-4eb8-be87-acda79c160af\") " pod="openstack/keystone-db-create-r4kr2"
Feb 02 22:46:13 crc kubenswrapper[4789]: I0202 22:46:13.042741 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7308363-74f5-4ae7-8695-3017babea57c-operator-scripts\") pod \"keystone-5c92-account-create-update-4shd8\" (UID: \"c7308363-74f5-4ae7-8695-3017babea57c\") " pod="openstack/keystone-5c92-account-create-update-4shd8"
Feb 02 22:46:13 crc kubenswrapper[4789]: I0202 22:46:13.043180 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2x9f\" (UniqueName: \"kubernetes.io/projected/c7308363-74f5-4ae7-8695-3017babea57c-kube-api-access-k2x9f\") pod \"keystone-5c92-account-create-update-4shd8\" (UID: \"c7308363-74f5-4ae7-8695-3017babea57c\") " pod="openstack/keystone-5c92-account-create-update-4shd8"
Feb 02 22:46:13 crc kubenswrapper[4789]: I0202 22:46:13.043514 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7308363-74f5-4ae7-8695-3017babea57c-operator-scripts\") pod \"keystone-5c92-account-create-update-4shd8\" (UID: \"c7308363-74f5-4ae7-8695-3017babea57c\") " pod="openstack/keystone-5c92-account-create-update-4shd8"
Feb 02 22:46:13 crc kubenswrapper[4789]: I0202 22:46:13.052237 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-r4kr2"
Feb 02 22:46:13 crc kubenswrapper[4789]: I0202 22:46:13.059553 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2x9f\" (UniqueName: \"kubernetes.io/projected/c7308363-74f5-4ae7-8695-3017babea57c-kube-api-access-k2x9f\") pod \"keystone-5c92-account-create-update-4shd8\" (UID: \"c7308363-74f5-4ae7-8695-3017babea57c\") " pod="openstack/keystone-5c92-account-create-update-4shd8"
Feb 02 22:46:13 crc kubenswrapper[4789]: I0202 22:46:13.149201 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5c92-account-create-update-4shd8"
Feb 02 22:46:13 crc kubenswrapper[4789]: I0202 22:46:13.538744 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-r4kr2"]
Feb 02 22:46:13 crc kubenswrapper[4789]: W0202 22:46:13.539727 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda93cc873_6b0b_4eb8_be87_acda79c160af.slice/crio-679bc76267e9defbbc59f56b3bf58d669edfffa0c3f8e578dbbc1a3ff79fdab7 WatchSource:0}: Error finding container 679bc76267e9defbbc59f56b3bf58d669edfffa0c3f8e578dbbc1a3ff79fdab7: Status 404 returned error can't find the container with id 679bc76267e9defbbc59f56b3bf58d669edfffa0c3f8e578dbbc1a3ff79fdab7
Feb 02 22:46:13 crc kubenswrapper[4789]: I0202 22:46:13.601387 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5c92-account-create-update-4shd8"]
Feb 02 22:46:13 crc kubenswrapper[4789]: W0202 22:46:13.606867 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7308363_74f5_4ae7_8695_3017babea57c.slice/crio-6cf1488a0115f01b8978e6fcfab94a5f899063591bc9e62397cb3beb0c7ac01f WatchSource:0}: Error finding container 6cf1488a0115f01b8978e6fcfab94a5f899063591bc9e62397cb3beb0c7ac01f: Status 404 returned error can't find the container with id 6cf1488a0115f01b8978e6fcfab94a5f899063591bc9e62397cb3beb0c7ac01f
Feb 02 22:46:13 crc kubenswrapper[4789]: I0202 22:46:13.785252 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-r4kr2" event={"ID":"a93cc873-6b0b-4eb8-be87-acda79c160af","Type":"ContainerStarted","Data":"548ccb062dedfe43d98fd2e457b5cb68f1291df8673ce70470e91b45e78d3d90"}
Feb 02 22:46:13 crc kubenswrapper[4789]: I0202 22:46:13.785313 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-r4kr2" event={"ID":"a93cc873-6b0b-4eb8-be87-acda79c160af","Type":"ContainerStarted","Data":"679bc76267e9defbbc59f56b3bf58d669edfffa0c3f8e578dbbc1a3ff79fdab7"}
Feb 02 22:46:13 crc kubenswrapper[4789]: I0202 22:46:13.788300 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5c92-account-create-update-4shd8" event={"ID":"c7308363-74f5-4ae7-8695-3017babea57c","Type":"ContainerStarted","Data":"543c6c7d8dc94243f269f4111a2024f567753214c363b23421dee005c5ba5b3c"}
Feb 02 22:46:13 crc kubenswrapper[4789]: I0202 22:46:13.788341 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5c92-account-create-update-4shd8" event={"ID":"c7308363-74f5-4ae7-8695-3017babea57c","Type":"ContainerStarted","Data":"6cf1488a0115f01b8978e6fcfab94a5f899063591bc9e62397cb3beb0c7ac01f"}
Feb 02 22:46:13 crc kubenswrapper[4789]: I0202 22:46:13.810875 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-r4kr2" podStartSLOduration=1.810849926 podStartE2EDuration="1.810849926s" podCreationTimestamp="2026-02-02 22:46:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 22:46:13.808398097 +0000 UTC m=+5194.103423116" watchObservedRunningTime="2026-02-02 22:46:13.810849926 +0000 UTC m=+5194.105874955"
Feb 02 22:46:13 crc kubenswrapper[4789]: I0202 22:46:13.832354 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5c92-account-create-update-4shd8" podStartSLOduration=1.832333443 podStartE2EDuration="1.832333443s" podCreationTimestamp="2026-02-02 22:46:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 22:46:13.827938029 +0000 UTC m=+5194.122963058" watchObservedRunningTime="2026-02-02 22:46:13.832333443 +0000 UTC m=+5194.127358462"
Feb 02 22:46:14 crc kubenswrapper[4789]: I0202 22:46:14.801173 4789 generic.go:334] "Generic (PLEG): container finished" podID="a93cc873-6b0b-4eb8-be87-acda79c160af" containerID="548ccb062dedfe43d98fd2e457b5cb68f1291df8673ce70470e91b45e78d3d90" exitCode=0
Feb 02 22:46:14 crc kubenswrapper[4789]: I0202 22:46:14.801464 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-r4kr2" event={"ID":"a93cc873-6b0b-4eb8-be87-acda79c160af","Type":"ContainerDied","Data":"548ccb062dedfe43d98fd2e457b5cb68f1291df8673ce70470e91b45e78d3d90"}
Feb 02 22:46:14 crc kubenswrapper[4789]: I0202 22:46:14.806074 4789 generic.go:334] "Generic (PLEG): container finished" podID="c7308363-74f5-4ae7-8695-3017babea57c" containerID="543c6c7d8dc94243f269f4111a2024f567753214c363b23421dee005c5ba5b3c" exitCode=0
Feb 02 22:46:14 crc kubenswrapper[4789]: I0202 22:46:14.806131 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5c92-account-create-update-4shd8" event={"ID":"c7308363-74f5-4ae7-8695-3017babea57c","Type":"ContainerDied","Data":"543c6c7d8dc94243f269f4111a2024f567753214c363b23421dee005c5ba5b3c"}
Feb 02 22:46:16 crc kubenswrapper[4789]: I0202 22:46:16.376205 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5c92-account-create-update-4shd8"
Feb 02 22:46:16 crc kubenswrapper[4789]: I0202 22:46:16.389386 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-r4kr2"
Feb 02 22:46:16 crc kubenswrapper[4789]: I0202 22:46:16.501900 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7308363-74f5-4ae7-8695-3017babea57c-operator-scripts\") pod \"c7308363-74f5-4ae7-8695-3017babea57c\" (UID: \"c7308363-74f5-4ae7-8695-3017babea57c\") "
Feb 02 22:46:16 crc kubenswrapper[4789]: I0202 22:46:16.501996 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a93cc873-6b0b-4eb8-be87-acda79c160af-operator-scripts\") pod \"a93cc873-6b0b-4eb8-be87-acda79c160af\" (UID: \"a93cc873-6b0b-4eb8-be87-acda79c160af\") "
Feb 02 22:46:16 crc kubenswrapper[4789]: I0202 22:46:16.502095 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7lq9\" (UniqueName: \"kubernetes.io/projected/a93cc873-6b0b-4eb8-be87-acda79c160af-kube-api-access-n7lq9\") pod \"a93cc873-6b0b-4eb8-be87-acda79c160af\" (UID: \"a93cc873-6b0b-4eb8-be87-acda79c160af\") "
Feb 02 22:46:16 crc kubenswrapper[4789]: I0202 22:46:16.502129 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2x9f\" (UniqueName: \"kubernetes.io/projected/c7308363-74f5-4ae7-8695-3017babea57c-kube-api-access-k2x9f\") pod \"c7308363-74f5-4ae7-8695-3017babea57c\" (UID: \"c7308363-74f5-4ae7-8695-3017babea57c\") "
Feb 02 22:46:16 crc kubenswrapper[4789]: I0202 22:46:16.503281 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7308363-74f5-4ae7-8695-3017babea57c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c7308363-74f5-4ae7-8695-3017babea57c" (UID: "c7308363-74f5-4ae7-8695-3017babea57c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 22:46:16 crc kubenswrapper[4789]: I0202 22:46:16.503551 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a93cc873-6b0b-4eb8-be87-acda79c160af-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a93cc873-6b0b-4eb8-be87-acda79c160af" (UID: "a93cc873-6b0b-4eb8-be87-acda79c160af"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 22:46:16 crc kubenswrapper[4789]: I0202 22:46:16.509642 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a93cc873-6b0b-4eb8-be87-acda79c160af-kube-api-access-n7lq9" (OuterVolumeSpecName: "kube-api-access-n7lq9") pod "a93cc873-6b0b-4eb8-be87-acda79c160af" (UID: "a93cc873-6b0b-4eb8-be87-acda79c160af"). InnerVolumeSpecName "kube-api-access-n7lq9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 22:46:16 crc kubenswrapper[4789]: I0202 22:46:16.510085 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7308363-74f5-4ae7-8695-3017babea57c-kube-api-access-k2x9f" (OuterVolumeSpecName: "kube-api-access-k2x9f") pod "c7308363-74f5-4ae7-8695-3017babea57c" (UID: "c7308363-74f5-4ae7-8695-3017babea57c"). InnerVolumeSpecName "kube-api-access-k2x9f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 22:46:16 crc kubenswrapper[4789]: I0202 22:46:16.604360 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7308363-74f5-4ae7-8695-3017babea57c-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 22:46:16 crc kubenswrapper[4789]: I0202 22:46:16.604718 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a93cc873-6b0b-4eb8-be87-acda79c160af-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 22:46:16 crc kubenswrapper[4789]: I0202 22:46:16.604742 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7lq9\" (UniqueName: \"kubernetes.io/projected/a93cc873-6b0b-4eb8-be87-acda79c160af-kube-api-access-n7lq9\") on node \"crc\" DevicePath \"\""
Feb 02 22:46:16 crc kubenswrapper[4789]: I0202 22:46:16.604765 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2x9f\" (UniqueName: \"kubernetes.io/projected/c7308363-74f5-4ae7-8695-3017babea57c-kube-api-access-k2x9f\") on node \"crc\" DevicePath \"\""
Feb 02 22:46:16 crc kubenswrapper[4789]: I0202 22:46:16.848349 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-r4kr2" event={"ID":"a93cc873-6b0b-4eb8-be87-acda79c160af","Type":"ContainerDied","Data":"679bc76267e9defbbc59f56b3bf58d669edfffa0c3f8e578dbbc1a3ff79fdab7"}
Feb 02 22:46:16 crc kubenswrapper[4789]: I0202 22:46:16.848409 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="679bc76267e9defbbc59f56b3bf58d669edfffa0c3f8e578dbbc1a3ff79fdab7"
Feb 02 22:46:16 crc kubenswrapper[4789]: I0202 22:46:16.848508 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-r4kr2"
Feb 02 22:46:16 crc kubenswrapper[4789]: I0202 22:46:16.851316 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5c92-account-create-update-4shd8" event={"ID":"c7308363-74f5-4ae7-8695-3017babea57c","Type":"ContainerDied","Data":"6cf1488a0115f01b8978e6fcfab94a5f899063591bc9e62397cb3beb0c7ac01f"}
Feb 02 22:46:16 crc kubenswrapper[4789]: I0202 22:46:16.851380 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cf1488a0115f01b8978e6fcfab94a5f899063591bc9e62397cb3beb0c7ac01f"
Feb 02 22:46:16 crc kubenswrapper[4789]: I0202 22:46:16.851484 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5c92-account-create-update-4shd8"
Feb 02 22:46:18 crc kubenswrapper[4789]: I0202 22:46:18.394179 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-vhtsg"]
Feb 02 22:46:18 crc kubenswrapper[4789]: E0202 22:46:18.396350 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7308363-74f5-4ae7-8695-3017babea57c" containerName="mariadb-account-create-update"
Feb 02 22:46:18 crc kubenswrapper[4789]: I0202 22:46:18.396523 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7308363-74f5-4ae7-8695-3017babea57c" containerName="mariadb-account-create-update"
Feb 02 22:46:18 crc kubenswrapper[4789]: E0202 22:46:18.396702 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a93cc873-6b0b-4eb8-be87-acda79c160af" containerName="mariadb-database-create"
Feb 02 22:46:18 crc kubenswrapper[4789]: I0202 22:46:18.396831 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="a93cc873-6b0b-4eb8-be87-acda79c160af" containerName="mariadb-database-create"
Feb 02 22:46:18 crc kubenswrapper[4789]: I0202 22:46:18.397234 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="a93cc873-6b0b-4eb8-be87-acda79c160af" containerName="mariadb-database-create"
Feb 02 22:46:18 crc kubenswrapper[4789]: I0202 22:46:18.397387 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7308363-74f5-4ae7-8695-3017babea57c" containerName="mariadb-account-create-update"
Feb 02 22:46:18 crc kubenswrapper[4789]: I0202 22:46:18.398433 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-vhtsg"
Feb 02 22:46:18 crc kubenswrapper[4789]: I0202 22:46:18.402497 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-88j9r"
Feb 02 22:46:18 crc kubenswrapper[4789]: I0202 22:46:18.403066 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 02 22:46:18 crc kubenswrapper[4789]: I0202 22:46:18.407446 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 02 22:46:18 crc kubenswrapper[4789]: I0202 22:46:18.407730 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 02 22:46:18 crc kubenswrapper[4789]: I0202 22:46:18.410850 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-vhtsg"]
Feb 02 22:46:18 crc kubenswrapper[4789]: I0202 22:46:18.542666 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cd23310-ee19-467d-8534-f7daed0233e8-config-data\") pod \"keystone-db-sync-vhtsg\" (UID: \"4cd23310-ee19-467d-8534-f7daed0233e8\") " pod="openstack/keystone-db-sync-vhtsg"
Feb 02 22:46:18 crc kubenswrapper[4789]: I0202 22:46:18.542934 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7k9z\" (UniqueName: \"kubernetes.io/projected/4cd23310-ee19-467d-8534-f7daed0233e8-kube-api-access-x7k9z\") pod \"keystone-db-sync-vhtsg\" (UID: \"4cd23310-ee19-467d-8534-f7daed0233e8\") " pod="openstack/keystone-db-sync-vhtsg"
Feb 02 22:46:18 crc kubenswrapper[4789]: I0202 22:46:18.543007 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cd23310-ee19-467d-8534-f7daed0233e8-combined-ca-bundle\") pod \"keystone-db-sync-vhtsg\" (UID: \"4cd23310-ee19-467d-8534-f7daed0233e8\") " pod="openstack/keystone-db-sync-vhtsg"
Feb 02 22:46:18 crc kubenswrapper[4789]: I0202 22:46:18.644870 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cd23310-ee19-467d-8534-f7daed0233e8-config-data\") pod \"keystone-db-sync-vhtsg\" (UID: \"4cd23310-ee19-467d-8534-f7daed0233e8\") " pod="openstack/keystone-db-sync-vhtsg"
Feb 02 22:46:18 crc kubenswrapper[4789]: I0202 22:46:18.645043 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7k9z\" (UniqueName: \"kubernetes.io/projected/4cd23310-ee19-467d-8534-f7daed0233e8-kube-api-access-x7k9z\") pod \"keystone-db-sync-vhtsg\" (UID: \"4cd23310-ee19-467d-8534-f7daed0233e8\") " pod="openstack/keystone-db-sync-vhtsg"
Feb 02 22:46:18 crc kubenswrapper[4789]: I0202 22:46:18.645086 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cd23310-ee19-467d-8534-f7daed0233e8-combined-ca-bundle\") pod \"keystone-db-sync-vhtsg\" (UID: \"4cd23310-ee19-467d-8534-f7daed0233e8\") " pod="openstack/keystone-db-sync-vhtsg"
Feb 02 22:46:18 crc kubenswrapper[4789]: I0202 22:46:18.652681 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cd23310-ee19-467d-8534-f7daed0233e8-config-data\") pod \"keystone-db-sync-vhtsg\" (UID: \"4cd23310-ee19-467d-8534-f7daed0233e8\") " pod="openstack/keystone-db-sync-vhtsg"
Feb 02 22:46:18 crc kubenswrapper[4789]: I0202 22:46:18.655510 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cd23310-ee19-467d-8534-f7daed0233e8-combined-ca-bundle\") pod \"keystone-db-sync-vhtsg\" (UID: \"4cd23310-ee19-467d-8534-f7daed0233e8\") " pod="openstack/keystone-db-sync-vhtsg"
Feb 02 22:46:18 crc kubenswrapper[4789]: I0202 22:46:18.672493 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7k9z\" (UniqueName: \"kubernetes.io/projected/4cd23310-ee19-467d-8534-f7daed0233e8-kube-api-access-x7k9z\") pod \"keystone-db-sync-vhtsg\" (UID: \"4cd23310-ee19-467d-8534-f7daed0233e8\") " pod="openstack/keystone-db-sync-vhtsg"
Feb 02 22:46:18 crc kubenswrapper[4789]: I0202 22:46:18.721136 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-vhtsg"
Feb 02 22:46:19 crc kubenswrapper[4789]: I0202 22:46:19.236271 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-vhtsg"]
Feb 02 22:46:19 crc kubenswrapper[4789]: W0202 22:46:19.237853 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4cd23310_ee19_467d_8534_f7daed0233e8.slice/crio-6aa2a9768ffe25cc3a59c1b54eea0301e7b8a93b981ea628e992ab15608be460 WatchSource:0}: Error finding container 6aa2a9768ffe25cc3a59c1b54eea0301e7b8a93b981ea628e992ab15608be460: Status 404 returned error can't find the container with id 6aa2a9768ffe25cc3a59c1b54eea0301e7b8a93b981ea628e992ab15608be460
Feb 02 22:46:19 crc kubenswrapper[4789]: I0202 22:46:19.879622 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vhtsg" event={"ID":"4cd23310-ee19-467d-8534-f7daed0233e8","Type":"ContainerStarted","Data":"ab833ccef2ddf75486ffa4549a048dafcf839387591d4dbc5f2cbab4d2c4d7cd"}
Feb 02 22:46:19 crc kubenswrapper[4789]: I0202 22:46:19.880006 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vhtsg" event={"ID":"4cd23310-ee19-467d-8534-f7daed0233e8","Type":"ContainerStarted","Data":"6aa2a9768ffe25cc3a59c1b54eea0301e7b8a93b981ea628e992ab15608be460"}
Feb 02 22:46:19 crc kubenswrapper[4789]: I0202 22:46:19.913460 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-vhtsg" podStartSLOduration=1.9134431269999999 podStartE2EDuration="1.913443127s" podCreationTimestamp="2026-02-02 22:46:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 22:46:19.907638043 +0000 UTC m=+5200.202663062" watchObservedRunningTime="2026-02-02 22:46:19.913443127 +0000 UTC m=+5200.208468146"
Feb 02 22:46:21 crc kubenswrapper[4789]: I0202 22:46:21.902449 4789 generic.go:334] "Generic (PLEG): container finished" podID="4cd23310-ee19-467d-8534-f7daed0233e8" containerID="ab833ccef2ddf75486ffa4549a048dafcf839387591d4dbc5f2cbab4d2c4d7cd" exitCode=0
Feb 02 22:46:21 crc kubenswrapper[4789]: I0202 22:46:21.902557 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vhtsg" event={"ID":"4cd23310-ee19-467d-8534-f7daed0233e8","Type":"ContainerDied","Data":"ab833ccef2ddf75486ffa4549a048dafcf839387591d4dbc5f2cbab4d2c4d7cd"}
Feb 02 22:46:23 crc kubenswrapper[4789]: I0202 22:46:23.300448 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-vhtsg"
Feb 02 22:46:23 crc kubenswrapper[4789]: I0202 22:46:23.454898 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cd23310-ee19-467d-8534-f7daed0233e8-combined-ca-bundle\") pod \"4cd23310-ee19-467d-8534-f7daed0233e8\" (UID: \"4cd23310-ee19-467d-8534-f7daed0233e8\") "
Feb 02 22:46:23 crc kubenswrapper[4789]: I0202 22:46:23.455025 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cd23310-ee19-467d-8534-f7daed0233e8-config-data\") pod \"4cd23310-ee19-467d-8534-f7daed0233e8\" (UID: \"4cd23310-ee19-467d-8534-f7daed0233e8\") "
Feb 02 22:46:23 crc kubenswrapper[4789]: I0202 22:46:23.455285 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7k9z\" (UniqueName: \"kubernetes.io/projected/4cd23310-ee19-467d-8534-f7daed0233e8-kube-api-access-x7k9z\") pod \"4cd23310-ee19-467d-8534-f7daed0233e8\" (UID: \"4cd23310-ee19-467d-8534-f7daed0233e8\") "
Feb 02 22:46:23 crc kubenswrapper[4789]: I0202 22:46:23.465375 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cd23310-ee19-467d-8534-f7daed0233e8-kube-api-access-x7k9z" (OuterVolumeSpecName: "kube-api-access-x7k9z") pod "4cd23310-ee19-467d-8534-f7daed0233e8" (UID: "4cd23310-ee19-467d-8534-f7daed0233e8"). InnerVolumeSpecName "kube-api-access-x7k9z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 22:46:23 crc kubenswrapper[4789]: I0202 22:46:23.501155 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cd23310-ee19-467d-8534-f7daed0233e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4cd23310-ee19-467d-8534-f7daed0233e8" (UID: "4cd23310-ee19-467d-8534-f7daed0233e8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 22:46:23 crc kubenswrapper[4789]: I0202 22:46:23.521986 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cd23310-ee19-467d-8534-f7daed0233e8-config-data" (OuterVolumeSpecName: "config-data") pod "4cd23310-ee19-467d-8534-f7daed0233e8" (UID: "4cd23310-ee19-467d-8534-f7daed0233e8"). InnerVolumeSpecName "config-data".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 22:46:23 crc kubenswrapper[4789]: I0202 22:46:23.558064 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cd23310-ee19-467d-8534-f7daed0233e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 22:46:23 crc kubenswrapper[4789]: I0202 22:46:23.558124 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cd23310-ee19-467d-8534-f7daed0233e8-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 22:46:23 crc kubenswrapper[4789]: I0202 22:46:23.558150 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7k9z\" (UniqueName: \"kubernetes.io/projected/4cd23310-ee19-467d-8534-f7daed0233e8-kube-api-access-x7k9z\") on node \"crc\" DevicePath \"\"" Feb 02 22:46:23 crc kubenswrapper[4789]: I0202 22:46:23.927035 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vhtsg" event={"ID":"4cd23310-ee19-467d-8534-f7daed0233e8","Type":"ContainerDied","Data":"6aa2a9768ffe25cc3a59c1b54eea0301e7b8a93b981ea628e992ab15608be460"} Feb 02 22:46:23 crc kubenswrapper[4789]: I0202 22:46:23.927087 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6aa2a9768ffe25cc3a59c1b54eea0301e7b8a93b981ea628e992ab15608be460" Feb 02 22:46:23 crc kubenswrapper[4789]: I0202 22:46:23.927132 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-vhtsg" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.216451 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8596d5dcd9-r2chc"] Feb 02 22:46:24 crc kubenswrapper[4789]: E0202 22:46:24.217027 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cd23310-ee19-467d-8534-f7daed0233e8" containerName="keystone-db-sync" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.217047 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cd23310-ee19-467d-8534-f7daed0233e8" containerName="keystone-db-sync" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.217324 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cd23310-ee19-467d-8534-f7daed0233e8" containerName="keystone-db-sync" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.218714 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8596d5dcd9-r2chc" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.233319 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8596d5dcd9-r2chc"] Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.248530 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-jvdln"] Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.249489 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jvdln" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.251491 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.251903 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.252066 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.252172 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.252322 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-88j9r" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.265571 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jvdln"] Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.385183 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bdaabe7-f89b-45ae-a45e-04ae95f66326-combined-ca-bundle\") pod \"keystone-bootstrap-jvdln\" (UID: \"3bdaabe7-f89b-45ae-a45e-04ae95f66326\") " pod="openstack/keystone-bootstrap-jvdln" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.385242 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bb3a48bd-007e-4640-bf96-9ea8b39a12e2-ovsdbserver-nb\") pod \"dnsmasq-dns-8596d5dcd9-r2chc\" (UID: \"bb3a48bd-007e-4640-bf96-9ea8b39a12e2\") " pod="openstack/dnsmasq-dns-8596d5dcd9-r2chc" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.385267 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdhmd\" (UniqueName: \"kubernetes.io/projected/bb3a48bd-007e-4640-bf96-9ea8b39a12e2-kube-api-access-cdhmd\") pod \"dnsmasq-dns-8596d5dcd9-r2chc\" (UID: \"bb3a48bd-007e-4640-bf96-9ea8b39a12e2\") " pod="openstack/dnsmasq-dns-8596d5dcd9-r2chc" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.385290 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb3a48bd-007e-4640-bf96-9ea8b39a12e2-config\") pod \"dnsmasq-dns-8596d5dcd9-r2chc\" (UID: \"bb3a48bd-007e-4640-bf96-9ea8b39a12e2\") " pod="openstack/dnsmasq-dns-8596d5dcd9-r2chc" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.385308 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bdaabe7-f89b-45ae-a45e-04ae95f66326-scripts\") pod \"keystone-bootstrap-jvdln\" (UID: \"3bdaabe7-f89b-45ae-a45e-04ae95f66326\") " pod="openstack/keystone-bootstrap-jvdln" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.385333 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb3a48bd-007e-4640-bf96-9ea8b39a12e2-ovsdbserver-sb\") pod \"dnsmasq-dns-8596d5dcd9-r2chc\" (UID: \"bb3a48bd-007e-4640-bf96-9ea8b39a12e2\") " pod="openstack/dnsmasq-dns-8596d5dcd9-r2chc" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.385351 4789 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj2j7\" (UniqueName: \"kubernetes.io/projected/3bdaabe7-f89b-45ae-a45e-04ae95f66326-kube-api-access-qj2j7\") pod \"keystone-bootstrap-jvdln\" (UID: \"3bdaabe7-f89b-45ae-a45e-04ae95f66326\") " pod="openstack/keystone-bootstrap-jvdln" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.385373 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3bdaabe7-f89b-45ae-a45e-04ae95f66326-credential-keys\") pod \"keystone-bootstrap-jvdln\" (UID: \"3bdaabe7-f89b-45ae-a45e-04ae95f66326\") " pod="openstack/keystone-bootstrap-jvdln" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.385390 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb3a48bd-007e-4640-bf96-9ea8b39a12e2-dns-svc\") pod \"dnsmasq-dns-8596d5dcd9-r2chc\" (UID: \"bb3a48bd-007e-4640-bf96-9ea8b39a12e2\") " pod="openstack/dnsmasq-dns-8596d5dcd9-r2chc" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.385426 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bdaabe7-f89b-45ae-a45e-04ae95f66326-config-data\") pod \"keystone-bootstrap-jvdln\" (UID: \"3bdaabe7-f89b-45ae-a45e-04ae95f66326\") " pod="openstack/keystone-bootstrap-jvdln" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.385454 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3bdaabe7-f89b-45ae-a45e-04ae95f66326-fernet-keys\") pod \"keystone-bootstrap-jvdln\" (UID: \"3bdaabe7-f89b-45ae-a45e-04ae95f66326\") " pod="openstack/keystone-bootstrap-jvdln" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.488311 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bdaabe7-f89b-45ae-a45e-04ae95f66326-combined-ca-bundle\") pod \"keystone-bootstrap-jvdln\" (UID: \"3bdaabe7-f89b-45ae-a45e-04ae95f66326\") " pod="openstack/keystone-bootstrap-jvdln" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.488354 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bb3a48bd-007e-4640-bf96-9ea8b39a12e2-ovsdbserver-nb\") pod \"dnsmasq-dns-8596d5dcd9-r2chc\" (UID: \"bb3a48bd-007e-4640-bf96-9ea8b39a12e2\") " pod="openstack/dnsmasq-dns-8596d5dcd9-r2chc" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.488371 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdhmd\" (UniqueName: \"kubernetes.io/projected/bb3a48bd-007e-4640-bf96-9ea8b39a12e2-kube-api-access-cdhmd\") pod \"dnsmasq-dns-8596d5dcd9-r2chc\" (UID: \"bb3a48bd-007e-4640-bf96-9ea8b39a12e2\") " pod="openstack/dnsmasq-dns-8596d5dcd9-r2chc" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.488390 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb3a48bd-007e-4640-bf96-9ea8b39a12e2-config\") pod \"dnsmasq-dns-8596d5dcd9-r2chc\" (UID: \"bb3a48bd-007e-4640-bf96-9ea8b39a12e2\") " pod="openstack/dnsmasq-dns-8596d5dcd9-r2chc" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.488406 4789 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bdaabe7-f89b-45ae-a45e-04ae95f66326-scripts\") pod \"keystone-bootstrap-jvdln\" (UID: \"3bdaabe7-f89b-45ae-a45e-04ae95f66326\") " pod="openstack/keystone-bootstrap-jvdln" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.488432 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb3a48bd-007e-4640-bf96-9ea8b39a12e2-ovsdbserver-sb\") pod \"dnsmasq-dns-8596d5dcd9-r2chc\" (UID: \"bb3a48bd-007e-4640-bf96-9ea8b39a12e2\") " pod="openstack/dnsmasq-dns-8596d5dcd9-r2chc" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.488447 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj2j7\" (UniqueName: \"kubernetes.io/projected/3bdaabe7-f89b-45ae-a45e-04ae95f66326-kube-api-access-qj2j7\") pod \"keystone-bootstrap-jvdln\" (UID: \"3bdaabe7-f89b-45ae-a45e-04ae95f66326\") " pod="openstack/keystone-bootstrap-jvdln" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.488468 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3bdaabe7-f89b-45ae-a45e-04ae95f66326-credential-keys\") pod \"keystone-bootstrap-jvdln\" (UID: \"3bdaabe7-f89b-45ae-a45e-04ae95f66326\") " pod="openstack/keystone-bootstrap-jvdln" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.488487 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb3a48bd-007e-4640-bf96-9ea8b39a12e2-dns-svc\") pod \"dnsmasq-dns-8596d5dcd9-r2chc\" (UID: \"bb3a48bd-007e-4640-bf96-9ea8b39a12e2\") " pod="openstack/dnsmasq-dns-8596d5dcd9-r2chc" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.488524 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bdaabe7-f89b-45ae-a45e-04ae95f66326-config-data\") pod \"keystone-bootstrap-jvdln\" (UID: \"3bdaabe7-f89b-45ae-a45e-04ae95f66326\") " pod="openstack/keystone-bootstrap-jvdln" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.488555 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3bdaabe7-f89b-45ae-a45e-04ae95f66326-fernet-keys\") pod \"keystone-bootstrap-jvdln\" (UID: \"3bdaabe7-f89b-45ae-a45e-04ae95f66326\") " pod="openstack/keystone-bootstrap-jvdln" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.489507 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb3a48bd-007e-4640-bf96-9ea8b39a12e2-dns-svc\") pod \"dnsmasq-dns-8596d5dcd9-r2chc\" (UID: \"bb3a48bd-007e-4640-bf96-9ea8b39a12e2\") " pod="openstack/dnsmasq-dns-8596d5dcd9-r2chc" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.489547 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb3a48bd-007e-4640-bf96-9ea8b39a12e2-config\") pod \"dnsmasq-dns-8596d5dcd9-r2chc\" (UID: \"bb3a48bd-007e-4640-bf96-9ea8b39a12e2\") " pod="openstack/dnsmasq-dns-8596d5dcd9-r2chc" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.489553 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb3a48bd-007e-4640-bf96-9ea8b39a12e2-ovsdbserver-sb\") pod 
\"dnsmasq-dns-8596d5dcd9-r2chc\" (UID: \"bb3a48bd-007e-4640-bf96-9ea8b39a12e2\") " pod="openstack/dnsmasq-dns-8596d5dcd9-r2chc" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.490195 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bb3a48bd-007e-4640-bf96-9ea8b39a12e2-ovsdbserver-nb\") pod \"dnsmasq-dns-8596d5dcd9-r2chc\" (UID: \"bb3a48bd-007e-4640-bf96-9ea8b39a12e2\") " pod="openstack/dnsmasq-dns-8596d5dcd9-r2chc" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.492734 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bdaabe7-f89b-45ae-a45e-04ae95f66326-scripts\") pod \"keystone-bootstrap-jvdln\" (UID: \"3bdaabe7-f89b-45ae-a45e-04ae95f66326\") " pod="openstack/keystone-bootstrap-jvdln" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.493555 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3bdaabe7-f89b-45ae-a45e-04ae95f66326-fernet-keys\") pod \"keystone-bootstrap-jvdln\" (UID: \"3bdaabe7-f89b-45ae-a45e-04ae95f66326\") " pod="openstack/keystone-bootstrap-jvdln" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.493932 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3bdaabe7-f89b-45ae-a45e-04ae95f66326-credential-keys\") pod \"keystone-bootstrap-jvdln\" (UID: \"3bdaabe7-f89b-45ae-a45e-04ae95f66326\") " pod="openstack/keystone-bootstrap-jvdln" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.494205 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bdaabe7-f89b-45ae-a45e-04ae95f66326-config-data\") pod \"keystone-bootstrap-jvdln\" (UID: \"3bdaabe7-f89b-45ae-a45e-04ae95f66326\") " pod="openstack/keystone-bootstrap-jvdln" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.504130 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bdaabe7-f89b-45ae-a45e-04ae95f66326-combined-ca-bundle\") pod \"keystone-bootstrap-jvdln\" (UID: \"3bdaabe7-f89b-45ae-a45e-04ae95f66326\") " pod="openstack/keystone-bootstrap-jvdln" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.508527 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdhmd\" (UniqueName: \"kubernetes.io/projected/bb3a48bd-007e-4640-bf96-9ea8b39a12e2-kube-api-access-cdhmd\") pod \"dnsmasq-dns-8596d5dcd9-r2chc\" (UID: \"bb3a48bd-007e-4640-bf96-9ea8b39a12e2\") " pod="openstack/dnsmasq-dns-8596d5dcd9-r2chc" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.509963 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj2j7\" (UniqueName: \"kubernetes.io/projected/3bdaabe7-f89b-45ae-a45e-04ae95f66326-kube-api-access-qj2j7\") pod \"keystone-bootstrap-jvdln\" (UID: \"3bdaabe7-f89b-45ae-a45e-04ae95f66326\") " pod="openstack/keystone-bootstrap-jvdln" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.554436 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8596d5dcd9-r2chc" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.580975 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jvdln" Feb 02 22:46:24 crc kubenswrapper[4789]: I0202 22:46:24.997402 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8596d5dcd9-r2chc"] Feb 02 22:46:25 crc kubenswrapper[4789]: W0202 22:46:25.005207 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb3a48bd_007e_4640_bf96_9ea8b39a12e2.slice/crio-9198c9268d363a3fdd3879208fe6aff34468b81c24c6eba092e80470438c331c WatchSource:0}: Error finding container 9198c9268d363a3fdd3879208fe6aff34468b81c24c6eba092e80470438c331c: Status 404 returned error can't find the container with id 9198c9268d363a3fdd3879208fe6aff34468b81c24c6eba092e80470438c331c Feb 02 22:46:25 crc kubenswrapper[4789]: I0202 22:46:25.154328 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jvdln"] Feb 02 22:46:25 crc kubenswrapper[4789]: W0202 22:46:25.160128 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3bdaabe7_f89b_45ae_a45e_04ae95f66326.slice/crio-d5f75bfe1f5df8e03ae4c6cccbd3f41b4c8aea178d90d635c9325c8495674483 WatchSource:0}: Error finding container d5f75bfe1f5df8e03ae4c6cccbd3f41b4c8aea178d90d635c9325c8495674483: Status 404 returned error can't find the container with id d5f75bfe1f5df8e03ae4c6cccbd3f41b4c8aea178d90d635c9325c8495674483 Feb 02 22:46:25 crc kubenswrapper[4789]: I0202 22:46:25.947198 4789 generic.go:334] "Generic (PLEG): container finished" podID="bb3a48bd-007e-4640-bf96-9ea8b39a12e2" containerID="c20723bd39315466a5306c4b66aa57ac15d0fb3a50ebd1ba0e3cd12f3f6265e1" exitCode=0 Feb 02 22:46:25 crc kubenswrapper[4789]: I0202 22:46:25.947294 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8596d5dcd9-r2chc" event={"ID":"bb3a48bd-007e-4640-bf96-9ea8b39a12e2","Type":"ContainerDied","Data":"c20723bd39315466a5306c4b66aa57ac15d0fb3a50ebd1ba0e3cd12f3f6265e1"} Feb 02 22:46:25 crc kubenswrapper[4789]: I0202 22:46:25.947618 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8596d5dcd9-r2chc" event={"ID":"bb3a48bd-007e-4640-bf96-9ea8b39a12e2","Type":"ContainerStarted","Data":"9198c9268d363a3fdd3879208fe6aff34468b81c24c6eba092e80470438c331c"} Feb 02 22:46:25 crc kubenswrapper[4789]: I0202 22:46:25.949060 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jvdln" event={"ID":"3bdaabe7-f89b-45ae-a45e-04ae95f66326","Type":"ContainerStarted","Data":"d48fd47acca6a4fa1436167c3080fc2a2d77dc1ee242ae48714ae2b4cc0e04d0"} Feb 02 22:46:25 crc kubenswrapper[4789]: I0202 22:46:25.949108 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jvdln" event={"ID":"3bdaabe7-f89b-45ae-a45e-04ae95f66326","Type":"ContainerStarted","Data":"d5f75bfe1f5df8e03ae4c6cccbd3f41b4c8aea178d90d635c9325c8495674483"} Feb 02 22:46:26 crc kubenswrapper[4789]: I0202 22:46:26.014342 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-jvdln" podStartSLOduration=2.014318459 podStartE2EDuration="2.014318459s" podCreationTimestamp="2026-02-02 22:46:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 22:46:26.005225903 +0000 UTC m=+5206.300250982" watchObservedRunningTime="2026-02-02 22:46:26.014318459 +0000 UTC m=+5206.309343518" Feb 02 
22:46:26 crc kubenswrapper[4789]: I0202 22:46:26.963837 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8596d5dcd9-r2chc" event={"ID":"bb3a48bd-007e-4640-bf96-9ea8b39a12e2","Type":"ContainerStarted","Data":"24c4656fbbd0b3c2ab60b7004e1b2260a302c9d583946e78b17d3886e6aa1ee2"} Feb 02 22:46:27 crc kubenswrapper[4789]: I0202 22:46:27.007320 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8596d5dcd9-r2chc" podStartSLOduration=3.00728678 podStartE2EDuration="3.00728678s" podCreationTimestamp="2026-02-02 22:46:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 22:46:26.994292273 +0000 UTC m=+5207.289317322" watchObservedRunningTime="2026-02-02 22:46:27.00728678 +0000 UTC m=+5207.302311839" Feb 02 22:46:27 crc kubenswrapper[4789]: I0202 22:46:27.762371 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 02 22:46:27 crc kubenswrapper[4789]: I0202 22:46:27.971562 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8596d5dcd9-r2chc" Feb 02 22:46:28 crc kubenswrapper[4789]: I0202 22:46:28.982570 4789 generic.go:334] "Generic (PLEG): container finished" podID="3bdaabe7-f89b-45ae-a45e-04ae95f66326" containerID="d48fd47acca6a4fa1436167c3080fc2a2d77dc1ee242ae48714ae2b4cc0e04d0" exitCode=0 Feb 02 22:46:28 crc kubenswrapper[4789]: I0202 22:46:28.982632 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jvdln" event={"ID":"3bdaabe7-f89b-45ae-a45e-04ae95f66326","Type":"ContainerDied","Data":"d48fd47acca6a4fa1436167c3080fc2a2d77dc1ee242ae48714ae2b4cc0e04d0"} Feb 02 22:46:30 crc kubenswrapper[4789]: I0202 22:46:30.401433 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jvdln" Feb 02 22:46:30 crc kubenswrapper[4789]: I0202 22:46:30.505711 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bdaabe7-f89b-45ae-a45e-04ae95f66326-config-data\") pod \"3bdaabe7-f89b-45ae-a45e-04ae95f66326\" (UID: \"3bdaabe7-f89b-45ae-a45e-04ae95f66326\") " Feb 02 22:46:30 crc kubenswrapper[4789]: I0202 22:46:30.505976 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3bdaabe7-f89b-45ae-a45e-04ae95f66326-credential-keys\") pod \"3bdaabe7-f89b-45ae-a45e-04ae95f66326\" (UID: \"3bdaabe7-f89b-45ae-a45e-04ae95f66326\") " Feb 02 22:46:30 crc kubenswrapper[4789]: I0202 22:46:30.506002 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qj2j7\" (UniqueName: \"kubernetes.io/projected/3bdaabe7-f89b-45ae-a45e-04ae95f66326-kube-api-access-qj2j7\") pod \"3bdaabe7-f89b-45ae-a45e-04ae95f66326\" (UID: \"3bdaabe7-f89b-45ae-a45e-04ae95f66326\") " Feb 02 22:46:30 crc kubenswrapper[4789]: I0202 22:46:30.506099 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bdaabe7-f89b-45ae-a45e-04ae95f66326-combined-ca-bundle\") pod \"3bdaabe7-f89b-45ae-a45e-04ae95f66326\" (UID: \"3bdaabe7-f89b-45ae-a45e-04ae95f66326\") " Feb 02 22:46:30 crc kubenswrapper[4789]: I0202 22:46:30.506123 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bdaabe7-f89b-45ae-a45e-04ae95f66326-scripts\") pod \"3bdaabe7-f89b-45ae-a45e-04ae95f66326\" (UID: \"3bdaabe7-f89b-45ae-a45e-04ae95f66326\") " Feb 02 22:46:30 crc kubenswrapper[4789]: I0202 22:46:30.506153 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3bdaabe7-f89b-45ae-a45e-04ae95f66326-fernet-keys\") pod \"3bdaabe7-f89b-45ae-a45e-04ae95f66326\" (UID: \"3bdaabe7-f89b-45ae-a45e-04ae95f66326\") " Feb 02 22:46:30 crc kubenswrapper[4789]: I0202 22:46:30.511632 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bdaabe7-f89b-45ae-a45e-04ae95f66326-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "3bdaabe7-f89b-45ae-a45e-04ae95f66326" (UID: "3bdaabe7-f89b-45ae-a45e-04ae95f66326"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 22:46:30 crc kubenswrapper[4789]: I0202 22:46:30.511680 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bdaabe7-f89b-45ae-a45e-04ae95f66326-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3bdaabe7-f89b-45ae-a45e-04ae95f66326" (UID: "3bdaabe7-f89b-45ae-a45e-04ae95f66326"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 22:46:30 crc kubenswrapper[4789]: I0202 22:46:30.511982 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bdaabe7-f89b-45ae-a45e-04ae95f66326-kube-api-access-qj2j7" (OuterVolumeSpecName: "kube-api-access-qj2j7") pod "3bdaabe7-f89b-45ae-a45e-04ae95f66326" (UID: "3bdaabe7-f89b-45ae-a45e-04ae95f66326"). InnerVolumeSpecName "kube-api-access-qj2j7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 22:46:30 crc kubenswrapper[4789]: I0202 22:46:30.513100 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bdaabe7-f89b-45ae-a45e-04ae95f66326-scripts" (OuterVolumeSpecName: "scripts") pod "3bdaabe7-f89b-45ae-a45e-04ae95f66326" (UID: "3bdaabe7-f89b-45ae-a45e-04ae95f66326"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 22:46:30 crc kubenswrapper[4789]: I0202 22:46:30.533655 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bdaabe7-f89b-45ae-a45e-04ae95f66326-config-data" (OuterVolumeSpecName: "config-data") pod "3bdaabe7-f89b-45ae-a45e-04ae95f66326" (UID: "3bdaabe7-f89b-45ae-a45e-04ae95f66326"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 22:46:30 crc kubenswrapper[4789]: I0202 22:46:30.538500 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bdaabe7-f89b-45ae-a45e-04ae95f66326-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3bdaabe7-f89b-45ae-a45e-04ae95f66326" (UID: "3bdaabe7-f89b-45ae-a45e-04ae95f66326"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 22:46:30 crc kubenswrapper[4789]: I0202 22:46:30.608896 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bdaabe7-f89b-45ae-a45e-04ae95f66326-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 22:46:30 crc kubenswrapper[4789]: I0202 22:46:30.608933 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bdaabe7-f89b-45ae-a45e-04ae95f66326-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 22:46:30 crc kubenswrapper[4789]: I0202 22:46:30.608950 4789 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3bdaabe7-f89b-45ae-a45e-04ae95f66326-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 22:46:30 crc kubenswrapper[4789]: I0202 22:46:30.608964 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bdaabe7-f89b-45ae-a45e-04ae95f66326-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 22:46:30 crc kubenswrapper[4789]: I0202 22:46:30.608978 4789 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3bdaabe7-f89b-45ae-a45e-04ae95f66326-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 02 22:46:30 crc kubenswrapper[4789]: I0202 22:46:30.608992 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qj2j7\" (UniqueName: \"kubernetes.io/projected/3bdaabe7-f89b-45ae-a45e-04ae95f66326-kube-api-access-qj2j7\") on node \"crc\" DevicePath \"\"" Feb 02 22:46:31 crc kubenswrapper[4789]: I0202 22:46:31.003560 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jvdln" event={"ID":"3bdaabe7-f89b-45ae-a45e-04ae95f66326","Type":"ContainerDied","Data":"d5f75bfe1f5df8e03ae4c6cccbd3f41b4c8aea178d90d635c9325c8495674483"} Feb 02 22:46:31 crc kubenswrapper[4789]: I0202 22:46:31.003623 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5f75bfe1f5df8e03ae4c6cccbd3f41b4c8aea178d90d635c9325c8495674483" Feb 02 22:46:31 crc kubenswrapper[4789]: I0202 22:46:31.003637 4789 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jvdln" Feb 02 22:46:31 crc kubenswrapper[4789]: I0202 22:46:31.104500 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-jvdln"] Feb 02 22:46:31 crc kubenswrapper[4789]: I0202 22:46:31.118829 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-jvdln"] Feb 02 22:46:31 crc kubenswrapper[4789]: I0202 22:46:31.198789 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-frq2r"] Feb 02 22:46:31 crc kubenswrapper[4789]: E0202 22:46:31.199162 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bdaabe7-f89b-45ae-a45e-04ae95f66326" containerName="keystone-bootstrap" Feb 02 22:46:31 crc kubenswrapper[4789]: I0202 22:46:31.199186 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bdaabe7-f89b-45ae-a45e-04ae95f66326" containerName="keystone-bootstrap" Feb 02 22:46:31 crc kubenswrapper[4789]: I0202 22:46:31.199406 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bdaabe7-f89b-45ae-a45e-04ae95f66326" containerName="keystone-bootstrap" Feb 02 22:46:31 crc kubenswrapper[4789]: I0202 22:46:31.200232 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-frq2r" Feb 02 22:46:31 crc kubenswrapper[4789]: I0202 22:46:31.202131 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 22:46:31 crc kubenswrapper[4789]: I0202 22:46:31.202164 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 22:46:31 crc kubenswrapper[4789]: I0202 22:46:31.202988 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-88j9r" Feb 02 22:46:31 crc kubenswrapper[4789]: I0202 22:46:31.202993 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 22:46:31 crc kubenswrapper[4789]: I0202 22:46:31.203184 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 02 22:46:31 crc kubenswrapper[4789]: I0202 22:46:31.222284 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-frq2r"] Feb 02 22:46:31 crc kubenswrapper[4789]: I0202 22:46:31.321021 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/52d3c3df-2d71-4415-ae65-7301f4157711-fernet-keys\") pod \"keystone-bootstrap-frq2r\" (UID: \"52d3c3df-2d71-4415-ae65-7301f4157711\") " pod="openstack/keystone-bootstrap-frq2r" Feb 02 22:46:31 crc kubenswrapper[4789]: I0202 22:46:31.321080 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52d3c3df-2d71-4415-ae65-7301f4157711-scripts\") pod \"keystone-bootstrap-frq2r\" (UID: \"52d3c3df-2d71-4415-ae65-7301f4157711\") " pod="openstack/keystone-bootstrap-frq2r" Feb 02 22:46:31 crc kubenswrapper[4789]: I0202 22:46:31.321103 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/52d3c3df-2d71-4415-ae65-7301f4157711-credential-keys\") pod \"keystone-bootstrap-frq2r\" (UID: \"52d3c3df-2d71-4415-ae65-7301f4157711\") " pod="openstack/keystone-bootstrap-frq2r" Feb 02 22:46:31 crc kubenswrapper[4789]: I0202 
22:46:31.321248 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52d3c3df-2d71-4415-ae65-7301f4157711-combined-ca-bundle\") pod \"keystone-bootstrap-frq2r\" (UID: \"52d3c3df-2d71-4415-ae65-7301f4157711\") " pod="openstack/keystone-bootstrap-frq2r" Feb 02 22:46:31 crc kubenswrapper[4789]: I0202 22:46:31.321488 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52d3c3df-2d71-4415-ae65-7301f4157711-config-data\") pod \"keystone-bootstrap-frq2r\" (UID: \"52d3c3df-2d71-4415-ae65-7301f4157711\") " pod="openstack/keystone-bootstrap-frq2r" Feb 02 22:46:31 crc kubenswrapper[4789]: I0202 22:46:31.321749 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gwxj\" (UniqueName: \"kubernetes.io/projected/52d3c3df-2d71-4415-ae65-7301f4157711-kube-api-access-4gwxj\") pod \"keystone-bootstrap-frq2r\" (UID: \"52d3c3df-2d71-4415-ae65-7301f4157711\") " pod="openstack/keystone-bootstrap-frq2r" Feb 02 22:46:31 crc kubenswrapper[4789]: I0202 22:46:31.423102 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52d3c3df-2d71-4415-ae65-7301f4157711-config-data\") pod \"keystone-bootstrap-frq2r\" (UID: \"52d3c3df-2d71-4415-ae65-7301f4157711\") " pod="openstack/keystone-bootstrap-frq2r" Feb 02 22:46:31 crc kubenswrapper[4789]: I0202 22:46:31.423177 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gwxj\" (UniqueName: \"kubernetes.io/projected/52d3c3df-2d71-4415-ae65-7301f4157711-kube-api-access-4gwxj\") pod \"keystone-bootstrap-frq2r\" (UID: \"52d3c3df-2d71-4415-ae65-7301f4157711\") " pod="openstack/keystone-bootstrap-frq2r" Feb 02 22:46:31 crc kubenswrapper[4789]: I0202 22:46:31.423218 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/52d3c3df-2d71-4415-ae65-7301f4157711-fernet-keys\") pod \"keystone-bootstrap-frq2r\" (UID: \"52d3c3df-2d71-4415-ae65-7301f4157711\") " pod="openstack/keystone-bootstrap-frq2r" Feb 02 22:46:31 crc kubenswrapper[4789]: I0202 22:46:31.423241 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52d3c3df-2d71-4415-ae65-7301f4157711-scripts\") pod \"keystone-bootstrap-frq2r\" (UID: \"52d3c3df-2d71-4415-ae65-7301f4157711\") " pod="openstack/keystone-bootstrap-frq2r" Feb 02 22:46:31 crc kubenswrapper[4789]: I0202 22:46:31.423260 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/52d3c3df-2d71-4415-ae65-7301f4157711-credential-keys\") pod \"keystone-bootstrap-frq2r\" (UID: \"52d3c3df-2d71-4415-ae65-7301f4157711\") " pod="openstack/keystone-bootstrap-frq2r" Feb 02 22:46:31 crc kubenswrapper[4789]: I0202 22:46:31.423297 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52d3c3df-2d71-4415-ae65-7301f4157711-combined-ca-bundle\") pod \"keystone-bootstrap-frq2r\" (UID: \"52d3c3df-2d71-4415-ae65-7301f4157711\") " pod="openstack/keystone-bootstrap-frq2r" Feb 02 22:46:31 crc kubenswrapper[4789]: I0202 22:46:31.428168 4789 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52d3c3df-2d71-4415-ae65-7301f4157711-scripts\") pod \"keystone-bootstrap-frq2r\" (UID: \"52d3c3df-2d71-4415-ae65-7301f4157711\") " pod="openstack/keystone-bootstrap-frq2r" Feb 02 22:46:31 crc kubenswrapper[4789]: I0202 22:46:31.428268 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/52d3c3df-2d71-4415-ae65-7301f4157711-credential-keys\") pod \"keystone-bootstrap-frq2r\" (UID: \"52d3c3df-2d71-4415-ae65-7301f4157711\") " pod="openstack/keystone-bootstrap-frq2r" Feb 02 22:46:31 crc kubenswrapper[4789]: I0202 22:46:31.435200 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52d3c3df-2d71-4415-ae65-7301f4157711-combined-ca-bundle\") pod \"keystone-bootstrap-frq2r\" (UID: \"52d3c3df-2d71-4415-ae65-7301f4157711\") " pod="openstack/keystone-bootstrap-frq2r" Feb 02 22:46:31 crc kubenswrapper[4789]: I0202 22:46:31.436217 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/52d3c3df-2d71-4415-ae65-7301f4157711-fernet-keys\") pod \"keystone-bootstrap-frq2r\" (UID: \"52d3c3df-2d71-4415-ae65-7301f4157711\") " pod="openstack/keystone-bootstrap-frq2r" Feb 02 22:46:31 crc kubenswrapper[4789]: I0202 22:46:31.447438 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52d3c3df-2d71-4415-ae65-7301f4157711-config-data\") pod \"keystone-bootstrap-frq2r\" (UID: \"52d3c3df-2d71-4415-ae65-7301f4157711\") " pod="openstack/keystone-bootstrap-frq2r" Feb 02 22:46:31 crc kubenswrapper[4789]: I0202 22:46:31.453105 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gwxj\" (UniqueName: \"kubernetes.io/projected/52d3c3df-2d71-4415-ae65-7301f4157711-kube-api-access-4gwxj\") pod \"keystone-bootstrap-frq2r\" (UID: \"52d3c3df-2d71-4415-ae65-7301f4157711\") " pod="openstack/keystone-bootstrap-frq2r" Feb 02 22:46:31 crc kubenswrapper[4789]: I0202 22:46:31.587982 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-frq2r" Feb 02 22:46:32 crc kubenswrapper[4789]: I0202 22:46:32.061023 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-frq2r"] Feb 02 22:46:32 crc kubenswrapper[4789]: I0202 22:46:32.435337 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bdaabe7-f89b-45ae-a45e-04ae95f66326" path="/var/lib/kubelet/pods/3bdaabe7-f89b-45ae-a45e-04ae95f66326/volumes" Feb 02 22:46:33 crc kubenswrapper[4789]: I0202 22:46:33.021830 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-frq2r" event={"ID":"52d3c3df-2d71-4415-ae65-7301f4157711","Type":"ContainerStarted","Data":"a9a5aa857d486c42197e9374cbbdf039cba49ca448c89b9de1b987c2bda95389"} Feb 02 22:46:33 crc kubenswrapper[4789]: I0202 22:46:33.021871 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-frq2r" event={"ID":"52d3c3df-2d71-4415-ae65-7301f4157711","Type":"ContainerStarted","Data":"fcc6f8d94e7f0d486169e42df2e5cd2e8f1327f7b27f79ab597270e17a2e8223"} Feb 02 22:46:33 crc kubenswrapper[4789]: I0202 22:46:33.049836 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-frq2r" podStartSLOduration=2.049647212 podStartE2EDuration="2.049647212s" podCreationTimestamp="2026-02-02 22:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 22:46:33.042288425 +0000 UTC m=+5213.337313444" watchObservedRunningTime="2026-02-02 22:46:33.049647212 +0000 UTC m=+5213.344672251" Feb 02 22:46:34 crc kubenswrapper[4789]: I0202 22:46:34.556826 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8596d5dcd9-r2chc" Feb 02 22:46:34 crc kubenswrapper[4789]: I0202 22:46:34.653724 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-667d7cd957-rgfp8"] Feb 02 22:46:34 crc kubenswrapper[4789]: I0202 22:46:34.653993 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-667d7cd957-rgfp8" podUID="40f1a92c-8d4d-43ec-b1a6-ad811716cb87" containerName="dnsmasq-dns" containerID="cri-o://9c5e1970b7c142ebcc7f4e9e9dde7ae7fc93e4afcce825534defc74fc73b0361" gracePeriod=10 Feb 02 22:46:35 crc kubenswrapper[4789]: I0202 22:46:35.042907 4789 generic.go:334] "Generic (PLEG): container finished" podID="52d3c3df-2d71-4415-ae65-7301f4157711" containerID="a9a5aa857d486c42197e9374cbbdf039cba49ca448c89b9de1b987c2bda95389" exitCode=0 Feb 02 22:46:35 crc kubenswrapper[4789]: I0202 22:46:35.043018 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-frq2r" event={"ID":"52d3c3df-2d71-4415-ae65-7301f4157711","Type":"ContainerDied","Data":"a9a5aa857d486c42197e9374cbbdf039cba49ca448c89b9de1b987c2bda95389"} Feb 02 22:46:35 crc kubenswrapper[4789]: I0202 22:46:35.046462 4789 generic.go:334] "Generic (PLEG): container finished" podID="40f1a92c-8d4d-43ec-b1a6-ad811716cb87" containerID="9c5e1970b7c142ebcc7f4e9e9dde7ae7fc93e4afcce825534defc74fc73b0361" exitCode=0 Feb 02 22:46:35 crc kubenswrapper[4789]: I0202 22:46:35.046518 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-667d7cd957-rgfp8" event={"ID":"40f1a92c-8d4d-43ec-b1a6-ad811716cb87","Type":"ContainerDied","Data":"9c5e1970b7c142ebcc7f4e9e9dde7ae7fc93e4afcce825534defc74fc73b0361"} Feb 02 22:46:35 crc kubenswrapper[4789]: I0202 
22:46:35.046553 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-667d7cd957-rgfp8" event={"ID":"40f1a92c-8d4d-43ec-b1a6-ad811716cb87","Type":"ContainerDied","Data":"580e311474fd6d470666a1ec061e2da3276e8a36938d92acb87ee19ea9f28a6d"} Feb 02 22:46:35 crc kubenswrapper[4789]: I0202 22:46:35.046573 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="580e311474fd6d470666a1ec061e2da3276e8a36938d92acb87ee19ea9f28a6d" Feb 02 22:46:35 crc kubenswrapper[4789]: I0202 22:46:35.110310 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-667d7cd957-rgfp8" Feb 02 22:46:35 crc kubenswrapper[4789]: I0202 22:46:35.221904 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjr7n\" (UniqueName: \"kubernetes.io/projected/40f1a92c-8d4d-43ec-b1a6-ad811716cb87-kube-api-access-hjr7n\") pod \"40f1a92c-8d4d-43ec-b1a6-ad811716cb87\" (UID: \"40f1a92c-8d4d-43ec-b1a6-ad811716cb87\") " Feb 02 22:46:35 crc kubenswrapper[4789]: I0202 22:46:35.222038 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40f1a92c-8d4d-43ec-b1a6-ad811716cb87-dns-svc\") pod \"40f1a92c-8d4d-43ec-b1a6-ad811716cb87\" (UID: \"40f1a92c-8d4d-43ec-b1a6-ad811716cb87\") " Feb 02 22:46:35 crc kubenswrapper[4789]: I0202 22:46:35.222102 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40f1a92c-8d4d-43ec-b1a6-ad811716cb87-ovsdbserver-sb\") pod \"40f1a92c-8d4d-43ec-b1a6-ad811716cb87\" (UID: \"40f1a92c-8d4d-43ec-b1a6-ad811716cb87\") " Feb 02 22:46:35 crc kubenswrapper[4789]: I0202 22:46:35.222145 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40f1a92c-8d4d-43ec-b1a6-ad811716cb87-ovsdbserver-nb\") pod \"40f1a92c-8d4d-43ec-b1a6-ad811716cb87\" (UID: \"40f1a92c-8d4d-43ec-b1a6-ad811716cb87\") " Feb 02 22:46:35 crc kubenswrapper[4789]: I0202 22:46:35.222270 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40f1a92c-8d4d-43ec-b1a6-ad811716cb87-config\") pod \"40f1a92c-8d4d-43ec-b1a6-ad811716cb87\" (UID: \"40f1a92c-8d4d-43ec-b1a6-ad811716cb87\") " Feb 02 22:46:35 crc kubenswrapper[4789]: I0202 22:46:35.228365 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40f1a92c-8d4d-43ec-b1a6-ad811716cb87-kube-api-access-hjr7n" (OuterVolumeSpecName: "kube-api-access-hjr7n") pod "40f1a92c-8d4d-43ec-b1a6-ad811716cb87" (UID: "40f1a92c-8d4d-43ec-b1a6-ad811716cb87"). InnerVolumeSpecName "kube-api-access-hjr7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 22:46:35 crc kubenswrapper[4789]: I0202 22:46:35.259332 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40f1a92c-8d4d-43ec-b1a6-ad811716cb87-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "40f1a92c-8d4d-43ec-b1a6-ad811716cb87" (UID: "40f1a92c-8d4d-43ec-b1a6-ad811716cb87"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 22:46:35 crc kubenswrapper[4789]: I0202 22:46:35.262081 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40f1a92c-8d4d-43ec-b1a6-ad811716cb87-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "40f1a92c-8d4d-43ec-b1a6-ad811716cb87" (UID: "40f1a92c-8d4d-43ec-b1a6-ad811716cb87"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 22:46:35 crc kubenswrapper[4789]: I0202 22:46:35.268156 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40f1a92c-8d4d-43ec-b1a6-ad811716cb87-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "40f1a92c-8d4d-43ec-b1a6-ad811716cb87" (UID: "40f1a92c-8d4d-43ec-b1a6-ad811716cb87"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 22:46:35 crc kubenswrapper[4789]: I0202 22:46:35.271146 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40f1a92c-8d4d-43ec-b1a6-ad811716cb87-config" (OuterVolumeSpecName: "config") pod "40f1a92c-8d4d-43ec-b1a6-ad811716cb87" (UID: "40f1a92c-8d4d-43ec-b1a6-ad811716cb87"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 22:46:35 crc kubenswrapper[4789]: I0202 22:46:35.323726 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40f1a92c-8d4d-43ec-b1a6-ad811716cb87-config\") on node \"crc\" DevicePath \"\"" Feb 02 22:46:35 crc kubenswrapper[4789]: I0202 22:46:35.323762 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjr7n\" (UniqueName: \"kubernetes.io/projected/40f1a92c-8d4d-43ec-b1a6-ad811716cb87-kube-api-access-hjr7n\") on node \"crc\" DevicePath \"\"" Feb 02 22:46:35 crc kubenswrapper[4789]: I0202 22:46:35.323772 4789 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40f1a92c-8d4d-43ec-b1a6-ad811716cb87-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 22:46:35 crc kubenswrapper[4789]: I0202 22:46:35.323780 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40f1a92c-8d4d-43ec-b1a6-ad811716cb87-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 22:46:35 crc kubenswrapper[4789]: I0202 22:46:35.323788 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40f1a92c-8d4d-43ec-b1a6-ad811716cb87-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 22:46:36 crc kubenswrapper[4789]: I0202 22:46:36.072980 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-667d7cd957-rgfp8" Feb 02 22:46:36 crc kubenswrapper[4789]: I0202 22:46:36.124910 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-667d7cd957-rgfp8"] Feb 02 22:46:36 crc kubenswrapper[4789]: I0202 22:46:36.132271 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-667d7cd957-rgfp8"] Feb 02 22:46:36 crc kubenswrapper[4789]: I0202 22:46:36.433101 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40f1a92c-8d4d-43ec-b1a6-ad811716cb87" path="/var/lib/kubelet/pods/40f1a92c-8d4d-43ec-b1a6-ad811716cb87/volumes" Feb 02 22:46:36 crc kubenswrapper[4789]: I0202 22:46:36.516048 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-frq2r" Feb 02 22:46:36 crc kubenswrapper[4789]: I0202 22:46:36.647178 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/52d3c3df-2d71-4415-ae65-7301f4157711-fernet-keys\") pod \"52d3c3df-2d71-4415-ae65-7301f4157711\" (UID: \"52d3c3df-2d71-4415-ae65-7301f4157711\") " Feb 02 22:46:36 crc kubenswrapper[4789]: I0202 22:46:36.647347 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52d3c3df-2d71-4415-ae65-7301f4157711-config-data\") pod \"52d3c3df-2d71-4415-ae65-7301f4157711\" (UID: \"52d3c3df-2d71-4415-ae65-7301f4157711\") " Feb 02 22:46:36 crc kubenswrapper[4789]: I0202 22:46:36.647467 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/52d3c3df-2d71-4415-ae65-7301f4157711-credential-keys\") pod \"52d3c3df-2d71-4415-ae65-7301f4157711\" (UID: \"52d3c3df-2d71-4415-ae65-7301f4157711\") " Feb 02 22:46:36 crc kubenswrapper[4789]: I0202 22:46:36.647528 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52d3c3df-2d71-4415-ae65-7301f4157711-combined-ca-bundle\") pod \"52d3c3df-2d71-4415-ae65-7301f4157711\" (UID: \"52d3c3df-2d71-4415-ae65-7301f4157711\") " Feb 02 22:46:36 crc kubenswrapper[4789]: I0202 22:46:36.647661 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gwxj\" (UniqueName: \"kubernetes.io/projected/52d3c3df-2d71-4415-ae65-7301f4157711-kube-api-access-4gwxj\") pod \"52d3c3df-2d71-4415-ae65-7301f4157711\" (UID: \"52d3c3df-2d71-4415-ae65-7301f4157711\") " Feb 02 22:46:36 crc kubenswrapper[4789]: I0202 22:46:36.647745 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52d3c3df-2d71-4415-ae65-7301f4157711-scripts\") pod \"52d3c3df-2d71-4415-ae65-7301f4157711\" (UID: \"52d3c3df-2d71-4415-ae65-7301f4157711\") " Feb 02 22:46:36 crc kubenswrapper[4789]: I0202 22:46:36.653857 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52d3c3df-2d71-4415-ae65-7301f4157711-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "52d3c3df-2d71-4415-ae65-7301f4157711" (UID: "52d3c3df-2d71-4415-ae65-7301f4157711"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 22:46:36 crc kubenswrapper[4789]: I0202 22:46:36.654731 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52d3c3df-2d71-4415-ae65-7301f4157711-kube-api-access-4gwxj" (OuterVolumeSpecName: "kube-api-access-4gwxj") pod "52d3c3df-2d71-4415-ae65-7301f4157711" (UID: "52d3c3df-2d71-4415-ae65-7301f4157711"). InnerVolumeSpecName "kube-api-access-4gwxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 22:46:36 crc kubenswrapper[4789]: I0202 22:46:36.660079 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52d3c3df-2d71-4415-ae65-7301f4157711-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "52d3c3df-2d71-4415-ae65-7301f4157711" (UID: "52d3c3df-2d71-4415-ae65-7301f4157711"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 22:46:36 crc kubenswrapper[4789]: I0202 22:46:36.660738 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52d3c3df-2d71-4415-ae65-7301f4157711-scripts" (OuterVolumeSpecName: "scripts") pod "52d3c3df-2d71-4415-ae65-7301f4157711" (UID: "52d3c3df-2d71-4415-ae65-7301f4157711"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 22:46:36 crc kubenswrapper[4789]: I0202 22:46:36.681778 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52d3c3df-2d71-4415-ae65-7301f4157711-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52d3c3df-2d71-4415-ae65-7301f4157711" (UID: "52d3c3df-2d71-4415-ae65-7301f4157711"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 22:46:36 crc kubenswrapper[4789]: I0202 22:46:36.687903 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52d3c3df-2d71-4415-ae65-7301f4157711-config-data" (OuterVolumeSpecName: "config-data") pod "52d3c3df-2d71-4415-ae65-7301f4157711" (UID: "52d3c3df-2d71-4415-ae65-7301f4157711"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 22:46:36 crc kubenswrapper[4789]: I0202 22:46:36.750104 4789 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/52d3c3df-2d71-4415-ae65-7301f4157711-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 22:46:36 crc kubenswrapper[4789]: I0202 22:46:36.750147 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52d3c3df-2d71-4415-ae65-7301f4157711-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 22:46:36 crc kubenswrapper[4789]: I0202 22:46:36.750159 4789 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/52d3c3df-2d71-4415-ae65-7301f4157711-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 02 22:46:36 crc kubenswrapper[4789]: I0202 22:46:36.750178 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52d3c3df-2d71-4415-ae65-7301f4157711-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 22:46:36 crc kubenswrapper[4789]: I0202 22:46:36.750197 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gwxj\" (UniqueName: \"kubernetes.io/projected/52d3c3df-2d71-4415-ae65-7301f4157711-kube-api-access-4gwxj\") on node \"crc\" DevicePath \"\"" Feb 02 22:46:36 crc kubenswrapper[4789]: I0202 22:46:36.750213 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52d3c3df-2d71-4415-ae65-7301f4157711-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 22:46:37 crc kubenswrapper[4789]: I0202 22:46:37.092906 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-frq2r" event={"ID":"52d3c3df-2d71-4415-ae65-7301f4157711","Type":"ContainerDied","Data":"fcc6f8d94e7f0d486169e42df2e5cd2e8f1327f7b27f79ab597270e17a2e8223"} Feb 02 22:46:37 crc kubenswrapper[4789]: I0202 22:46:37.092986 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcc6f8d94e7f0d486169e42df2e5cd2e8f1327f7b27f79ab597270e17a2e8223" Feb 02 22:46:37 crc kubenswrapper[4789]: I0202 22:46:37.093138 4789 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-frq2r" Feb 02 22:46:37 crc kubenswrapper[4789]: I0202 22:46:37.170860 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-694d6fd58-9xbpn"] Feb 02 22:46:37 crc kubenswrapper[4789]: E0202 22:46:37.171229 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52d3c3df-2d71-4415-ae65-7301f4157711" containerName="keystone-bootstrap" Feb 02 22:46:37 crc kubenswrapper[4789]: I0202 22:46:37.171252 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="52d3c3df-2d71-4415-ae65-7301f4157711" containerName="keystone-bootstrap" Feb 02 22:46:37 crc kubenswrapper[4789]: E0202 22:46:37.171271 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40f1a92c-8d4d-43ec-b1a6-ad811716cb87" containerName="init" Feb 02 22:46:37 crc kubenswrapper[4789]: I0202 22:46:37.171279 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="40f1a92c-8d4d-43ec-b1a6-ad811716cb87" containerName="init" Feb 02 22:46:37 crc kubenswrapper[4789]: E0202 22:46:37.171289 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40f1a92c-8d4d-43ec-b1a6-ad811716cb87" containerName="dnsmasq-dns" Feb 02 22:46:37 crc kubenswrapper[4789]: I0202 22:46:37.171298 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="40f1a92c-8d4d-43ec-b1a6-ad811716cb87" containerName="dnsmasq-dns" Feb 02 22:46:37 crc kubenswrapper[4789]: I0202 22:46:37.171493 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="40f1a92c-8d4d-43ec-b1a6-ad811716cb87" containerName="dnsmasq-dns" Feb 02 22:46:37 crc kubenswrapper[4789]: I0202 22:46:37.171511 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="52d3c3df-2d71-4415-ae65-7301f4157711" containerName="keystone-bootstrap" Feb 02 22:46:37 crc kubenswrapper[4789]: I0202 22:46:37.172089 4789 util.go:30] "No sandbox for pod can be found. 
Feb 02 22:46:37 crc kubenswrapper[4789]: I0202 22:46:37.177145 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 22:46:37 crc kubenswrapper[4789]: I0202 22:46:37.177718 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-88j9r" Feb 02 22:46:37 crc kubenswrapper[4789]: I0202 22:46:37.183108 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 22:46:37 crc kubenswrapper[4789]: I0202 22:46:37.186266 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 22:46:37 crc kubenswrapper[4789]: I0202 22:46:37.195664 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-694d6fd58-9xbpn"] Feb 02 22:46:37 crc kubenswrapper[4789]: I0202 22:46:37.361486 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/714e1927-ff76-40b6-8134-dc6d92c4c226-combined-ca-bundle\") pod \"keystone-694d6fd58-9xbpn\" (UID: \"714e1927-ff76-40b6-8134-dc6d92c4c226\") " pod="openstack/keystone-694d6fd58-9xbpn" Feb 02 22:46:37 crc kubenswrapper[4789]: I0202 22:46:37.361551 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7xjf\" (UniqueName: \"kubernetes.io/projected/714e1927-ff76-40b6-8134-dc6d92c4c226-kube-api-access-x7xjf\") pod \"keystone-694d6fd58-9xbpn\" (UID: \"714e1927-ff76-40b6-8134-dc6d92c4c226\") " pod="openstack/keystone-694d6fd58-9xbpn" Feb 02 22:46:37 crc kubenswrapper[4789]: I0202 22:46:37.361592 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/714e1927-ff76-40b6-8134-dc6d92c4c226-credential-keys\") pod \"keystone-694d6fd58-9xbpn\" (UID: \"714e1927-ff76-40b6-8134-dc6d92c4c226\") " pod="openstack/keystone-694d6fd58-9xbpn" Feb 02 22:46:37 crc kubenswrapper[4789]: I0202 22:46:37.361626 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/714e1927-ff76-40b6-8134-dc6d92c4c226-scripts\") pod \"keystone-694d6fd58-9xbpn\" (UID: \"714e1927-ff76-40b6-8134-dc6d92c4c226\") " pod="openstack/keystone-694d6fd58-9xbpn" Feb 02 22:46:37 crc kubenswrapper[4789]: I0202 22:46:37.361648 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/714e1927-ff76-40b6-8134-dc6d92c4c226-config-data\") pod \"keystone-694d6fd58-9xbpn\" (UID: \"714e1927-ff76-40b6-8134-dc6d92c4c226\") " pod="openstack/keystone-694d6fd58-9xbpn" Feb 02 22:46:37 crc kubenswrapper[4789]: I0202 22:46:37.361670 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/714e1927-ff76-40b6-8134-dc6d92c4c226-fernet-keys\") pod \"keystone-694d6fd58-9xbpn\" (UID: \"714e1927-ff76-40b6-8134-dc6d92c4c226\") " pod="openstack/keystone-694d6fd58-9xbpn" Feb 02 22:46:37 crc kubenswrapper[4789]: I0202 22:46:37.463232 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7xjf\" (UniqueName: \"kubernetes.io/projected/714e1927-ff76-40b6-8134-dc6d92c4c226-kube-api-access-x7xjf\") pod
\"keystone-694d6fd58-9xbpn\" (UID: \"714e1927-ff76-40b6-8134-dc6d92c4c226\") " pod="openstack/keystone-694d6fd58-9xbpn" Feb 02 22:46:37 crc kubenswrapper[4789]: I0202 22:46:37.463315 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/714e1927-ff76-40b6-8134-dc6d92c4c226-credential-keys\") pod \"keystone-694d6fd58-9xbpn\" (UID: \"714e1927-ff76-40b6-8134-dc6d92c4c226\") " pod="openstack/keystone-694d6fd58-9xbpn" Feb 02 22:46:37 crc kubenswrapper[4789]: I0202 22:46:37.463389 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/714e1927-ff76-40b6-8134-dc6d92c4c226-scripts\") pod \"keystone-694d6fd58-9xbpn\" (UID: \"714e1927-ff76-40b6-8134-dc6d92c4c226\") " pod="openstack/keystone-694d6fd58-9xbpn" Feb 02 22:46:37 crc kubenswrapper[4789]: I0202 22:46:37.463434 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/714e1927-ff76-40b6-8134-dc6d92c4c226-config-data\") pod \"keystone-694d6fd58-9xbpn\" (UID: \"714e1927-ff76-40b6-8134-dc6d92c4c226\") " pod="openstack/keystone-694d6fd58-9xbpn" Feb 02 22:46:37 crc kubenswrapper[4789]: I0202 22:46:37.463479 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/714e1927-ff76-40b6-8134-dc6d92c4c226-fernet-keys\") pod \"keystone-694d6fd58-9xbpn\" (UID: \"714e1927-ff76-40b6-8134-dc6d92c4c226\") " pod="openstack/keystone-694d6fd58-9xbpn" Feb 02 22:46:37 crc kubenswrapper[4789]: I0202 22:46:37.463554 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/714e1927-ff76-40b6-8134-dc6d92c4c226-combined-ca-bundle\") pod \"keystone-694d6fd58-9xbpn\" (UID: \"714e1927-ff76-40b6-8134-dc6d92c4c226\") " pod="openstack/keystone-694d6fd58-9xbpn" Feb 02 22:46:37 crc kubenswrapper[4789]: I0202 22:46:37.470355 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/714e1927-ff76-40b6-8134-dc6d92c4c226-combined-ca-bundle\") pod \"keystone-694d6fd58-9xbpn\" (UID: \"714e1927-ff76-40b6-8134-dc6d92c4c226\") " pod="openstack/keystone-694d6fd58-9xbpn" Feb 02 22:46:37 crc kubenswrapper[4789]: I0202 22:46:37.470641 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/714e1927-ff76-40b6-8134-dc6d92c4c226-credential-keys\") pod \"keystone-694d6fd58-9xbpn\" (UID: \"714e1927-ff76-40b6-8134-dc6d92c4c226\") " pod="openstack/keystone-694d6fd58-9xbpn" Feb 02 22:46:37 crc kubenswrapper[4789]: I0202 22:46:37.471641 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/714e1927-ff76-40b6-8134-dc6d92c4c226-config-data\") pod \"keystone-694d6fd58-9xbpn\" (UID: \"714e1927-ff76-40b6-8134-dc6d92c4c226\") " pod="openstack/keystone-694d6fd58-9xbpn" Feb 02 22:46:37 crc kubenswrapper[4789]: I0202 22:46:37.471886 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/714e1927-ff76-40b6-8134-dc6d92c4c226-scripts\") pod \"keystone-694d6fd58-9xbpn\" (UID: \"714e1927-ff76-40b6-8134-dc6d92c4c226\") " pod="openstack/keystone-694d6fd58-9xbpn" Feb 02 22:46:37 crc kubenswrapper[4789]: I0202 22:46:37.472497 4789 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/714e1927-ff76-40b6-8134-dc6d92c4c226-fernet-keys\") pod \"keystone-694d6fd58-9xbpn\" (UID: \"714e1927-ff76-40b6-8134-dc6d92c4c226\") " pod="openstack/keystone-694d6fd58-9xbpn" Feb 02 22:46:37 crc kubenswrapper[4789]: I0202 22:46:37.499839 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7xjf\" (UniqueName: \"kubernetes.io/projected/714e1927-ff76-40b6-8134-dc6d92c4c226-kube-api-access-x7xjf\") pod \"keystone-694d6fd58-9xbpn\" (UID: \"714e1927-ff76-40b6-8134-dc6d92c4c226\") " pod="openstack/keystone-694d6fd58-9xbpn" Feb 02 22:46:37 crc kubenswrapper[4789]: I0202 22:46:37.540436 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-694d6fd58-9xbpn" Feb 02 22:46:38 crc kubenswrapper[4789]: I0202 22:46:38.059023 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-694d6fd58-9xbpn"] Feb 02 22:46:38 crc kubenswrapper[4789]: W0202 22:46:38.069782 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod714e1927_ff76_40b6_8134_dc6d92c4c226.slice/crio-a198e319b0a070ee816eb49c1c1da198c6acb8b3f23fc61efba381551596e4a7 WatchSource:0}: Error finding container a198e319b0a070ee816eb49c1c1da198c6acb8b3f23fc61efba381551596e4a7: Status 404 returned error can't find the container with id a198e319b0a070ee816eb49c1c1da198c6acb8b3f23fc61efba381551596e4a7 Feb 02 22:46:38 crc kubenswrapper[4789]: I0202 22:46:38.112378 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-694d6fd58-9xbpn" event={"ID":"714e1927-ff76-40b6-8134-dc6d92c4c226","Type":"ContainerStarted","Data":"a198e319b0a070ee816eb49c1c1da198c6acb8b3f23fc61efba381551596e4a7"} Feb 02 22:46:39 crc kubenswrapper[4789]: I0202 22:46:39.126822 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-694d6fd58-9xbpn" event={"ID":"714e1927-ff76-40b6-8134-dc6d92c4c226","Type":"ContainerStarted","Data":"fd5b12a77a798dbd40945388685332ceb10f336bbcf54ecb95cf5265cc4b3d3e"} Feb 02 22:46:39 crc kubenswrapper[4789]: I0202 22:46:39.127245 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-694d6fd58-9xbpn" Feb 02 22:46:39 crc kubenswrapper[4789]: I0202 22:46:39.159446 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-694d6fd58-9xbpn" podStartSLOduration=2.159418686 podStartE2EDuration="2.159418686s" podCreationTimestamp="2026-02-02 22:46:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 22:46:39.154094546 +0000 UTC m=+5219.449119655" watchObservedRunningTime="2026-02-02 22:46:39.159418686 +0000 UTC m=+5219.454443735" Feb 02 22:46:52 crc kubenswrapper[4789]: I0202 22:46:52.842194 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 22:46:52 crc kubenswrapper[4789]: I0202 22:46:52.843081 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 22:47:08 crc kubenswrapper[4789]: I0202 22:47:08.949750 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-694d6fd58-9xbpn" Feb 02 22:47:12 crc kubenswrapper[4789]: I0202 22:47:12.775431 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 02 22:47:12 crc kubenswrapper[4789]: I0202 22:47:12.818500 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 02 22:47:12 crc kubenswrapper[4789]: I0202 22:47:12.818673 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 02 22:47:12 crc kubenswrapper[4789]: I0202 22:47:12.822066 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 02 22:47:12 crc kubenswrapper[4789]: I0202 22:47:12.822990 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 02 22:47:12 crc kubenswrapper[4789]: I0202 22:47:12.823723 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-4gkml" Feb 02 22:47:12 crc kubenswrapper[4789]: I0202 22:47:12.853883 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 02 22:47:12 crc kubenswrapper[4789]: E0202 22:47:12.862405 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-xv28w openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/openstackclient" podUID="6c7a482a-bb23-44e3-a27e-c4bb0b59cee0" Feb 02 22:47:12 crc kubenswrapper[4789]: I0202 22:47:12.864677 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 02 22:47:12 crc kubenswrapper[4789]: I0202 22:47:12.880884 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6c7a482a-bb23-44e3-a27e-c4bb0b59cee0-openstack-config\") pod \"openstackclient\" (UID: \"6c7a482a-bb23-44e3-a27e-c4bb0b59cee0\") " pod="openstack/openstackclient" Feb 02 22:47:12 crc kubenswrapper[4789]: I0202 22:47:12.881021 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv28w\" (UniqueName: \"kubernetes.io/projected/6c7a482a-bb23-44e3-a27e-c4bb0b59cee0-kube-api-access-xv28w\") pod \"openstackclient\" (UID: \"6c7a482a-bb23-44e3-a27e-c4bb0b59cee0\") " pod="openstack/openstackclient" Feb 02 22:47:12 crc kubenswrapper[4789]: I0202 22:47:12.881120 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 02 22:47:12 crc kubenswrapper[4789]: I0202 22:47:12.881281 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6c7a482a-bb23-44e3-a27e-c4bb0b59cee0-openstack-config-secret\") pod \"openstackclient\" (UID: \"6c7a482a-bb23-44e3-a27e-c4bb0b59cee0\") " pod="openstack/openstackclient" Feb 02 22:47:12 crc kubenswrapper[4789]: I0202 22:47:12.882337 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 02 22:47:12 crc kubenswrapper[4789]: I0202 22:47:12.899955 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 02 22:47:12 crc kubenswrapper[4789]: I0202 22:47:12.983243 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a634b261-487f-4a31-ae53-10279607cb1e-openstack-config-secret\") pod \"openstackclient\" (UID: \"a634b261-487f-4a31-ae53-10279607cb1e\") " pod="openstack/openstackclient" Feb 02 22:47:12 crc kubenswrapper[4789]: I0202 22:47:12.983317 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mvn8\" (UniqueName: \"kubernetes.io/projected/a634b261-487f-4a31-ae53-10279607cb1e-kube-api-access-9mvn8\") pod \"openstackclient\" (UID: \"a634b261-487f-4a31-ae53-10279607cb1e\") " pod="openstack/openstackclient" Feb 02 22:47:12 crc kubenswrapper[4789]: I0202 22:47:12.983348 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6c7a482a-bb23-44e3-a27e-c4bb0b59cee0-openstack-config-secret\") pod \"openstackclient\" (UID: \"6c7a482a-bb23-44e3-a27e-c4bb0b59cee0\") " pod="openstack/openstackclient" Feb 02 22:47:12 crc kubenswrapper[4789]: I0202 22:47:12.983369 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a634b261-487f-4a31-ae53-10279607cb1e-openstack-config\") pod \"openstackclient\" (UID: \"a634b261-487f-4a31-ae53-10279607cb1e\") " pod="openstack/openstackclient" Feb 02 22:47:12 crc kubenswrapper[4789]: I0202 22:47:12.983713 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6c7a482a-bb23-44e3-a27e-c4bb0b59cee0-openstack-config\") pod \"openstackclient\" (UID: \"6c7a482a-bb23-44e3-a27e-c4bb0b59cee0\") " pod="openstack/openstackclient" Feb 02 22:47:12 crc kubenswrapper[4789]: I0202 22:47:12.983870 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv28w\" (UniqueName: \"kubernetes.io/projected/6c7a482a-bb23-44e3-a27e-c4bb0b59cee0-kube-api-access-xv28w\") pod \"openstackclient\" (UID: \"6c7a482a-bb23-44e3-a27e-c4bb0b59cee0\") " pod="openstack/openstackclient" Feb 02 22:47:12 crc kubenswrapper[4789]: I0202 22:47:12.984940 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6c7a482a-bb23-44e3-a27e-c4bb0b59cee0-openstack-config\") pod \"openstackclient\" (UID: \"6c7a482a-bb23-44e3-a27e-c4bb0b59cee0\") " pod="openstack/openstackclient" Feb 02 22:47:12 crc kubenswrapper[4789]: E0202 22:47:12.986158 4789 projected.go:194] Error preparing data for projected volume kube-api-access-xv28w for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (6c7a482a-bb23-44e3-a27e-c4bb0b59cee0) does not match the UID in record. The object might have been deleted and then recreated
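The projected.go failure above is the bound service-account token machinery at work: the kubelet mints each kube-api-access-* token through the TokenRequest API with a BoundObjectReference naming the pod and its UID, so a request carrying the UID of the deleted openstackclient pod (6c7a482a-...) is refused once the API server only knows the recreated pod (a634b261-...). A client-go sketch of the same request shape (assumes in-cluster credentials and RBAC to create tokens; illustrative, not the kubelet's actual code path):

    package main

    import (
        "context"
        "fmt"

        authenticationv1 "k8s.io/api/authentication/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig() // assumes we run inside the cluster
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        expiry := int64(3600)
        req := &authenticationv1.TokenRequest{
            Spec: authenticationv1.TokenRequestSpec{
                ExpirationSeconds: &expiry,
                // The token is bound to one specific pod *instance*: if the pod
                // is deleted and recreated under the same name, the stored UID
                // changes and a request made with the stale UID is rejected,
                // exactly as in the log above.
                BoundObjectRef: &authenticationv1.BoundObjectReference{
                    Kind: "Pod",
                    Name: "openstackclient",
                    UID:  "6c7a482a-bb23-44e3-a27e-c4bb0b59cee0", // stale UID -> forbidden
                },
            },
        }
        tok, err := cs.CoreV1().ServiceAccounts("openstack").
            CreateToken(context.TODO(), "openstackclient-openstackclient", req, metav1.CreateOptions{})
        if err != nil {
            fmt.Println("token request failed:", err) // e.g. "does not match the UID in record"
            return
        }
        fmt.Println("token expires:", tok.Status.ExpirationTimestamp)
    }

The kubelet handles this by backing off and retrying (durationBeforeRetry 500ms, then 1s, below); the retries never succeed for the old UID, and the old pod's volumes are simply torn down once its deletion finishes.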
Feb 02 22:47:12 crc kubenswrapper[4789]: E0202 22:47:12.986250 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6c7a482a-bb23-44e3-a27e-c4bb0b59cee0-kube-api-access-xv28w podName:6c7a482a-bb23-44e3-a27e-c4bb0b59cee0 nodeName:}" failed. No retries permitted until 2026-02-02 22:47:13.486222903 +0000 UTC m=+5253.781247952 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xv28w" (UniqueName: "kubernetes.io/projected/6c7a482a-bb23-44e3-a27e-c4bb0b59cee0-kube-api-access-xv28w") pod "openstackclient" (UID: "6c7a482a-bb23-44e3-a27e-c4bb0b59cee0") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (6c7a482a-bb23-44e3-a27e-c4bb0b59cee0) does not match the UID in record. The object might have been deleted and then recreated Feb 02 22:47:12 crc kubenswrapper[4789]: I0202 22:47:12.989515 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6c7a482a-bb23-44e3-a27e-c4bb0b59cee0-openstack-config-secret\") pod \"openstackclient\" (UID: \"6c7a482a-bb23-44e3-a27e-c4bb0b59cee0\") " pod="openstack/openstackclient" Feb 02 22:47:13 crc kubenswrapper[4789]: I0202 22:47:13.085136 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a634b261-487f-4a31-ae53-10279607cb1e-openstack-config-secret\") pod \"openstackclient\" (UID: \"a634b261-487f-4a31-ae53-10279607cb1e\") " pod="openstack/openstackclient" Feb 02 22:47:13 crc kubenswrapper[4789]: I0202 22:47:13.085246 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mvn8\" (UniqueName: \"kubernetes.io/projected/a634b261-487f-4a31-ae53-10279607cb1e-kube-api-access-9mvn8\") pod \"openstackclient\" (UID: \"a634b261-487f-4a31-ae53-10279607cb1e\") " pod="openstack/openstackclient" Feb 02 22:47:13 crc kubenswrapper[4789]: I0202 22:47:13.085288 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a634b261-487f-4a31-ae53-10279607cb1e-openstack-config\") pod \"openstackclient\" (UID: \"a634b261-487f-4a31-ae53-10279607cb1e\") " pod="openstack/openstackclient" Feb 02 22:47:13 crc kubenswrapper[4789]: I0202 22:47:13.086316 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a634b261-487f-4a31-ae53-10279607cb1e-openstack-config\") pod \"openstackclient\" (UID: \"a634b261-487f-4a31-ae53-10279607cb1e\") " pod="openstack/openstackclient" Feb 02 22:47:13 crc kubenswrapper[4789]: I0202 22:47:13.089857 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a634b261-487f-4a31-ae53-10279607cb1e-openstack-config-secret\") pod \"openstackclient\" (UID: \"a634b261-487f-4a31-ae53-10279607cb1e\") " pod="openstack/openstackclient" Feb 02 22:47:13 crc kubenswrapper[4789]: I0202 22:47:13.105708 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mvn8\" (UniqueName: \"kubernetes.io/projected/a634b261-487f-4a31-ae53-10279607cb1e-kube-api-access-9mvn8\") pod \"openstackclient\" (UID: \"a634b261-487f-4a31-ae53-10279607cb1e\") " pod="openstack/openstackclient" Feb 02 22:47:13 crc
kubenswrapper[4789]: I0202 22:47:13.206627 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 02 22:47:13 crc kubenswrapper[4789]: I0202 22:47:13.465432 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 02 22:47:13 crc kubenswrapper[4789]: I0202 22:47:13.469384 4789 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="6c7a482a-bb23-44e3-a27e-c4bb0b59cee0" podUID="a634b261-487f-4a31-ae53-10279607cb1e" Feb 02 22:47:13 crc kubenswrapper[4789]: I0202 22:47:13.477013 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 02 22:47:13 crc kubenswrapper[4789]: I0202 22:47:13.492154 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv28w\" (UniqueName: \"kubernetes.io/projected/6c7a482a-bb23-44e3-a27e-c4bb0b59cee0-kube-api-access-xv28w\") pod \"openstackclient\" (UID: \"6c7a482a-bb23-44e3-a27e-c4bb0b59cee0\") " pod="openstack/openstackclient" Feb 02 22:47:13 crc kubenswrapper[4789]: E0202 22:47:13.493915 4789 projected.go:194] Error preparing data for projected volume kube-api-access-xv28w for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (6c7a482a-bb23-44e3-a27e-c4bb0b59cee0) does not match the UID in record. The object might have been deleted and then recreated Feb 02 22:47:13 crc kubenswrapper[4789]: E0202 22:47:13.493974 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6c7a482a-bb23-44e3-a27e-c4bb0b59cee0-kube-api-access-xv28w podName:6c7a482a-bb23-44e3-a27e-c4bb0b59cee0 nodeName:}" failed. No retries permitted until 2026-02-02 22:47:14.493957421 +0000 UTC m=+5254.788982440 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-xv28w" (UniqueName: "kubernetes.io/projected/6c7a482a-bb23-44e3-a27e-c4bb0b59cee0-kube-api-access-xv28w") pod "openstackclient" (UID: "6c7a482a-bb23-44e3-a27e-c4bb0b59cee0") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (6c7a482a-bb23-44e3-a27e-c4bb0b59cee0) does not match the UID in record. 
The object might have been deleted and then recreated Feb 02 22:47:13 crc kubenswrapper[4789]: I0202 22:47:13.593571 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6c7a482a-bb23-44e3-a27e-c4bb0b59cee0-openstack-config\") pod \"6c7a482a-bb23-44e3-a27e-c4bb0b59cee0\" (UID: \"6c7a482a-bb23-44e3-a27e-c4bb0b59cee0\") " Feb 02 22:47:13 crc kubenswrapper[4789]: I0202 22:47:13.593742 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6c7a482a-bb23-44e3-a27e-c4bb0b59cee0-openstack-config-secret\") pod \"6c7a482a-bb23-44e3-a27e-c4bb0b59cee0\" (UID: \"6c7a482a-bb23-44e3-a27e-c4bb0b59cee0\") " Feb 02 22:47:13 crc kubenswrapper[4789]: I0202 22:47:13.594223 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xv28w\" (UniqueName: \"kubernetes.io/projected/6c7a482a-bb23-44e3-a27e-c4bb0b59cee0-kube-api-access-xv28w\") on node \"crc\" DevicePath \"\"" Feb 02 22:47:13 crc kubenswrapper[4789]: I0202 22:47:13.594639 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c7a482a-bb23-44e3-a27e-c4bb0b59cee0-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "6c7a482a-bb23-44e3-a27e-c4bb0b59cee0" (UID: "6c7a482a-bb23-44e3-a27e-c4bb0b59cee0"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 22:47:13 crc kubenswrapper[4789]: I0202 22:47:13.601303 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c7a482a-bb23-44e3-a27e-c4bb0b59cee0-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "6c7a482a-bb23-44e3-a27e-c4bb0b59cee0" (UID: "6c7a482a-bb23-44e3-a27e-c4bb0b59cee0"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 22:47:13 crc kubenswrapper[4789]: I0202 22:47:13.695998 4789 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6c7a482a-bb23-44e3-a27e-c4bb0b59cee0-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 02 22:47:13 crc kubenswrapper[4789]: I0202 22:47:13.696066 4789 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6c7a482a-bb23-44e3-a27e-c4bb0b59cee0-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 02 22:47:13 crc kubenswrapper[4789]: I0202 22:47:13.701421 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 02 22:47:14 crc kubenswrapper[4789]: I0202 22:47:14.440972 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c7a482a-bb23-44e3-a27e-c4bb0b59cee0" path="/var/lib/kubelet/pods/6c7a482a-bb23-44e3-a27e-c4bb0b59cee0/volumes" Feb 02 22:47:14 crc kubenswrapper[4789]: I0202 22:47:14.478748 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 02 22:47:14 crc kubenswrapper[4789]: I0202 22:47:14.478761 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a634b261-487f-4a31-ae53-10279607cb1e","Type":"ContainerStarted","Data":"c3c8dbb62678a13e48602405e927758a18e434bce0ca7f19b84a34630c440f13"} Feb 02 22:47:14 crc kubenswrapper[4789]: I0202 22:47:14.478940 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a634b261-487f-4a31-ae53-10279607cb1e","Type":"ContainerStarted","Data":"acf3b9c4e679b147b95ba2f6bb5682a70377ef31486d303079b7480f670b6650"} Feb 02 22:47:14 crc kubenswrapper[4789]: I0202 22:47:14.509174 4789 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="6c7a482a-bb23-44e3-a27e-c4bb0b59cee0" podUID="a634b261-487f-4a31-ae53-10279607cb1e" Feb 02 22:47:14 crc kubenswrapper[4789]: I0202 22:47:14.510061 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.510038643 podStartE2EDuration="2.510038643s" podCreationTimestamp="2026-02-02 22:47:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 22:47:14.504548299 +0000 UTC m=+5254.799573388" watchObservedRunningTime="2026-02-02 22:47:14.510038643 +0000 UTC m=+5254.805063702" Feb 02 22:47:20 crc kubenswrapper[4789]: I0202 22:47:20.213187 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9kpmh"] Feb 02 22:47:20 crc kubenswrapper[4789]: I0202 22:47:20.215845 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9kpmh" Feb 02 22:47:20 crc kubenswrapper[4789]: I0202 22:47:20.234425 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9kpmh"] Feb 02 22:47:20 crc kubenswrapper[4789]: I0202 22:47:20.413405 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8kx7\" (UniqueName: \"kubernetes.io/projected/922b07cb-53db-484a-a559-dee560091852-kube-api-access-c8kx7\") pod \"redhat-operators-9kpmh\" (UID: \"922b07cb-53db-484a-a559-dee560091852\") " pod="openshift-marketplace/redhat-operators-9kpmh" Feb 02 22:47:20 crc kubenswrapper[4789]: I0202 22:47:20.413465 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/922b07cb-53db-484a-a559-dee560091852-utilities\") pod \"redhat-operators-9kpmh\" (UID: \"922b07cb-53db-484a-a559-dee560091852\") " pod="openshift-marketplace/redhat-operators-9kpmh" Feb 02 22:47:20 crc kubenswrapper[4789]: I0202 22:47:20.413602 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/922b07cb-53db-484a-a559-dee560091852-catalog-content\") pod \"redhat-operators-9kpmh\" (UID: \"922b07cb-53db-484a-a559-dee560091852\") " pod="openshift-marketplace/redhat-operators-9kpmh" Feb 02 22:47:20 crc kubenswrapper[4789]: I0202 22:47:20.515327 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/922b07cb-53db-484a-a559-dee560091852-catalog-content\") pod \"redhat-operators-9kpmh\" (UID: 
\"922b07cb-53db-484a-a559-dee560091852\") " pod="openshift-marketplace/redhat-operators-9kpmh" Feb 02 22:47:20 crc kubenswrapper[4789]: I0202 22:47:20.515423 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8kx7\" (UniqueName: \"kubernetes.io/projected/922b07cb-53db-484a-a559-dee560091852-kube-api-access-c8kx7\") pod \"redhat-operators-9kpmh\" (UID: \"922b07cb-53db-484a-a559-dee560091852\") " pod="openshift-marketplace/redhat-operators-9kpmh" Feb 02 22:47:20 crc kubenswrapper[4789]: I0202 22:47:20.515443 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/922b07cb-53db-484a-a559-dee560091852-utilities\") pod \"redhat-operators-9kpmh\" (UID: \"922b07cb-53db-484a-a559-dee560091852\") " pod="openshift-marketplace/redhat-operators-9kpmh" Feb 02 22:47:20 crc kubenswrapper[4789]: I0202 22:47:20.515903 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/922b07cb-53db-484a-a559-dee560091852-utilities\") pod \"redhat-operators-9kpmh\" (UID: \"922b07cb-53db-484a-a559-dee560091852\") " pod="openshift-marketplace/redhat-operators-9kpmh" Feb 02 22:47:20 crc kubenswrapper[4789]: I0202 22:47:20.516078 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/922b07cb-53db-484a-a559-dee560091852-catalog-content\") pod \"redhat-operators-9kpmh\" (UID: \"922b07cb-53db-484a-a559-dee560091852\") " pod="openshift-marketplace/redhat-operators-9kpmh" Feb 02 22:47:20 crc kubenswrapper[4789]: I0202 22:47:20.539265 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8kx7\" (UniqueName: \"kubernetes.io/projected/922b07cb-53db-484a-a559-dee560091852-kube-api-access-c8kx7\") pod \"redhat-operators-9kpmh\" (UID: \"922b07cb-53db-484a-a559-dee560091852\") " pod="openshift-marketplace/redhat-operators-9kpmh" Feb 02 22:47:20 crc kubenswrapper[4789]: I0202 22:47:20.549554 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9kpmh" Feb 02 22:47:20 crc kubenswrapper[4789]: I0202 22:47:20.815125 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9kpmh"] Feb 02 22:47:21 crc kubenswrapper[4789]: I0202 22:47:21.557051 4789 generic.go:334] "Generic (PLEG): container finished" podID="922b07cb-53db-484a-a559-dee560091852" containerID="2592ccf9d6c18e74dc08777ba1081b5601d89438eb1326e0010a28ea8ee80070" exitCode=0 Feb 02 22:47:21 crc kubenswrapper[4789]: I0202 22:47:21.557091 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kpmh" event={"ID":"922b07cb-53db-484a-a559-dee560091852","Type":"ContainerDied","Data":"2592ccf9d6c18e74dc08777ba1081b5601d89438eb1326e0010a28ea8ee80070"} Feb 02 22:47:21 crc kubenswrapper[4789]: I0202 22:47:21.557115 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kpmh" event={"ID":"922b07cb-53db-484a-a559-dee560091852","Type":"ContainerStarted","Data":"8a6271a04cb5b0d5210842c9a0ed8d6348f9d16a5227087caed0a22613453c6c"} Feb 02 22:47:21 crc kubenswrapper[4789]: I0202 22:47:21.559053 4789 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 22:47:22 crc kubenswrapper[4789]: I0202 22:47:22.841940 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 22:47:22 crc kubenswrapper[4789]: I0202 22:47:22.842264 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 22:47:23 crc kubenswrapper[4789]: I0202 22:47:23.585063 4789 generic.go:334] "Generic (PLEG): container finished" podID="922b07cb-53db-484a-a559-dee560091852" containerID="5a7c9019a3d6cd9ea311d693ec3fac3cf165ec972329c86c350eb7af243b400e" exitCode=0 Feb 02 22:47:23 crc kubenswrapper[4789]: I0202 22:47:23.585131 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kpmh" event={"ID":"922b07cb-53db-484a-a559-dee560091852","Type":"ContainerDied","Data":"5a7c9019a3d6cd9ea311d693ec3fac3cf165ec972329c86c350eb7af243b400e"} Feb 02 22:47:24 crc kubenswrapper[4789]: I0202 22:47:24.599368 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kpmh" event={"ID":"922b07cb-53db-484a-a559-dee560091852","Type":"ContainerStarted","Data":"39d690c5932cecccfbe6b8cb6fabbe298544050ad536bb689c0232f0f61fb6c4"} Feb 02 22:47:24 crc kubenswrapper[4789]: I0202 22:47:24.667928 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9kpmh" podStartSLOduration=2.224496716 podStartE2EDuration="4.667891187s" podCreationTimestamp="2026-02-02 22:47:20 +0000 UTC" firstStartedPulling="2026-02-02 22:47:21.55879138 +0000 UTC m=+5261.853816399" lastFinishedPulling="2026-02-02 22:47:24.002185811 +0000 UTC m=+5264.297210870" observedRunningTime="2026-02-02 22:47:24.640886568 +0000 UTC m=+5264.935911607" watchObservedRunningTime="2026-02-02 22:47:24.667891187 +0000 UTC m=+5264.962916246"
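The pod_startup_latency_tracker entry above reports podStartSLOduration=2.224496716s against podStartE2EDuration=4.667891187s; the SLO figure appears to be the end-to-end startup time minus the image-pull window (lastFinishedPulling minus firstStartedPulling, about 2.443s here). A quick Go check of that arithmetic using the timestamps from the log (the subtraction rule is inferred from these fields rather than quoted from kubelet source; it reproduces the logged value to within tens of nanoseconds):

    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        // Layout matching Go's default time.Time formatting used in the log.
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2026-02-02 22:47:20 +0000 UTC")
        firstPull := mustParse("2026-02-02 22:47:21.55879138 +0000 UTC")
        lastPull := mustParse("2026-02-02 22:47:24.002185811 +0000 UTC")
        observedRunning := mustParse("2026-02-02 22:47:24.667891187 +0000 UTC")

        e2e := observedRunning.Sub(created)  // podStartE2EDuration: 4.667891187s
        slo := e2e - lastPull.Sub(firstPull) // subtract the image-pull window
        fmt.Println(e2e, slo)                // 4.667891187s 2.224496756s (logged: 2.224496716)
    }

For the keystone and openstackclient pods earlier, both pull timestamps are the zero value ("0001-01-01 00:00:00"), so no pull time is deducted and the SLO duration equals the end-to-end duration.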
Feb 02 22:47:30 crc kubenswrapper[4789]: I0202 22:47:30.550564 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9kpmh" Feb 02 22:47:30 crc kubenswrapper[4789]: I0202 22:47:30.550906 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9kpmh" Feb 02 22:47:31 crc kubenswrapper[4789]: I0202 22:47:31.626141 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9kpmh" podUID="922b07cb-53db-484a-a559-dee560091852" containerName="registry-server" probeResult="failure" output=< Feb 02 22:47:31 crc kubenswrapper[4789]: timeout: failed to connect service ":50051" within 1s Feb 02 22:47:31 crc kubenswrapper[4789]: > Feb 02 22:47:40 crc kubenswrapper[4789]: I0202 22:47:40.593101 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9kpmh" Feb 02 22:47:40 crc kubenswrapper[4789]: I0202 22:47:40.678549 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9kpmh" Feb 02 22:47:40 crc kubenswrapper[4789]: I0202 22:47:40.852515 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9kpmh"] Feb 02 22:47:41 crc kubenswrapper[4789]: I0202 22:47:41.825465 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9kpmh" podUID="922b07cb-53db-484a-a559-dee560091852" containerName="registry-server" containerID="cri-o://39d690c5932cecccfbe6b8cb6fabbe298544050ad536bb689c0232f0f61fb6c4" gracePeriod=2 Feb 02 22:47:42 crc kubenswrapper[4789]: I0202 22:47:42.352879 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9kpmh" Feb 02 22:47:42 crc kubenswrapper[4789]: I0202 22:47:42.459177 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/922b07cb-53db-484a-a559-dee560091852-utilities\") pod \"922b07cb-53db-484a-a559-dee560091852\" (UID: \"922b07cb-53db-484a-a559-dee560091852\") " Feb 02 22:47:42 crc kubenswrapper[4789]: I0202 22:47:42.459421 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8kx7\" (UniqueName: \"kubernetes.io/projected/922b07cb-53db-484a-a559-dee560091852-kube-api-access-c8kx7\") pod \"922b07cb-53db-484a-a559-dee560091852\" (UID: \"922b07cb-53db-484a-a559-dee560091852\") " Feb 02 22:47:42 crc kubenswrapper[4789]: I0202 22:47:42.459559 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/922b07cb-53db-484a-a559-dee560091852-catalog-content\") pod \"922b07cb-53db-484a-a559-dee560091852\" (UID: \"922b07cb-53db-484a-a559-dee560091852\") " Feb 02 22:47:42 crc kubenswrapper[4789]: I0202 22:47:42.460886 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/922b07cb-53db-484a-a559-dee560091852-utilities" (OuterVolumeSpecName: "utilities") pod "922b07cb-53db-484a-a559-dee560091852" (UID: "922b07cb-53db-484a-a559-dee560091852"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
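The startup-probe failure above (output "timeout: failed to connect service \":50051\" within 1s") is characteristic of a gRPC health check against the catalog registry's port 50051; once the registry-server answers, the probe flips to started and readiness follows at 22:47:40. A minimal checker in the same spirit, using the standard grpc_health_v1 protocol (a sketch assuming the google.golang.org/grpc packages; not the actual probe binary shipped in the catalog image):

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    // checkServing asks the standard gRPC health service at addr whether it is
    // SERVING, giving up after timeout (the probe above allowed 1s).
    func checkServing(addr string, timeout time.Duration) error {
        ctx, cancel := context.WithTimeout(context.Background(), timeout)
        defer cancel()

        conn, err := grpc.NewClient(addr, grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            return err
        }
        defer conn.Close()

        resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
        if err != nil {
            return fmt.Errorf("health RPC failed: %w", err)
        }
        if resp.Status != healthpb.HealthCheckResponse_SERVING {
            return fmt.Errorf("not serving: %v", resp.Status)
        }
        return nil
    }

    func main() {
        if err := checkServing("127.0.0.1:50051", time.Second); err != nil {
            fmt.Println("probe failed:", err)
            return
        }
        fmt.Println("probe ok")
    }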
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 22:47:42 crc kubenswrapper[4789]: I0202 22:47:42.469204 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/922b07cb-53db-484a-a559-dee560091852-kube-api-access-c8kx7" (OuterVolumeSpecName: "kube-api-access-c8kx7") pod "922b07cb-53db-484a-a559-dee560091852" (UID: "922b07cb-53db-484a-a559-dee560091852"). InnerVolumeSpecName "kube-api-access-c8kx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 22:47:42 crc kubenswrapper[4789]: I0202 22:47:42.562508 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/922b07cb-53db-484a-a559-dee560091852-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 22:47:42 crc kubenswrapper[4789]: I0202 22:47:42.562537 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8kx7\" (UniqueName: \"kubernetes.io/projected/922b07cb-53db-484a-a559-dee560091852-kube-api-access-c8kx7\") on node \"crc\" DevicePath \"\"" Feb 02 22:47:42 crc kubenswrapper[4789]: I0202 22:47:42.612388 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/922b07cb-53db-484a-a559-dee560091852-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "922b07cb-53db-484a-a559-dee560091852" (UID: "922b07cb-53db-484a-a559-dee560091852"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 22:47:42 crc kubenswrapper[4789]: I0202 22:47:42.664771 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/922b07cb-53db-484a-a559-dee560091852-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 22:47:42 crc kubenswrapper[4789]: I0202 22:47:42.856050 4789 generic.go:334] "Generic (PLEG): container finished" podID="922b07cb-53db-484a-a559-dee560091852" containerID="39d690c5932cecccfbe6b8cb6fabbe298544050ad536bb689c0232f0f61fb6c4" exitCode=0 Feb 02 22:47:42 crc kubenswrapper[4789]: I0202 22:47:42.856251 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kpmh" event={"ID":"922b07cb-53db-484a-a559-dee560091852","Type":"ContainerDied","Data":"39d690c5932cecccfbe6b8cb6fabbe298544050ad536bb689c0232f0f61fb6c4"} Feb 02 22:47:42 crc kubenswrapper[4789]: I0202 22:47:42.856340 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kpmh" event={"ID":"922b07cb-53db-484a-a559-dee560091852","Type":"ContainerDied","Data":"8a6271a04cb5b0d5210842c9a0ed8d6348f9d16a5227087caed0a22613453c6c"} Feb 02 22:47:42 crc kubenswrapper[4789]: I0202 22:47:42.856389 4789 scope.go:117] "RemoveContainer" containerID="39d690c5932cecccfbe6b8cb6fabbe298544050ad536bb689c0232f0f61fb6c4" Feb 02 22:47:42 crc kubenswrapper[4789]: I0202 22:47:42.856512 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9kpmh" Feb 02 22:47:42 crc kubenswrapper[4789]: I0202 22:47:42.885159 4789 scope.go:117] "RemoveContainer" containerID="5a7c9019a3d6cd9ea311d693ec3fac3cf165ec972329c86c350eb7af243b400e" Feb 02 22:47:42 crc kubenswrapper[4789]: I0202 22:47:42.913720 4789 scope.go:117] "RemoveContainer" containerID="2592ccf9d6c18e74dc08777ba1081b5601d89438eb1326e0010a28ea8ee80070" Feb 02 22:47:42 crc kubenswrapper[4789]: I0202 22:47:42.922306 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9kpmh"] Feb 02 22:47:42 crc kubenswrapper[4789]: I0202 22:47:42.930546 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9kpmh"] Feb 02 22:47:42 crc kubenswrapper[4789]: I0202 22:47:42.959622 4789 scope.go:117] "RemoveContainer" containerID="39d690c5932cecccfbe6b8cb6fabbe298544050ad536bb689c0232f0f61fb6c4" Feb 02 22:47:42 crc kubenswrapper[4789]: E0202 22:47:42.960304 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39d690c5932cecccfbe6b8cb6fabbe298544050ad536bb689c0232f0f61fb6c4\": container with ID starting with 39d690c5932cecccfbe6b8cb6fabbe298544050ad536bb689c0232f0f61fb6c4 not found: ID does not exist" containerID="39d690c5932cecccfbe6b8cb6fabbe298544050ad536bb689c0232f0f61fb6c4" Feb 02 22:47:42 crc kubenswrapper[4789]: I0202 22:47:42.960386 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39d690c5932cecccfbe6b8cb6fabbe298544050ad536bb689c0232f0f61fb6c4"} err="failed to get container status \"39d690c5932cecccfbe6b8cb6fabbe298544050ad536bb689c0232f0f61fb6c4\": rpc error: code = NotFound desc = could not find container \"39d690c5932cecccfbe6b8cb6fabbe298544050ad536bb689c0232f0f61fb6c4\": container with ID starting with 39d690c5932cecccfbe6b8cb6fabbe298544050ad536bb689c0232f0f61fb6c4 not found: ID does not exist" Feb 02 22:47:42 crc kubenswrapper[4789]: I0202 22:47:42.960438 4789 scope.go:117] "RemoveContainer" containerID="5a7c9019a3d6cd9ea311d693ec3fac3cf165ec972329c86c350eb7af243b400e" Feb 02 22:47:42 crc kubenswrapper[4789]: E0202 22:47:42.960807 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a7c9019a3d6cd9ea311d693ec3fac3cf165ec972329c86c350eb7af243b400e\": container with ID starting with 5a7c9019a3d6cd9ea311d693ec3fac3cf165ec972329c86c350eb7af243b400e not found: ID does not exist" containerID="5a7c9019a3d6cd9ea311d693ec3fac3cf165ec972329c86c350eb7af243b400e" Feb 02 22:47:42 crc kubenswrapper[4789]: I0202 22:47:42.960984 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a7c9019a3d6cd9ea311d693ec3fac3cf165ec972329c86c350eb7af243b400e"} err="failed to get container status \"5a7c9019a3d6cd9ea311d693ec3fac3cf165ec972329c86c350eb7af243b400e\": rpc error: code = NotFound desc = could not find container \"5a7c9019a3d6cd9ea311d693ec3fac3cf165ec972329c86c350eb7af243b400e\": container with ID starting with 5a7c9019a3d6cd9ea311d693ec3fac3cf165ec972329c86c350eb7af243b400e not found: ID does not exist" Feb 02 22:47:42 crc kubenswrapper[4789]: I0202 22:47:42.961093 4789 scope.go:117] "RemoveContainer" containerID="2592ccf9d6c18e74dc08777ba1081b5601d89438eb1326e0010a28ea8ee80070" Feb 02 22:47:42 crc kubenswrapper[4789]: E0202 22:47:42.961534 4789 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"2592ccf9d6c18e74dc08777ba1081b5601d89438eb1326e0010a28ea8ee80070\": container with ID starting with 2592ccf9d6c18e74dc08777ba1081b5601d89438eb1326e0010a28ea8ee80070 not found: ID does not exist" containerID="2592ccf9d6c18e74dc08777ba1081b5601d89438eb1326e0010a28ea8ee80070" Feb 02 22:47:42 crc kubenswrapper[4789]: I0202 22:47:42.961648 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2592ccf9d6c18e74dc08777ba1081b5601d89438eb1326e0010a28ea8ee80070"} err="failed to get container status \"2592ccf9d6c18e74dc08777ba1081b5601d89438eb1326e0010a28ea8ee80070\": rpc error: code = NotFound desc = could not find container \"2592ccf9d6c18e74dc08777ba1081b5601d89438eb1326e0010a28ea8ee80070\": container with ID starting with 2592ccf9d6c18e74dc08777ba1081b5601d89438eb1326e0010a28ea8ee80070 not found: ID does not exist" Feb 02 22:47:44 crc kubenswrapper[4789]: I0202 22:47:44.451937 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="922b07cb-53db-484a-a559-dee560091852" path="/var/lib/kubelet/pods/922b07cb-53db-484a-a559-dee560091852/volumes" Feb 02 22:47:48 crc kubenswrapper[4789]: E0202 22:47:48.232909 4789 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.189:45562->38.102.83.189:36729: read tcp 38.102.83.189:45562->38.102.83.189:36729: read: connection reset by peer Feb 02 22:47:52 crc kubenswrapper[4789]: I0202 22:47:52.842151 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 22:47:52 crc kubenswrapper[4789]: I0202 22:47:52.842546 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 22:47:52 crc kubenswrapper[4789]: I0202 22:47:52.842623 4789 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" Feb 02 22:47:52 crc kubenswrapper[4789]: I0202 22:47:52.843703 4789 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"16b35fede0307e2768edeae9bc17688f5a8ff6427f2ae9305826087a6effbd74"} pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 22:47:52 crc kubenswrapper[4789]: I0202 22:47:52.843769 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" containerID="cri-o://16b35fede0307e2768edeae9bc17688f5a8ff6427f2ae9305826087a6effbd74" gracePeriod=600 Feb 02 22:47:52 crc kubenswrapper[4789]: E0202 22:47:52.979909 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Feb 02 22:47:53 crc kubenswrapper[4789]: I0202 22:47:53.985629 4789 generic.go:334] "Generic (PLEG): container finished" podID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerID="16b35fede0307e2768edeae9bc17688f5a8ff6427f2ae9305826087a6effbd74" exitCode=0 Feb 02 22:47:53 crc kubenswrapper[4789]: I0202 22:47:53.985712 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" event={"ID":"bdf018b4-1451-4d37-be6e-05802b67c73e","Type":"ContainerDied","Data":"16b35fede0307e2768edeae9bc17688f5a8ff6427f2ae9305826087a6effbd74"} Feb 02 22:47:53 crc kubenswrapper[4789]: I0202 22:47:53.985770 4789 scope.go:117] "RemoveContainer" containerID="7ba14292c23dc4f23f84b7674c42768a164c25b022f823f0ebbe7a54004bb378" Feb 02 22:47:53 crc kubenswrapper[4789]: I0202 22:47:53.986773 4789 scope.go:117] "RemoveContainer" containerID="16b35fede0307e2768edeae9bc17688f5a8ff6427f2ae9305826087a6effbd74" Feb 02 22:47:53 crc kubenswrapper[4789]: E0202 22:47:53.987258 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:48:07 crc kubenswrapper[4789]: I0202 22:48:07.419144 4789 scope.go:117] "RemoveContainer" containerID="16b35fede0307e2768edeae9bc17688f5a8ff6427f2ae9305826087a6effbd74" Feb 02 22:48:07 crc kubenswrapper[4789]: E0202 22:48:07.419859 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:48:20 crc kubenswrapper[4789]: I0202 22:48:20.429328 4789 scope.go:117] "RemoveContainer" containerID="16b35fede0307e2768edeae9bc17688f5a8ff6427f2ae9305826087a6effbd74" Feb 02 22:48:20 crc kubenswrapper[4789]: E0202 22:48:20.431265 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:48:33 crc kubenswrapper[4789]: I0202 22:48:33.419852 4789 scope.go:117] "RemoveContainer" containerID="16b35fede0307e2768edeae9bc17688f5a8ff6427f2ae9305826087a6effbd74" Feb 02 22:48:33 crc kubenswrapper[4789]: E0202 22:48:33.420956 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon
pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e"
Feb 02 22:48:44 crc kubenswrapper[4789]: I0202 22:48:44.060343 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-xs9v2"]
Feb 02 22:48:44 crc kubenswrapper[4789]: I0202 22:48:44.073842 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-xs9v2"]
Feb 02 22:48:44 crc kubenswrapper[4789]: I0202 22:48:44.438463 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="153c6b69-6597-4b78-bbd1-99f7bf407c2f" path="/var/lib/kubelet/pods/153c6b69-6597-4b78-bbd1-99f7bf407c2f/volumes"
Feb 02 22:48:47 crc kubenswrapper[4789]: I0202 22:48:47.419760 4789 scope.go:117] "RemoveContainer" containerID="16b35fede0307e2768edeae9bc17688f5a8ff6427f2ae9305826087a6effbd74"
Feb 02 22:48:47 crc kubenswrapper[4789]: E0202 22:48:47.420401 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e"
Feb 02 22:48:48 crc kubenswrapper[4789]: I0202 22:48:48.399990 4789 scope.go:117] "RemoveContainer" containerID="2795663c70bc821bd41ea770940adcaa5b72b4ca8b5d4823cc32028558cf62c8"
Feb 02 22:48:59 crc kubenswrapper[4789]: I0202 22:48:59.419832 4789 scope.go:117] "RemoveContainer" containerID="16b35fede0307e2768edeae9bc17688f5a8ff6427f2ae9305826087a6effbd74"
Feb 02 22:48:59 crc kubenswrapper[4789]: E0202 22:48:59.420761 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e"
Feb 02 22:49:12 crc kubenswrapper[4789]: I0202 22:49:12.420872 4789 scope.go:117] "RemoveContainer" containerID="16b35fede0307e2768edeae9bc17688f5a8ff6427f2ae9305826087a6effbd74"
Feb 02 22:49:12 crc kubenswrapper[4789]: E0202 22:49:12.421914 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e"
Feb 02 22:49:27 crc kubenswrapper[4789]: I0202 22:49:27.419231 4789 scope.go:117] "RemoveContainer" containerID="16b35fede0307e2768edeae9bc17688f5a8ff6427f2ae9305826087a6effbd74"
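
The entries above are the full liveness-failure path for machine-config-daemon-c8vcn: the probe on 127.0.0.1:8798 fails, the kubelet kills container 16b35fede...bd74 with a grace period, and every later restart attempt is rejected by pod_workers with the quoted "back-off 5m0s" CrashLoopBackOff error while the sync loop re-logs the RemoveContainer/"Error syncing pod" pair (22:48:47, 22:48:59, 22:49:12, 22:49:27, ...). The 5m0s is the cap of the kubelet's per-container restart backoff, which doubles per failed restart; a minimal Go sketch of that doubling-with-cap shape (illustrative only — kubelet implements this via client-go's flowcontrol.Backoff, and the 10s base here is an assumption, not taken from this log):

package main

import (
	"fmt"
	"time"
)

// nextBackoff doubles the delay for each failed restart and clamps it at
// max — the "back-off 5m0s" quoted in the pod_workers errors above.
func nextBackoff(restarts int, base, max time.Duration) time.Duration {
	d := base
	for i := 0; i < restarts; i++ {
		d *= 2
		if d >= max {
			return max
		}
	}
	return d
}

func main() {
	for r := 0; r <= 6; r++ {
		fmt.Printf("restart %d -> wait %v\n", r, nextBackoff(r, 10*time.Second, 5*time.Minute))
	}
	// restart 0 -> 10s, 1 -> 20s, ... restart 5+ -> 5m0s; once the cap is
	// reached, only the backoff error recurs until the timer expires.
}

Feb 02 22:49:27 crc kubenswrapper[4789]: E0202 22:49:27.419949 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 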
pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:49:39 crc kubenswrapper[4789]: I0202 22:49:39.420514 4789 scope.go:117] "RemoveContainer" containerID="16b35fede0307e2768edeae9bc17688f5a8ff6427f2ae9305826087a6effbd74" Feb 02 22:49:39 crc kubenswrapper[4789]: E0202 22:49:39.421760 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:49:46 crc kubenswrapper[4789]: I0202 22:49:46.970139 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-sssmb/must-gather-nkf86"] Feb 02 22:49:46 crc kubenswrapper[4789]: E0202 22:49:46.971664 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="922b07cb-53db-484a-a559-dee560091852" containerName="registry-server" Feb 02 22:49:46 crc kubenswrapper[4789]: I0202 22:49:46.971683 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="922b07cb-53db-484a-a559-dee560091852" containerName="registry-server" Feb 02 22:49:46 crc kubenswrapper[4789]: E0202 22:49:46.971714 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="922b07cb-53db-484a-a559-dee560091852" containerName="extract-utilities" Feb 02 22:49:46 crc kubenswrapper[4789]: I0202 22:49:46.971724 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="922b07cb-53db-484a-a559-dee560091852" containerName="extract-utilities" Feb 02 22:49:46 crc kubenswrapper[4789]: E0202 22:49:46.971736 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="922b07cb-53db-484a-a559-dee560091852" containerName="extract-content" Feb 02 22:49:46 crc kubenswrapper[4789]: I0202 22:49:46.971744 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="922b07cb-53db-484a-a559-dee560091852" containerName="extract-content" Feb 02 22:49:46 crc kubenswrapper[4789]: I0202 22:49:46.971907 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="922b07cb-53db-484a-a559-dee560091852" containerName="registry-server" Feb 02 22:49:46 crc kubenswrapper[4789]: I0202 22:49:46.974335 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sssmb/must-gather-nkf86" Feb 02 22:49:46 crc kubenswrapper[4789]: I0202 22:49:46.975987 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-sssmb"/"default-dockercfg-hh69m" Feb 02 22:49:46 crc kubenswrapper[4789]: I0202 22:49:46.976171 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-sssmb"/"kube-root-ca.crt" Feb 02 22:49:46 crc kubenswrapper[4789]: I0202 22:49:46.977698 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-sssmb"/"openshift-service-ca.crt" Feb 02 22:49:46 crc kubenswrapper[4789]: I0202 22:49:46.987142 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-sssmb/must-gather-nkf86"] Feb 02 22:49:47 crc kubenswrapper[4789]: I0202 22:49:47.024913 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7dq2\" (UniqueName: \"kubernetes.io/projected/e03574ac-341d-45ed-b979-12a6b34b7695-kube-api-access-m7dq2\") pod \"must-gather-nkf86\" (UID: \"e03574ac-341d-45ed-b979-12a6b34b7695\") " pod="openshift-must-gather-sssmb/must-gather-nkf86" Feb 02 22:49:47 crc kubenswrapper[4789]: I0202 22:49:47.024981 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e03574ac-341d-45ed-b979-12a6b34b7695-must-gather-output\") pod \"must-gather-nkf86\" (UID: \"e03574ac-341d-45ed-b979-12a6b34b7695\") " pod="openshift-must-gather-sssmb/must-gather-nkf86" Feb 02 22:49:47 crc kubenswrapper[4789]: I0202 22:49:47.127283 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7dq2\" (UniqueName: \"kubernetes.io/projected/e03574ac-341d-45ed-b979-12a6b34b7695-kube-api-access-m7dq2\") pod \"must-gather-nkf86\" (UID: \"e03574ac-341d-45ed-b979-12a6b34b7695\") " pod="openshift-must-gather-sssmb/must-gather-nkf86" Feb 02 22:49:47 crc kubenswrapper[4789]: I0202 22:49:47.127347 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e03574ac-341d-45ed-b979-12a6b34b7695-must-gather-output\") pod \"must-gather-nkf86\" (UID: \"e03574ac-341d-45ed-b979-12a6b34b7695\") " pod="openshift-must-gather-sssmb/must-gather-nkf86" Feb 02 22:49:47 crc kubenswrapper[4789]: I0202 22:49:47.127850 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e03574ac-341d-45ed-b979-12a6b34b7695-must-gather-output\") pod \"must-gather-nkf86\" (UID: \"e03574ac-341d-45ed-b979-12a6b34b7695\") " pod="openshift-must-gather-sssmb/must-gather-nkf86" Feb 02 22:49:47 crc kubenswrapper[4789]: I0202 22:49:47.162085 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7dq2\" (UniqueName: \"kubernetes.io/projected/e03574ac-341d-45ed-b979-12a6b34b7695-kube-api-access-m7dq2\") pod \"must-gather-nkf86\" (UID: \"e03574ac-341d-45ed-b979-12a6b34b7695\") " pod="openshift-must-gather-sssmb/must-gather-nkf86" Feb 02 22:49:47 crc kubenswrapper[4789]: I0202 22:49:47.328135 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sssmb/must-gather-nkf86" Feb 02 22:49:47 crc kubenswrapper[4789]: I0202 22:49:47.778203 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-sssmb/must-gather-nkf86"] Feb 02 22:49:47 crc kubenswrapper[4789]: W0202 22:49:47.779705 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode03574ac_341d_45ed_b979_12a6b34b7695.slice/crio-6f715599c311cc9d4a83915bf704bf3e1e94389ae71e02b26ad0e33e7385fa53 WatchSource:0}: Error finding container 6f715599c311cc9d4a83915bf704bf3e1e94389ae71e02b26ad0e33e7385fa53: Status 404 returned error can't find the container with id 6f715599c311cc9d4a83915bf704bf3e1e94389ae71e02b26ad0e33e7385fa53 Feb 02 22:49:48 crc kubenswrapper[4789]: I0202 22:49:48.145824 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sssmb/must-gather-nkf86" event={"ID":"e03574ac-341d-45ed-b979-12a6b34b7695","Type":"ContainerStarted","Data":"6f715599c311cc9d4a83915bf704bf3e1e94389ae71e02b26ad0e33e7385fa53"} Feb 02 22:49:52 crc kubenswrapper[4789]: I0202 22:49:52.435670 4789 scope.go:117] "RemoveContainer" containerID="16b35fede0307e2768edeae9bc17688f5a8ff6427f2ae9305826087a6effbd74" Feb 02 22:49:52 crc kubenswrapper[4789]: E0202 22:49:52.437023 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:49:53 crc kubenswrapper[4789]: I0202 22:49:53.196369 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sssmb/must-gather-nkf86" event={"ID":"e03574ac-341d-45ed-b979-12a6b34b7695","Type":"ContainerStarted","Data":"884eb54b44dcd39fcdda827d93fe73ad6d14ff5d9ce7807cef7f3f87ea950f0f"} Feb 02 22:49:53 crc kubenswrapper[4789]: I0202 22:49:53.196817 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sssmb/must-gather-nkf86" event={"ID":"e03574ac-341d-45ed-b979-12a6b34b7695","Type":"ContainerStarted","Data":"e5ac45ded34208685604b81214cf00ee100ff5905343616910ccf72f5f37ed1d"} Feb 02 22:49:53 crc kubenswrapper[4789]: I0202 22:49:53.219565 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-sssmb/must-gather-nkf86" podStartSLOduration=2.671667162 podStartE2EDuration="7.219546182s" podCreationTimestamp="2026-02-02 22:49:46 +0000 UTC" firstStartedPulling="2026-02-02 22:49:47.783011064 +0000 UTC m=+5408.078036083" lastFinishedPulling="2026-02-02 22:49:52.330890074 +0000 UTC m=+5412.625915103" observedRunningTime="2026-02-02 22:49:53.21378683 +0000 UTC m=+5413.508811859" watchObservedRunningTime="2026-02-02 22:49:53.219546182 +0000 UTC m=+5413.514571221" Feb 02 22:49:55 crc kubenswrapper[4789]: I0202 22:49:55.494931 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-sssmb/crc-debug-lsz5h"] Feb 02 22:49:55 crc kubenswrapper[4789]: I0202 22:49:55.496350 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sssmb/crc-debug-lsz5h" Feb 02 22:49:55 crc kubenswrapper[4789]: I0202 22:49:55.686605 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b3d50e6e-f4bb-407b-bf1f-c8eea8ded318-host\") pod \"crc-debug-lsz5h\" (UID: \"b3d50e6e-f4bb-407b-bf1f-c8eea8ded318\") " pod="openshift-must-gather-sssmb/crc-debug-lsz5h" Feb 02 22:49:55 crc kubenswrapper[4789]: I0202 22:49:55.686847 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frmzs\" (UniqueName: \"kubernetes.io/projected/b3d50e6e-f4bb-407b-bf1f-c8eea8ded318-kube-api-access-frmzs\") pod \"crc-debug-lsz5h\" (UID: \"b3d50e6e-f4bb-407b-bf1f-c8eea8ded318\") " pod="openshift-must-gather-sssmb/crc-debug-lsz5h" Feb 02 22:49:55 crc kubenswrapper[4789]: I0202 22:49:55.788112 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b3d50e6e-f4bb-407b-bf1f-c8eea8ded318-host\") pod \"crc-debug-lsz5h\" (UID: \"b3d50e6e-f4bb-407b-bf1f-c8eea8ded318\") " pod="openshift-must-gather-sssmb/crc-debug-lsz5h" Feb 02 22:49:55 crc kubenswrapper[4789]: I0202 22:49:55.788206 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frmzs\" (UniqueName: \"kubernetes.io/projected/b3d50e6e-f4bb-407b-bf1f-c8eea8ded318-kube-api-access-frmzs\") pod \"crc-debug-lsz5h\" (UID: \"b3d50e6e-f4bb-407b-bf1f-c8eea8ded318\") " pod="openshift-must-gather-sssmb/crc-debug-lsz5h" Feb 02 22:49:55 crc kubenswrapper[4789]: I0202 22:49:55.788301 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b3d50e6e-f4bb-407b-bf1f-c8eea8ded318-host\") pod \"crc-debug-lsz5h\" (UID: \"b3d50e6e-f4bb-407b-bf1f-c8eea8ded318\") " pod="openshift-must-gather-sssmb/crc-debug-lsz5h" Feb 02 22:49:55 crc kubenswrapper[4789]: I0202 22:49:55.819559 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frmzs\" (UniqueName: \"kubernetes.io/projected/b3d50e6e-f4bb-407b-bf1f-c8eea8ded318-kube-api-access-frmzs\") pod \"crc-debug-lsz5h\" (UID: \"b3d50e6e-f4bb-407b-bf1f-c8eea8ded318\") " pod="openshift-must-gather-sssmb/crc-debug-lsz5h" Feb 02 22:49:56 crc kubenswrapper[4789]: I0202 22:49:56.117218 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sssmb/crc-debug-lsz5h" Feb 02 22:49:56 crc kubenswrapper[4789]: W0202 22:49:56.151305 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3d50e6e_f4bb_407b_bf1f_c8eea8ded318.slice/crio-4224cfe16c7116f59c4ee44506ba51d980c0925fdc1e5f36b1cc3d53868b886b WatchSource:0}: Error finding container 4224cfe16c7116f59c4ee44506ba51d980c0925fdc1e5f36b1cc3d53868b886b: Status 404 returned error can't find the container with id 4224cfe16c7116f59c4ee44506ba51d980c0925fdc1e5f36b1cc3d53868b886b Feb 02 22:49:56 crc kubenswrapper[4789]: I0202 22:49:56.218766 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sssmb/crc-debug-lsz5h" event={"ID":"b3d50e6e-f4bb-407b-bf1f-c8eea8ded318","Type":"ContainerStarted","Data":"4224cfe16c7116f59c4ee44506ba51d980c0925fdc1e5f36b1cc3d53868b886b"} Feb 02 22:50:06 crc kubenswrapper[4789]: I0202 22:50:06.300475 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sssmb/crc-debug-lsz5h" event={"ID":"b3d50e6e-f4bb-407b-bf1f-c8eea8ded318","Type":"ContainerStarted","Data":"f1dbfbb10046c4e8a93ab32efb973bf756440214ba330725bf9417a5ca10d47c"} Feb 02 22:50:06 crc kubenswrapper[4789]: I0202 22:50:06.322063 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-sssmb/crc-debug-lsz5h" podStartSLOduration=1.486482498 podStartE2EDuration="11.322044007s" podCreationTimestamp="2026-02-02 22:49:55 +0000 UTC" firstStartedPulling="2026-02-02 22:49:56.154087228 +0000 UTC m=+5416.449112287" lastFinishedPulling="2026-02-02 22:50:05.989648777 +0000 UTC m=+5426.284673796" observedRunningTime="2026-02-02 22:50:06.315451632 +0000 UTC m=+5426.610476651" watchObservedRunningTime="2026-02-02 22:50:06.322044007 +0000 UTC m=+5426.617069026" Feb 02 22:50:06 crc kubenswrapper[4789]: I0202 22:50:06.420111 4789 scope.go:117] "RemoveContainer" containerID="16b35fede0307e2768edeae9bc17688f5a8ff6427f2ae9305826087a6effbd74" Feb 02 22:50:06 crc kubenswrapper[4789]: E0202 22:50:06.420539 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:50:18 crc kubenswrapper[4789]: I0202 22:50:18.420520 4789 scope.go:117] "RemoveContainer" containerID="16b35fede0307e2768edeae9bc17688f5a8ff6427f2ae9305826087a6effbd74" Feb 02 22:50:18 crc kubenswrapper[4789]: E0202 22:50:18.421754 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:50:25 crc kubenswrapper[4789]: I0202 22:50:25.466443 4789 generic.go:334] "Generic (PLEG): container finished" podID="b3d50e6e-f4bb-407b-bf1f-c8eea8ded318" containerID="f1dbfbb10046c4e8a93ab32efb973bf756440214ba330725bf9417a5ca10d47c" exitCode=0 Feb 02 22:50:25 crc kubenswrapper[4789]: I0202 22:50:25.466546 
4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sssmb/crc-debug-lsz5h" event={"ID":"b3d50e6e-f4bb-407b-bf1f-c8eea8ded318","Type":"ContainerDied","Data":"f1dbfbb10046c4e8a93ab32efb973bf756440214ba330725bf9417a5ca10d47c"} Feb 02 22:50:26 crc kubenswrapper[4789]: I0202 22:50:26.590631 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sssmb/crc-debug-lsz5h" Feb 02 22:50:26 crc kubenswrapper[4789]: I0202 22:50:26.620700 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-sssmb/crc-debug-lsz5h"] Feb 02 22:50:26 crc kubenswrapper[4789]: I0202 22:50:26.626507 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-sssmb/crc-debug-lsz5h"] Feb 02 22:50:26 crc kubenswrapper[4789]: I0202 22:50:26.686834 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b3d50e6e-f4bb-407b-bf1f-c8eea8ded318-host\") pod \"b3d50e6e-f4bb-407b-bf1f-c8eea8ded318\" (UID: \"b3d50e6e-f4bb-407b-bf1f-c8eea8ded318\") " Feb 02 22:50:26 crc kubenswrapper[4789]: I0202 22:50:26.687484 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frmzs\" (UniqueName: \"kubernetes.io/projected/b3d50e6e-f4bb-407b-bf1f-c8eea8ded318-kube-api-access-frmzs\") pod \"b3d50e6e-f4bb-407b-bf1f-c8eea8ded318\" (UID: \"b3d50e6e-f4bb-407b-bf1f-c8eea8ded318\") " Feb 02 22:50:26 crc kubenswrapper[4789]: I0202 22:50:26.686954 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3d50e6e-f4bb-407b-bf1f-c8eea8ded318-host" (OuterVolumeSpecName: "host") pod "b3d50e6e-f4bb-407b-bf1f-c8eea8ded318" (UID: "b3d50e6e-f4bb-407b-bf1f-c8eea8ded318"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 22:50:26 crc kubenswrapper[4789]: I0202 22:50:26.688474 4789 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b3d50e6e-f4bb-407b-bf1f-c8eea8ded318-host\") on node \"crc\" DevicePath \"\"" Feb 02 22:50:26 crc kubenswrapper[4789]: I0202 22:50:26.707144 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3d50e6e-f4bb-407b-bf1f-c8eea8ded318-kube-api-access-frmzs" (OuterVolumeSpecName: "kube-api-access-frmzs") pod "b3d50e6e-f4bb-407b-bf1f-c8eea8ded318" (UID: "b3d50e6e-f4bb-407b-bf1f-c8eea8ded318"). InnerVolumeSpecName "kube-api-access-frmzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 22:50:26 crc kubenswrapper[4789]: I0202 22:50:26.790273 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frmzs\" (UniqueName: \"kubernetes.io/projected/b3d50e6e-f4bb-407b-bf1f-c8eea8ded318-kube-api-access-frmzs\") on node \"crc\" DevicePath \"\"" Feb 02 22:50:27 crc kubenswrapper[4789]: I0202 22:50:27.502613 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4224cfe16c7116f59c4ee44506ba51d980c0925fdc1e5f36b1cc3d53868b886b" Feb 02 22:50:27 crc kubenswrapper[4789]: I0202 22:50:27.502697 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sssmb/crc-debug-lsz5h" Feb 02 22:50:27 crc kubenswrapper[4789]: I0202 22:50:27.873301 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-sssmb/crc-debug-ph9z6"] Feb 02 22:50:27 crc kubenswrapper[4789]: E0202 22:50:27.873606 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d50e6e-f4bb-407b-bf1f-c8eea8ded318" containerName="container-00" Feb 02 22:50:27 crc kubenswrapper[4789]: I0202 22:50:27.873618 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d50e6e-f4bb-407b-bf1f-c8eea8ded318" containerName="container-00" Feb 02 22:50:27 crc kubenswrapper[4789]: I0202 22:50:27.873786 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d50e6e-f4bb-407b-bf1f-c8eea8ded318" containerName="container-00" Feb 02 22:50:27 crc kubenswrapper[4789]: I0202 22:50:27.874248 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sssmb/crc-debug-ph9z6" Feb 02 22:50:28 crc kubenswrapper[4789]: I0202 22:50:28.009200 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf55d\" (UniqueName: \"kubernetes.io/projected/1cb1fd34-bc4b-4e66-a0f8-b2cbe8d174d5-kube-api-access-bf55d\") pod \"crc-debug-ph9z6\" (UID: \"1cb1fd34-bc4b-4e66-a0f8-b2cbe8d174d5\") " pod="openshift-must-gather-sssmb/crc-debug-ph9z6" Feb 02 22:50:28 crc kubenswrapper[4789]: I0202 22:50:28.009313 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1cb1fd34-bc4b-4e66-a0f8-b2cbe8d174d5-host\") pod \"crc-debug-ph9z6\" (UID: \"1cb1fd34-bc4b-4e66-a0f8-b2cbe8d174d5\") " pod="openshift-must-gather-sssmb/crc-debug-ph9z6" Feb 02 22:50:28 crc kubenswrapper[4789]: I0202 22:50:28.111279 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf55d\" (UniqueName: \"kubernetes.io/projected/1cb1fd34-bc4b-4e66-a0f8-b2cbe8d174d5-kube-api-access-bf55d\") pod \"crc-debug-ph9z6\" (UID: \"1cb1fd34-bc4b-4e66-a0f8-b2cbe8d174d5\") " pod="openshift-must-gather-sssmb/crc-debug-ph9z6" Feb 02 22:50:28 crc kubenswrapper[4789]: I0202 22:50:28.111930 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1cb1fd34-bc4b-4e66-a0f8-b2cbe8d174d5-host\") pod \"crc-debug-ph9z6\" (UID: \"1cb1fd34-bc4b-4e66-a0f8-b2cbe8d174d5\") " pod="openshift-must-gather-sssmb/crc-debug-ph9z6" Feb 02 22:50:28 crc kubenswrapper[4789]: I0202 22:50:28.112087 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1cb1fd34-bc4b-4e66-a0f8-b2cbe8d174d5-host\") pod \"crc-debug-ph9z6\" (UID: \"1cb1fd34-bc4b-4e66-a0f8-b2cbe8d174d5\") " pod="openshift-must-gather-sssmb/crc-debug-ph9z6" Feb 02 22:50:28 crc kubenswrapper[4789]: I0202 22:50:28.135122 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf55d\" (UniqueName: \"kubernetes.io/projected/1cb1fd34-bc4b-4e66-a0f8-b2cbe8d174d5-kube-api-access-bf55d\") pod \"crc-debug-ph9z6\" (UID: \"1cb1fd34-bc4b-4e66-a0f8-b2cbe8d174d5\") " pod="openshift-must-gather-sssmb/crc-debug-ph9z6" Feb 02 22:50:28 crc kubenswrapper[4789]: I0202 22:50:28.192336 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sssmb/crc-debug-ph9z6" Feb 02 22:50:28 crc kubenswrapper[4789]: I0202 22:50:28.433870 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3d50e6e-f4bb-407b-bf1f-c8eea8ded318" path="/var/lib/kubelet/pods/b3d50e6e-f4bb-407b-bf1f-c8eea8ded318/volumes" Feb 02 22:50:28 crc kubenswrapper[4789]: I0202 22:50:28.514028 4789 generic.go:334] "Generic (PLEG): container finished" podID="1cb1fd34-bc4b-4e66-a0f8-b2cbe8d174d5" containerID="349de262e66063ac07a8141ba49b85c7108384bd67c6f2da62cabb05ccb2e628" exitCode=1 Feb 02 22:50:28 crc kubenswrapper[4789]: I0202 22:50:28.514068 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sssmb/crc-debug-ph9z6" event={"ID":"1cb1fd34-bc4b-4e66-a0f8-b2cbe8d174d5","Type":"ContainerDied","Data":"349de262e66063ac07a8141ba49b85c7108384bd67c6f2da62cabb05ccb2e628"} Feb 02 22:50:28 crc kubenswrapper[4789]: I0202 22:50:28.514110 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sssmb/crc-debug-ph9z6" event={"ID":"1cb1fd34-bc4b-4e66-a0f8-b2cbe8d174d5","Type":"ContainerStarted","Data":"53ebb211f92a65779b6a114f81d9b0d0929333a34b342902a2fc52d6b17063bc"} Feb 02 22:50:28 crc kubenswrapper[4789]: I0202 22:50:28.561344 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-sssmb/crc-debug-ph9z6"] Feb 02 22:50:28 crc kubenswrapper[4789]: I0202 22:50:28.569380 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-sssmb/crc-debug-ph9z6"] Feb 02 22:50:29 crc kubenswrapper[4789]: I0202 22:50:29.623862 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sssmb/crc-debug-ph9z6" Feb 02 22:50:29 crc kubenswrapper[4789]: I0202 22:50:29.756811 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1cb1fd34-bc4b-4e66-a0f8-b2cbe8d174d5-host\") pod \"1cb1fd34-bc4b-4e66-a0f8-b2cbe8d174d5\" (UID: \"1cb1fd34-bc4b-4e66-a0f8-b2cbe8d174d5\") " Feb 02 22:50:29 crc kubenswrapper[4789]: I0202 22:50:29.757221 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf55d\" (UniqueName: \"kubernetes.io/projected/1cb1fd34-bc4b-4e66-a0f8-b2cbe8d174d5-kube-api-access-bf55d\") pod \"1cb1fd34-bc4b-4e66-a0f8-b2cbe8d174d5\" (UID: \"1cb1fd34-bc4b-4e66-a0f8-b2cbe8d174d5\") " Feb 02 22:50:29 crc kubenswrapper[4789]: I0202 22:50:29.756980 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cb1fd34-bc4b-4e66-a0f8-b2cbe8d174d5-host" (OuterVolumeSpecName: "host") pod "1cb1fd34-bc4b-4e66-a0f8-b2cbe8d174d5" (UID: "1cb1fd34-bc4b-4e66-a0f8-b2cbe8d174d5"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 22:50:29 crc kubenswrapper[4789]: I0202 22:50:29.757740 4789 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1cb1fd34-bc4b-4e66-a0f8-b2cbe8d174d5-host\") on node \"crc\" DevicePath \"\"" Feb 02 22:50:29 crc kubenswrapper[4789]: I0202 22:50:29.763995 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cb1fd34-bc4b-4e66-a0f8-b2cbe8d174d5-kube-api-access-bf55d" (OuterVolumeSpecName: "kube-api-access-bf55d") pod "1cb1fd34-bc4b-4e66-a0f8-b2cbe8d174d5" (UID: "1cb1fd34-bc4b-4e66-a0f8-b2cbe8d174d5"). InnerVolumeSpecName "kube-api-access-bf55d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 22:50:29 crc kubenswrapper[4789]: I0202 22:50:29.859013 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf55d\" (UniqueName: \"kubernetes.io/projected/1cb1fd34-bc4b-4e66-a0f8-b2cbe8d174d5-kube-api-access-bf55d\") on node \"crc\" DevicePath \"\"" Feb 02 22:50:30 crc kubenswrapper[4789]: I0202 22:50:30.437139 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cb1fd34-bc4b-4e66-a0f8-b2cbe8d174d5" path="/var/lib/kubelet/pods/1cb1fd34-bc4b-4e66-a0f8-b2cbe8d174d5/volumes" Feb 02 22:50:30 crc kubenswrapper[4789]: I0202 22:50:30.542329 4789 scope.go:117] "RemoveContainer" containerID="349de262e66063ac07a8141ba49b85c7108384bd67c6f2da62cabb05ccb2e628" Feb 02 22:50:30 crc kubenswrapper[4789]: I0202 22:50:30.542410 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sssmb/crc-debug-ph9z6" Feb 02 22:50:32 crc kubenswrapper[4789]: I0202 22:50:32.420992 4789 scope.go:117] "RemoveContainer" containerID="16b35fede0307e2768edeae9bc17688f5a8ff6427f2ae9305826087a6effbd74" Feb 02 22:50:32 crc kubenswrapper[4789]: E0202 22:50:32.421511 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:50:46 crc kubenswrapper[4789]: I0202 22:50:46.421932 4789 scope.go:117] "RemoveContainer" containerID="16b35fede0307e2768edeae9bc17688f5a8ff6427f2ae9305826087a6effbd74" Feb 02 22:50:46 crc kubenswrapper[4789]: E0202 22:50:46.423053 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:50:49 crc kubenswrapper[4789]: I0202 22:50:49.171682 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8596d5dcd9-r2chc_bb3a48bd-007e-4640-bf96-9ea8b39a12e2/init/0.log" Feb 02 22:50:49 crc kubenswrapper[4789]: I0202 22:50:49.354498 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8596d5dcd9-r2chc_bb3a48bd-007e-4640-bf96-9ea8b39a12e2/init/0.log" Feb 02 22:50:49 crc kubenswrapper[4789]: I0202 22:50:49.381301 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8596d5dcd9-r2chc_bb3a48bd-007e-4640-bf96-9ea8b39a12e2/dnsmasq-dns/0.log" Feb 02 22:50:49 crc kubenswrapper[4789]: I0202 22:50:49.497716 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5c92-account-create-update-4shd8_c7308363-74f5-4ae7-8695-3017babea57c/mariadb-account-create-update/0.log" Feb 02 22:50:49 crc kubenswrapper[4789]: I0202 22:50:49.551141 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-694d6fd58-9xbpn_714e1927-ff76-40b6-8134-dc6d92c4c226/keystone-api/0.log" Feb 02 22:50:49 crc kubenswrapper[4789]: I0202 22:50:49.747410 4789 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-bootstrap-frq2r_52d3c3df-2d71-4415-ae65-7301f4157711/keystone-bootstrap/0.log" Feb 02 22:50:49 crc kubenswrapper[4789]: I0202 22:50:49.817499 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-db-create-r4kr2_a93cc873-6b0b-4eb8-be87-acda79c160af/mariadb-database-create/0.log" Feb 02 22:50:49 crc kubenswrapper[4789]: I0202 22:50:49.963681 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-db-sync-vhtsg_4cd23310-ee19-467d-8534-f7daed0233e8/keystone-db-sync/0.log" Feb 02 22:50:50 crc kubenswrapper[4789]: I0202 22:50:50.014142 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-copy-data_0b49c7ba-8420-4c1c-9b1a-68242591b0c8/adoption/0.log" Feb 02 22:50:50 crc kubenswrapper[4789]: I0202 22:50:50.243597 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_1617a40a-8c8d-4940-b8d0-bc501567c07d/mysql-bootstrap/0.log" Feb 02 22:50:50 crc kubenswrapper[4789]: I0202 22:50:50.400616 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_1617a40a-8c8d-4940-b8d0-bc501567c07d/mysql-bootstrap/0.log" Feb 02 22:50:50 crc kubenswrapper[4789]: I0202 22:50:50.439282 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_1617a40a-8c8d-4940-b8d0-bc501567c07d/galera/0.log" Feb 02 22:50:50 crc kubenswrapper[4789]: I0202 22:50:50.539709 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_69ba12ec-7b8a-4ef5-8e4a-7ff0adda7040/memcached/0.log" Feb 02 22:50:50 crc kubenswrapper[4789]: I0202 22:50:50.614870 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_5b0e9cfc-d618-4cbc-ab7c-f86d711e087f/mysql-bootstrap/0.log" Feb 02 22:50:50 crc kubenswrapper[4789]: I0202 22:50:50.794212 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_5b0e9cfc-d618-4cbc-ab7c-f86d711e087f/galera/0.log" Feb 02 22:50:50 crc kubenswrapper[4789]: I0202 22:50:50.805010 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_5b0e9cfc-d618-4cbc-ab7c-f86d711e087f/mysql-bootstrap/0.log" Feb 02 22:50:50 crc kubenswrapper[4789]: I0202 22:50:50.876466 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_a634b261-487f-4a31-ae53-10279607cb1e/openstackclient/0.log" Feb 02 22:50:50 crc kubenswrapper[4789]: I0202 22:50:50.975874 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-copy-data_c7c9b895-669c-426e-b9ad-7efd519878e4/adoption/0.log" Feb 02 22:50:51 crc kubenswrapper[4789]: I0202 22:50:51.047338 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_8581696e-1d96-46c1-8969-ab62ab7f296e/openstack-network-exporter/0.log" Feb 02 22:50:51 crc kubenswrapper[4789]: I0202 22:50:51.165465 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_8581696e-1d96-46c1-8969-ab62ab7f296e/ovn-northd/0.log" Feb 02 22:50:51 crc kubenswrapper[4789]: I0202 22:50:51.193095 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_990c1a60-2c79-469a-9c85-4850c0865450/openstack-network-exporter/0.log" Feb 02 22:50:51 crc kubenswrapper[4789]: I0202 22:50:51.306389 4789 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-0_990c1a60-2c79-469a-9c85-4850c0865450/ovsdbserver-nb/0.log" Feb 02 22:50:51 crc kubenswrapper[4789]: I0202 22:50:51.348108 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_3010e3cf-9c6a-4ace-aae1-e12d804aa800/openstack-network-exporter/0.log" Feb 02 22:50:51 crc kubenswrapper[4789]: I0202 22:50:51.537734 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_3010e3cf-9c6a-4ace-aae1-e12d804aa800/ovsdbserver-nb/0.log" Feb 02 22:50:51 crc kubenswrapper[4789]: I0202 22:50:51.628463 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_f5e652c9-762f-4067-9590-b345432d20d2/openstack-network-exporter/0.log" Feb 02 22:50:51 crc kubenswrapper[4789]: I0202 22:50:51.660465 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_f5e652c9-762f-4067-9590-b345432d20d2/ovsdbserver-nb/0.log" Feb 02 22:50:51 crc kubenswrapper[4789]: I0202 22:50:51.780165 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_232b785e-a4d7-4069-9575-e85094f4a10a/openstack-network-exporter/0.log" Feb 02 22:50:51 crc kubenswrapper[4789]: I0202 22:50:51.795608 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_232b785e-a4d7-4069-9575-e85094f4a10a/ovsdbserver-sb/0.log" Feb 02 22:50:51 crc kubenswrapper[4789]: I0202 22:50:51.926102 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_d0419330-97f4-4531-beab-99be8c5f9599/openstack-network-exporter/0.log" Feb 02 22:50:51 crc kubenswrapper[4789]: I0202 22:50:51.950278 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_d0419330-97f4-4531-beab-99be8c5f9599/ovsdbserver-sb/0.log" Feb 02 22:50:52 crc kubenswrapper[4789]: I0202 22:50:52.054237 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_8f393a68-69bb-4a09-ba42-4577f15b4ef4/openstack-network-exporter/0.log" Feb 02 22:50:52 crc kubenswrapper[4789]: I0202 22:50:52.114876 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_8f393a68-69bb-4a09-ba42-4577f15b4ef4/ovsdbserver-sb/0.log" Feb 02 22:50:52 crc kubenswrapper[4789]: I0202 22:50:52.191395 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f37562ff-5ea3-4230-9c68-09d330bb64c8/setup-container/0.log" Feb 02 22:50:52 crc kubenswrapper[4789]: I0202 22:50:52.355533 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f37562ff-5ea3-4230-9c68-09d330bb64c8/setup-container/0.log" Feb 02 22:50:52 crc kubenswrapper[4789]: I0202 22:50:52.379767 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6492b19f-f809-4667-b7d7-94ee3bfaa669/setup-container/0.log" Feb 02 22:50:52 crc kubenswrapper[4789]: I0202 22:50:52.399879 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f37562ff-5ea3-4230-9c68-09d330bb64c8/rabbitmq/0.log" Feb 02 22:50:52 crc kubenswrapper[4789]: I0202 22:50:52.541643 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6492b19f-f809-4667-b7d7-94ee3bfaa669/setup-container/0.log" Feb 02 22:50:52 crc kubenswrapper[4789]: I0202 22:50:52.579731 4789 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-0_6492b19f-f809-4667-b7d7-94ee3bfaa669/rabbitmq/0.log" Feb 02 22:50:58 crc kubenswrapper[4789]: I0202 22:50:58.419845 4789 scope.go:117] "RemoveContainer" containerID="16b35fede0307e2768edeae9bc17688f5a8ff6427f2ae9305826087a6effbd74" Feb 02 22:50:58 crc kubenswrapper[4789]: E0202 22:50:58.420495 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:51:09 crc kubenswrapper[4789]: I0202 22:51:09.014469 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6d4b642553982cbb8e7fd91a34c219bfd2554bbfec4ca764bf9a8af8f3phzkw_b517a7e4-68f4-4873-99aa-b62a18bc38b6/util/0.log" Feb 02 22:51:09 crc kubenswrapper[4789]: I0202 22:51:09.211834 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6d4b642553982cbb8e7fd91a34c219bfd2554bbfec4ca764bf9a8af8f3phzkw_b517a7e4-68f4-4873-99aa-b62a18bc38b6/pull/0.log" Feb 02 22:51:09 crc kubenswrapper[4789]: I0202 22:51:09.222685 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6d4b642553982cbb8e7fd91a34c219bfd2554bbfec4ca764bf9a8af8f3phzkw_b517a7e4-68f4-4873-99aa-b62a18bc38b6/util/0.log" Feb 02 22:51:09 crc kubenswrapper[4789]: I0202 22:51:09.283711 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6d4b642553982cbb8e7fd91a34c219bfd2554bbfec4ca764bf9a8af8f3phzkw_b517a7e4-68f4-4873-99aa-b62a18bc38b6/pull/0.log" Feb 02 22:51:09 crc kubenswrapper[4789]: I0202 22:51:09.412899 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6d4b642553982cbb8e7fd91a34c219bfd2554bbfec4ca764bf9a8af8f3phzkw_b517a7e4-68f4-4873-99aa-b62a18bc38b6/util/0.log" Feb 02 22:51:09 crc kubenswrapper[4789]: I0202 22:51:09.458779 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6d4b642553982cbb8e7fd91a34c219bfd2554bbfec4ca764bf9a8af8f3phzkw_b517a7e4-68f4-4873-99aa-b62a18bc38b6/extract/0.log" Feb 02 22:51:09 crc kubenswrapper[4789]: I0202 22:51:09.469242 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6d4b642553982cbb8e7fd91a34c219bfd2554bbfec4ca764bf9a8af8f3phzkw_b517a7e4-68f4-4873-99aa-b62a18bc38b6/pull/0.log" Feb 02 22:51:09 crc kubenswrapper[4789]: I0202 22:51:09.674293 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-fc589b45f-7l6fz_56cea7a2-1c74-45e8-ae6e-e5a30b71df3e/manager/0.log" Feb 02 22:51:09 crc kubenswrapper[4789]: I0202 22:51:09.884426 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-8f4c5cb64-dntn6_46f273e2-4f9b-4436-815b-72fcfd1f0b96/manager/0.log" Feb 02 22:51:10 crc kubenswrapper[4789]: I0202 22:51:10.043639 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5d77f4dbc9-v5pw2_92bfe813-4e92-44da-8e1a-092164813972/manager/0.log" Feb 02 22:51:10 crc kubenswrapper[4789]: I0202 22:51:10.091937 4789 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_heat-operator-controller-manager-65dc6c8d9c-rgs76_0bdf708d-44ed-44cf-af93-f8a2aec7e9ed/manager/0.log" Feb 02 22:51:10 crc kubenswrapper[4789]: I0202 22:51:10.271007 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-pg282_1ac077aa-b2d2-41b7-aa3d-061e3e7b41dc/manager/0.log" Feb 02 22:51:10 crc kubenswrapper[4789]: I0202 22:51:10.611765 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-87bd9d46f-b8vjj_27a24f44-b811-4f28-84b5-88504deaae1c/manager/0.log" Feb 02 22:51:10 crc kubenswrapper[4789]: I0202 22:51:10.876606 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-r68x4_c8853e21-c77c-4220-acb8-8e469cbca718/manager/0.log" Feb 02 22:51:10 crc kubenswrapper[4789]: I0202 22:51:10.923642 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-64469b487f-svkqs_bd53908a-497a-4e78-aa27-d608e94d1723/manager/0.log" Feb 02 22:51:11 crc kubenswrapper[4789]: I0202 22:51:11.075463 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7775d87d9d-hzqch_e7c2322e-e6a7-4745-ab83-4ba56575e037/manager/0.log" Feb 02 22:51:11 crc kubenswrapper[4789]: I0202 22:51:11.197897 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-5zgft_0b11c2a0-977e-46f8-bab7-5d812c8a35f9/manager/0.log" Feb 02 22:51:11 crc kubenswrapper[4789]: I0202 22:51:11.327882 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-576995988b-qcnc8_1fd01978-b3df-4a1c-a650-e6b182389a8d/manager/0.log" Feb 02 22:51:11 crc kubenswrapper[4789]: I0202 22:51:11.524538 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-866f9bb544-47rnc_0a1fe831-76ff-4718-9217-34c72110e718/manager/0.log" Feb 02 22:51:11 crc kubenswrapper[4789]: I0202 22:51:11.601848 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5644b66645-mpkrm_546b235a-9844-4eff-9e93-91fc4c8c1c0c/manager/0.log" Feb 02 22:51:11 crc kubenswrapper[4789]: I0202 22:51:11.790691 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dvqp4s_faa2ece3-95a2-43c2-935b-10cc966e7292/manager/0.log" Feb 02 22:51:12 crc kubenswrapper[4789]: I0202 22:51:12.158565 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-76c676d8b-bw7bj_fa6fc056-865a-4a4d-8fa6-f7615afd06b1/operator/0.log" Feb 02 22:51:12 crc kubenswrapper[4789]: I0202 22:51:12.359436 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-xzwx8_05148ecc-f381-45e6-af59-a732c6d6e856/registry-server/0.log" Feb 02 22:51:12 crc kubenswrapper[4789]: I0202 22:51:12.683896 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-pqj67_ad6fe491-a355-480a-8c88-ec835b469c44/manager/0.log" Feb 02 22:51:12 crc kubenswrapper[4789]: I0202 22:51:12.770068 4789 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-xc8xn_c43d0dc6-7406-4225-93b3-6779c68940f8/manager/0.log" Feb 02 22:51:12 crc kubenswrapper[4789]: I0202 22:51:12.932098 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-pn28p_c82b99fc-84c7-4ff7-9662-c7cbae1d9ae5/operator/0.log" Feb 02 22:51:13 crc kubenswrapper[4789]: I0202 22:51:13.154083 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-7b89fdf75b-xv5k7_88179082-d6af-4a6c-a159-262a4928c4c3/manager/0.log" Feb 02 22:51:13 crc kubenswrapper[4789]: I0202 22:51:13.240503 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-565849b54-th2wj_4ad0215b-c9cf-46bd-aee5-a3099f5fb8e7/manager/0.log" Feb 02 22:51:13 crc kubenswrapper[4789]: I0202 22:51:13.300304 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-79966df5f8-95whl_7709193e-e11f-49dc-9ffc-be57f3d0b898/manager/0.log" Feb 02 22:51:13 crc kubenswrapper[4789]: I0202 22:51:13.363568 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-g6hcr_19963467-1169-4cc6-99f8-efadadfcba2e/manager/0.log" Feb 02 22:51:13 crc kubenswrapper[4789]: I0202 22:51:13.419693 4789 scope.go:117] "RemoveContainer" containerID="16b35fede0307e2768edeae9bc17688f5a8ff6427f2ae9305826087a6effbd74" Feb 02 22:51:13 crc kubenswrapper[4789]: E0202 22:51:13.420109 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:51:13 crc kubenswrapper[4789]: I0202 22:51:13.535497 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-586b95b788-jl54m_3b005885-1fa6-4f6b-b928-b0da5cd41798/manager/0.log" Feb 02 22:51:13 crc kubenswrapper[4789]: I0202 22:51:13.825322 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7b89ddb58-gxpg7_31505f8d-d5db-47b0-a3c7-38b45e6a6997/manager/0.log" Feb 02 22:51:25 crc kubenswrapper[4789]: I0202 22:51:25.419788 4789 scope.go:117] "RemoveContainer" containerID="16b35fede0307e2768edeae9bc17688f5a8ff6427f2ae9305826087a6effbd74" Feb 02 22:51:25 crc kubenswrapper[4789]: E0202 22:51:25.420755 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:51:32 crc kubenswrapper[4789]: I0202 22:51:32.879072 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-zn96z_62652ba8-968d-4e22-8e4a-00497c30cacc/control-plane-machine-set-operator/0.log" Feb 02 22:51:33 crc 
kubenswrapper[4789]: I0202 22:51:33.005944 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hqmrz_82a78e00-0795-4de5-8062-f92878ea6c72/kube-rbac-proxy/0.log" Feb 02 22:51:33 crc kubenswrapper[4789]: I0202 22:51:33.049753 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hqmrz_82a78e00-0795-4de5-8062-f92878ea6c72/machine-api-operator/0.log" Feb 02 22:51:36 crc kubenswrapper[4789]: I0202 22:51:36.420316 4789 scope.go:117] "RemoveContainer" containerID="16b35fede0307e2768edeae9bc17688f5a8ff6427f2ae9305826087a6effbd74" Feb 02 22:51:36 crc kubenswrapper[4789]: E0202 22:51:36.421284 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:51:45 crc kubenswrapper[4789]: I0202 22:51:45.269046 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9jq46"] Feb 02 22:51:45 crc kubenswrapper[4789]: E0202 22:51:45.270311 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb1fd34-bc4b-4e66-a0f8-b2cbe8d174d5" containerName="container-00" Feb 02 22:51:45 crc kubenswrapper[4789]: I0202 22:51:45.270335 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb1fd34-bc4b-4e66-a0f8-b2cbe8d174d5" containerName="container-00" Feb 02 22:51:45 crc kubenswrapper[4789]: I0202 22:51:45.270516 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cb1fd34-bc4b-4e66-a0f8-b2cbe8d174d5" containerName="container-00" Feb 02 22:51:45 crc kubenswrapper[4789]: I0202 22:51:45.271923 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9jq46" Feb 02 22:51:45 crc kubenswrapper[4789]: I0202 22:51:45.289898 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9jq46"] Feb 02 22:51:45 crc kubenswrapper[4789]: I0202 22:51:45.318084 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/549c829f-4be0-412c-b018-deab3d4cf2dd-catalog-content\") pod \"certified-operators-9jq46\" (UID: \"549c829f-4be0-412c-b018-deab3d4cf2dd\") " pod="openshift-marketplace/certified-operators-9jq46" Feb 02 22:51:45 crc kubenswrapper[4789]: I0202 22:51:45.318163 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/549c829f-4be0-412c-b018-deab3d4cf2dd-utilities\") pod \"certified-operators-9jq46\" (UID: \"549c829f-4be0-412c-b018-deab3d4cf2dd\") " pod="openshift-marketplace/certified-operators-9jq46" Feb 02 22:51:45 crc kubenswrapper[4789]: I0202 22:51:45.318288 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfk2n\" (UniqueName: \"kubernetes.io/projected/549c829f-4be0-412c-b018-deab3d4cf2dd-kube-api-access-mfk2n\") pod \"certified-operators-9jq46\" (UID: \"549c829f-4be0-412c-b018-deab3d4cf2dd\") " pod="openshift-marketplace/certified-operators-9jq46" Feb 02 22:51:45 crc kubenswrapper[4789]: I0202 22:51:45.419224 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfk2n\" (UniqueName: \"kubernetes.io/projected/549c829f-4be0-412c-b018-deab3d4cf2dd-kube-api-access-mfk2n\") pod \"certified-operators-9jq46\" (UID: \"549c829f-4be0-412c-b018-deab3d4cf2dd\") " pod="openshift-marketplace/certified-operators-9jq46" Feb 02 22:51:45 crc kubenswrapper[4789]: I0202 22:51:45.419596 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/549c829f-4be0-412c-b018-deab3d4cf2dd-catalog-content\") pod \"certified-operators-9jq46\" (UID: \"549c829f-4be0-412c-b018-deab3d4cf2dd\") " pod="openshift-marketplace/certified-operators-9jq46" Feb 02 22:51:45 crc kubenswrapper[4789]: I0202 22:51:45.419639 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/549c829f-4be0-412c-b018-deab3d4cf2dd-utilities\") pod \"certified-operators-9jq46\" (UID: \"549c829f-4be0-412c-b018-deab3d4cf2dd\") " pod="openshift-marketplace/certified-operators-9jq46" Feb 02 22:51:45 crc kubenswrapper[4789]: I0202 22:51:45.420204 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/549c829f-4be0-412c-b018-deab3d4cf2dd-utilities\") pod \"certified-operators-9jq46\" (UID: \"549c829f-4be0-412c-b018-deab3d4cf2dd\") " pod="openshift-marketplace/certified-operators-9jq46" Feb 02 22:51:45 crc kubenswrapper[4789]: I0202 22:51:45.421024 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/549c829f-4be0-412c-b018-deab3d4cf2dd-catalog-content\") pod \"certified-operators-9jq46\" (UID: \"549c829f-4be0-412c-b018-deab3d4cf2dd\") " pod="openshift-marketplace/certified-operators-9jq46" Feb 02 22:51:45 crc kubenswrapper[4789]: I0202 22:51:45.443220 4789 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mfk2n\" (UniqueName: \"kubernetes.io/projected/549c829f-4be0-412c-b018-deab3d4cf2dd-kube-api-access-mfk2n\") pod \"certified-operators-9jq46\" (UID: \"549c829f-4be0-412c-b018-deab3d4cf2dd\") " pod="openshift-marketplace/certified-operators-9jq46" Feb 02 22:51:45 crc kubenswrapper[4789]: I0202 22:51:45.592472 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9jq46" Feb 02 22:51:46 crc kubenswrapper[4789]: I0202 22:51:46.111839 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9jq46"] Feb 02 22:51:46 crc kubenswrapper[4789]: W0202 22:51:46.121953 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod549c829f_4be0_412c_b018_deab3d4cf2dd.slice/crio-eb2d4deaebcc76121f9c35d3efae040ac6bead49e50879b34b2274779feb5943 WatchSource:0}: Error finding container eb2d4deaebcc76121f9c35d3efae040ac6bead49e50879b34b2274779feb5943: Status 404 returned error can't find the container with id eb2d4deaebcc76121f9c35d3efae040ac6bead49e50879b34b2274779feb5943 Feb 02 22:51:46 crc kubenswrapper[4789]: I0202 22:51:46.206211 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jq46" event={"ID":"549c829f-4be0-412c-b018-deab3d4cf2dd","Type":"ContainerStarted","Data":"eb2d4deaebcc76121f9c35d3efae040ac6bead49e50879b34b2274779feb5943"} Feb 02 22:51:47 crc kubenswrapper[4789]: I0202 22:51:47.219925 4789 generic.go:334] "Generic (PLEG): container finished" podID="549c829f-4be0-412c-b018-deab3d4cf2dd" containerID="81fb4a0eb84d535325a1e53da645cc61979727488b3136e5e307852e5bd568f9" exitCode=0 Feb 02 22:51:47 crc kubenswrapper[4789]: I0202 22:51:47.219989 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jq46" event={"ID":"549c829f-4be0-412c-b018-deab3d4cf2dd","Type":"ContainerDied","Data":"81fb4a0eb84d535325a1e53da645cc61979727488b3136e5e307852e5bd568f9"} Feb 02 22:51:47 crc kubenswrapper[4789]: I0202 22:51:47.663939 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-p5b9q_25fd2392-6f0c-47da-bf7d-cec1cc21b429/cert-manager-controller/0.log" Feb 02 22:51:47 crc kubenswrapper[4789]: I0202 22:51:47.801287 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-4nrdt_f2f3a656-c8a4-41a3-8791-c021db980c6d/cert-manager-cainjector/0.log" Feb 02 22:51:47 crc kubenswrapper[4789]: I0202 22:51:47.888562 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-t4fd8_f3db5270-fd96-47f4-bce0-94ac69a0e9f4/cert-manager-webhook/0.log" Feb 02 22:51:48 crc kubenswrapper[4789]: I0202 22:51:48.230050 4789 generic.go:334] "Generic (PLEG): container finished" podID="549c829f-4be0-412c-b018-deab3d4cf2dd" containerID="76ee875a66cd28ac9f27f4e8cbbd0394e643af7fc796a98f31c38c73795b928e" exitCode=0 Feb 02 22:51:48 crc kubenswrapper[4789]: I0202 22:51:48.230110 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jq46" event={"ID":"549c829f-4be0-412c-b018-deab3d4cf2dd","Type":"ContainerDied","Data":"76ee875a66cd28ac9f27f4e8cbbd0394e643af7fc796a98f31c38c73795b928e"} Feb 02 22:51:48 crc kubenswrapper[4789]: I0202 22:51:48.535917 4789 scope.go:117] "RemoveContainer" 
containerID="01a5109c2df719e318faf34cbe82f07df79cbc4a3416be11d623cd60a502f6e2" Feb 02 22:51:48 crc kubenswrapper[4789]: I0202 22:51:48.586603 4789 scope.go:117] "RemoveContainer" containerID="e9f746d134672815c4df79d3dd0d65de5f260d45c52a21592a75a61e46d029c2" Feb 02 22:51:49 crc kubenswrapper[4789]: I0202 22:51:49.242692 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jq46" event={"ID":"549c829f-4be0-412c-b018-deab3d4cf2dd","Type":"ContainerStarted","Data":"27ab76f168297fc0d46beef48077323dc0a51f7283cf84e835f32c70343c0353"} Feb 02 22:51:49 crc kubenswrapper[4789]: I0202 22:51:49.276514 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9jq46" podStartSLOduration=2.856830204 podStartE2EDuration="4.276493778s" podCreationTimestamp="2026-02-02 22:51:45 +0000 UTC" firstStartedPulling="2026-02-02 22:51:47.221952835 +0000 UTC m=+5527.516977864" lastFinishedPulling="2026-02-02 22:51:48.641616419 +0000 UTC m=+5528.936641438" observedRunningTime="2026-02-02 22:51:49.272442274 +0000 UTC m=+5529.567467283" watchObservedRunningTime="2026-02-02 22:51:49.276493778 +0000 UTC m=+5529.571518807" Feb 02 22:51:50 crc kubenswrapper[4789]: I0202 22:51:50.423320 4789 scope.go:117] "RemoveContainer" containerID="16b35fede0307e2768edeae9bc17688f5a8ff6427f2ae9305826087a6effbd74" Feb 02 22:51:50 crc kubenswrapper[4789]: E0202 22:51:50.423572 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:51:55 crc kubenswrapper[4789]: I0202 22:51:55.592704 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9jq46" Feb 02 22:51:55 crc kubenswrapper[4789]: I0202 22:51:55.593341 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9jq46" Feb 02 22:51:55 crc kubenswrapper[4789]: I0202 22:51:55.654049 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9jq46" Feb 02 22:51:56 crc kubenswrapper[4789]: I0202 22:51:56.365576 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9jq46" Feb 02 22:51:56 crc kubenswrapper[4789]: I0202 22:51:56.437022 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9jq46"] Feb 02 22:51:58 crc kubenswrapper[4789]: I0202 22:51:58.316017 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9jq46" podUID="549c829f-4be0-412c-b018-deab3d4cf2dd" containerName="registry-server" containerID="cri-o://27ab76f168297fc0d46beef48077323dc0a51f7283cf84e835f32c70343c0353" gracePeriod=2 Feb 02 22:51:58 crc kubenswrapper[4789]: I0202 22:51:58.864287 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9jq46" Feb 02 22:51:58 crc kubenswrapper[4789]: I0202 22:51:58.973914 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/549c829f-4be0-412c-b018-deab3d4cf2dd-utilities\") pod \"549c829f-4be0-412c-b018-deab3d4cf2dd\" (UID: \"549c829f-4be0-412c-b018-deab3d4cf2dd\") " Feb 02 22:51:58 crc kubenswrapper[4789]: I0202 22:51:58.974033 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/549c829f-4be0-412c-b018-deab3d4cf2dd-catalog-content\") pod \"549c829f-4be0-412c-b018-deab3d4cf2dd\" (UID: \"549c829f-4be0-412c-b018-deab3d4cf2dd\") " Feb 02 22:51:58 crc kubenswrapper[4789]: I0202 22:51:58.974053 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfk2n\" (UniqueName: \"kubernetes.io/projected/549c829f-4be0-412c-b018-deab3d4cf2dd-kube-api-access-mfk2n\") pod \"549c829f-4be0-412c-b018-deab3d4cf2dd\" (UID: \"549c829f-4be0-412c-b018-deab3d4cf2dd\") " Feb 02 22:51:58 crc kubenswrapper[4789]: I0202 22:51:58.974741 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/549c829f-4be0-412c-b018-deab3d4cf2dd-utilities" (OuterVolumeSpecName: "utilities") pod "549c829f-4be0-412c-b018-deab3d4cf2dd" (UID: "549c829f-4be0-412c-b018-deab3d4cf2dd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 22:51:58 crc kubenswrapper[4789]: I0202 22:51:58.985110 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/549c829f-4be0-412c-b018-deab3d4cf2dd-kube-api-access-mfk2n" (OuterVolumeSpecName: "kube-api-access-mfk2n") pod "549c829f-4be0-412c-b018-deab3d4cf2dd" (UID: "549c829f-4be0-412c-b018-deab3d4cf2dd"). InnerVolumeSpecName "kube-api-access-mfk2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 22:51:59 crc kubenswrapper[4789]: I0202 22:51:59.030228 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/549c829f-4be0-412c-b018-deab3d4cf2dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "549c829f-4be0-412c-b018-deab3d4cf2dd" (UID: "549c829f-4be0-412c-b018-deab3d4cf2dd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 22:51:59 crc kubenswrapper[4789]: I0202 22:51:59.075332 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/549c829f-4be0-412c-b018-deab3d4cf2dd-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 22:51:59 crc kubenswrapper[4789]: I0202 22:51:59.075369 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/549c829f-4be0-412c-b018-deab3d4cf2dd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 22:51:59 crc kubenswrapper[4789]: I0202 22:51:59.075382 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfk2n\" (UniqueName: \"kubernetes.io/projected/549c829f-4be0-412c-b018-deab3d4cf2dd-kube-api-access-mfk2n\") on node \"crc\" DevicePath \"\"" Feb 02 22:51:59 crc kubenswrapper[4789]: I0202 22:51:59.327953 4789 generic.go:334] "Generic (PLEG): container finished" podID="549c829f-4be0-412c-b018-deab3d4cf2dd" containerID="27ab76f168297fc0d46beef48077323dc0a51f7283cf84e835f32c70343c0353" exitCode=0 Feb 02 22:51:59 crc kubenswrapper[4789]: I0202 22:51:59.328012 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jq46" event={"ID":"549c829f-4be0-412c-b018-deab3d4cf2dd","Type":"ContainerDied","Data":"27ab76f168297fc0d46beef48077323dc0a51f7283cf84e835f32c70343c0353"} Feb 02 22:51:59 crc kubenswrapper[4789]: I0202 22:51:59.328638 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jq46" event={"ID":"549c829f-4be0-412c-b018-deab3d4cf2dd","Type":"ContainerDied","Data":"eb2d4deaebcc76121f9c35d3efae040ac6bead49e50879b34b2274779feb5943"} Feb 02 22:51:59 crc kubenswrapper[4789]: I0202 22:51:59.328028 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9jq46" Feb 02 22:51:59 crc kubenswrapper[4789]: I0202 22:51:59.328686 4789 scope.go:117] "RemoveContainer" containerID="27ab76f168297fc0d46beef48077323dc0a51f7283cf84e835f32c70343c0353" Feb 02 22:51:59 crc kubenswrapper[4789]: I0202 22:51:59.366901 4789 scope.go:117] "RemoveContainer" containerID="76ee875a66cd28ac9f27f4e8cbbd0394e643af7fc796a98f31c38c73795b928e" Feb 02 22:51:59 crc kubenswrapper[4789]: I0202 22:51:59.377699 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9jq46"] Feb 02 22:51:59 crc kubenswrapper[4789]: I0202 22:51:59.386377 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9jq46"] Feb 02 22:51:59 crc kubenswrapper[4789]: I0202 22:51:59.395004 4789 scope.go:117] "RemoveContainer" containerID="81fb4a0eb84d535325a1e53da645cc61979727488b3136e5e307852e5bd568f9" Feb 02 22:51:59 crc kubenswrapper[4789]: I0202 22:51:59.426197 4789 scope.go:117] "RemoveContainer" containerID="27ab76f168297fc0d46beef48077323dc0a51f7283cf84e835f32c70343c0353" Feb 02 22:51:59 crc kubenswrapper[4789]: E0202 22:51:59.426684 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27ab76f168297fc0d46beef48077323dc0a51f7283cf84e835f32c70343c0353\": container with ID starting with 27ab76f168297fc0d46beef48077323dc0a51f7283cf84e835f32c70343c0353 not found: ID does not exist" containerID="27ab76f168297fc0d46beef48077323dc0a51f7283cf84e835f32c70343c0353" Feb 02 22:51:59 crc kubenswrapper[4789]: I0202 22:51:59.426734 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27ab76f168297fc0d46beef48077323dc0a51f7283cf84e835f32c70343c0353"} err="failed to get container status \"27ab76f168297fc0d46beef48077323dc0a51f7283cf84e835f32c70343c0353\": rpc error: code = NotFound desc = could not find container \"27ab76f168297fc0d46beef48077323dc0a51f7283cf84e835f32c70343c0353\": container with ID starting with 27ab76f168297fc0d46beef48077323dc0a51f7283cf84e835f32c70343c0353 not found: ID does not exist" Feb 02 22:51:59 crc kubenswrapper[4789]: I0202 22:51:59.426768 4789 scope.go:117] "RemoveContainer" containerID="76ee875a66cd28ac9f27f4e8cbbd0394e643af7fc796a98f31c38c73795b928e" Feb 02 22:51:59 crc kubenswrapper[4789]: E0202 22:51:59.427145 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76ee875a66cd28ac9f27f4e8cbbd0394e643af7fc796a98f31c38c73795b928e\": container with ID starting with 76ee875a66cd28ac9f27f4e8cbbd0394e643af7fc796a98f31c38c73795b928e not found: ID does not exist" containerID="76ee875a66cd28ac9f27f4e8cbbd0394e643af7fc796a98f31c38c73795b928e" Feb 02 22:51:59 crc kubenswrapper[4789]: I0202 22:51:59.427183 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76ee875a66cd28ac9f27f4e8cbbd0394e643af7fc796a98f31c38c73795b928e"} err="failed to get container status \"76ee875a66cd28ac9f27f4e8cbbd0394e643af7fc796a98f31c38c73795b928e\": rpc error: code = NotFound desc = could not find container \"76ee875a66cd28ac9f27f4e8cbbd0394e643af7fc796a98f31c38c73795b928e\": container with ID starting with 76ee875a66cd28ac9f27f4e8cbbd0394e643af7fc796a98f31c38c73795b928e not found: ID does not exist" Feb 02 22:51:59 crc kubenswrapper[4789]: I0202 22:51:59.427212 4789 scope.go:117] "RemoveContainer" 
containerID="81fb4a0eb84d535325a1e53da645cc61979727488b3136e5e307852e5bd568f9" Feb 02 22:51:59 crc kubenswrapper[4789]: E0202 22:51:59.427464 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81fb4a0eb84d535325a1e53da645cc61979727488b3136e5e307852e5bd568f9\": container with ID starting with 81fb4a0eb84d535325a1e53da645cc61979727488b3136e5e307852e5bd568f9 not found: ID does not exist" containerID="81fb4a0eb84d535325a1e53da645cc61979727488b3136e5e307852e5bd568f9" Feb 02 22:51:59 crc kubenswrapper[4789]: I0202 22:51:59.427505 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81fb4a0eb84d535325a1e53da645cc61979727488b3136e5e307852e5bd568f9"} err="failed to get container status \"81fb4a0eb84d535325a1e53da645cc61979727488b3136e5e307852e5bd568f9\": rpc error: code = NotFound desc = could not find container \"81fb4a0eb84d535325a1e53da645cc61979727488b3136e5e307852e5bd568f9\": container with ID starting with 81fb4a0eb84d535325a1e53da645cc61979727488b3136e5e307852e5bd568f9 not found: ID does not exist" Feb 02 22:52:00 crc kubenswrapper[4789]: I0202 22:52:00.430049 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="549c829f-4be0-412c-b018-deab3d4cf2dd" path="/var/lib/kubelet/pods/549c829f-4be0-412c-b018-deab3d4cf2dd/volumes" Feb 02 22:52:01 crc kubenswrapper[4789]: I0202 22:52:01.419603 4789 scope.go:117] "RemoveContainer" containerID="16b35fede0307e2768edeae9bc17688f5a8ff6427f2ae9305826087a6effbd74" Feb 02 22:52:01 crc kubenswrapper[4789]: E0202 22:52:01.419810 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:52:01 crc kubenswrapper[4789]: I0202 22:52:01.657308 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-tgbb7_5bd8e822-58d8-41a4-85fa-229fe3662cf7/nmstate-console-plugin/0.log" Feb 02 22:52:01 crc kubenswrapper[4789]: I0202 22:52:01.868266 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-56sxm_8ecd0e71-adc8-4435-98cb-58de4b376820/nmstate-handler/0.log" Feb 02 22:52:01 crc kubenswrapper[4789]: I0202 22:52:01.918920 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-mwlcp_31964bc6-453e-4fc8-a06c-cfa7336e0b0f/kube-rbac-proxy/0.log" Feb 02 22:52:01 crc kubenswrapper[4789]: I0202 22:52:01.978752 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-mwlcp_31964bc6-453e-4fc8-a06c-cfa7336e0b0f/nmstate-metrics/0.log" Feb 02 22:52:02 crc kubenswrapper[4789]: I0202 22:52:02.231523 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-l5mpq_a45d19af-9616-4f0b-99d2-9c250bb43694/nmstate-operator/0.log" Feb 02 22:52:02 crc kubenswrapper[4789]: I0202 22:52:02.279382 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-4t8ms_238b79fd-f8fd-4667-b350-6369490157c5/nmstate-webhook/0.log" Feb 02 22:52:15 crc kubenswrapper[4789]: I0202 22:52:15.419975 
4789 scope.go:117] "RemoveContainer" containerID="16b35fede0307e2768edeae9bc17688f5a8ff6427f2ae9305826087a6effbd74" Feb 02 22:52:15 crc kubenswrapper[4789]: E0202 22:52:15.421023 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:52:30 crc kubenswrapper[4789]: I0202 22:52:30.429984 4789 scope.go:117] "RemoveContainer" containerID="16b35fede0307e2768edeae9bc17688f5a8ff6427f2ae9305826087a6effbd74" Feb 02 22:52:30 crc kubenswrapper[4789]: E0202 22:52:30.430990 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:52:33 crc kubenswrapper[4789]: I0202 22:52:33.148363 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-hnv82_6a3c98cd-b4e3-4f60-a4f8-5068fc45c634/kube-rbac-proxy/0.log" Feb 02 22:52:33 crc kubenswrapper[4789]: I0202 22:52:33.395713 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4nlg4_93640dbd-7a68-4fe1-8feb-8fa519bfc5d0/cp-frr-files/0.log" Feb 02 22:52:33 crc kubenswrapper[4789]: I0202 22:52:33.483245 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4nlg4_93640dbd-7a68-4fe1-8feb-8fa519bfc5d0/cp-frr-files/0.log" Feb 02 22:52:33 crc kubenswrapper[4789]: I0202 22:52:33.499792 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-hnv82_6a3c98cd-b4e3-4f60-a4f8-5068fc45c634/controller/0.log" Feb 02 22:52:33 crc kubenswrapper[4789]: I0202 22:52:33.542523 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4nlg4_93640dbd-7a68-4fe1-8feb-8fa519bfc5d0/cp-reloader/0.log" Feb 02 22:52:33 crc kubenswrapper[4789]: I0202 22:52:33.572526 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4nlg4_93640dbd-7a68-4fe1-8feb-8fa519bfc5d0/cp-metrics/0.log" Feb 02 22:52:33 crc kubenswrapper[4789]: I0202 22:52:33.688901 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4nlg4_93640dbd-7a68-4fe1-8feb-8fa519bfc5d0/cp-reloader/0.log" Feb 02 22:52:33 crc kubenswrapper[4789]: I0202 22:52:33.854927 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4nlg4_93640dbd-7a68-4fe1-8feb-8fa519bfc5d0/cp-frr-files/0.log" Feb 02 22:52:33 crc kubenswrapper[4789]: I0202 22:52:33.865127 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4nlg4_93640dbd-7a68-4fe1-8feb-8fa519bfc5d0/cp-reloader/0.log" Feb 02 22:52:33 crc kubenswrapper[4789]: I0202 22:52:33.899754 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4nlg4_93640dbd-7a68-4fe1-8feb-8fa519bfc5d0/cp-metrics/0.log" Feb 02 22:52:33 crc kubenswrapper[4789]: I0202 22:52:33.908164 4789 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-4nlg4_93640dbd-7a68-4fe1-8feb-8fa519bfc5d0/cp-metrics/0.log" Feb 02 22:52:34 crc kubenswrapper[4789]: I0202 22:52:34.111102 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4nlg4_93640dbd-7a68-4fe1-8feb-8fa519bfc5d0/cp-frr-files/0.log" Feb 02 22:52:34 crc kubenswrapper[4789]: I0202 22:52:34.118832 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4nlg4_93640dbd-7a68-4fe1-8feb-8fa519bfc5d0/cp-metrics/0.log" Feb 02 22:52:34 crc kubenswrapper[4789]: I0202 22:52:34.122988 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4nlg4_93640dbd-7a68-4fe1-8feb-8fa519bfc5d0/cp-reloader/0.log" Feb 02 22:52:34 crc kubenswrapper[4789]: I0202 22:52:34.181442 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4nlg4_93640dbd-7a68-4fe1-8feb-8fa519bfc5d0/controller/0.log" Feb 02 22:52:34 crc kubenswrapper[4789]: I0202 22:52:34.285840 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4nlg4_93640dbd-7a68-4fe1-8feb-8fa519bfc5d0/kube-rbac-proxy/0.log" Feb 02 22:52:34 crc kubenswrapper[4789]: I0202 22:52:34.350692 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4nlg4_93640dbd-7a68-4fe1-8feb-8fa519bfc5d0/frr-metrics/0.log" Feb 02 22:52:34 crc kubenswrapper[4789]: I0202 22:52:34.378892 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4nlg4_93640dbd-7a68-4fe1-8feb-8fa519bfc5d0/kube-rbac-proxy-frr/0.log" Feb 02 22:52:34 crc kubenswrapper[4789]: I0202 22:52:34.497285 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4nlg4_93640dbd-7a68-4fe1-8feb-8fa519bfc5d0/reloader/0.log" Feb 02 22:52:34 crc kubenswrapper[4789]: I0202 22:52:34.648153 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-kffd2_f8e3f662-8c3b-4324-af03-4e6135a4bbb3/frr-k8s-webhook-server/0.log" Feb 02 22:52:34 crc kubenswrapper[4789]: I0202 22:52:34.894406 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-65474b67b5-ccmdm_a6474578-8598-4e67-b846-2a7bd085dd88/manager/0.log" Feb 02 22:52:35 crc kubenswrapper[4789]: I0202 22:52:35.072163 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5888cd945b-dznx4_ffea3a90-35b0-4962-b652-8fafa44aa5a9/webhook-server/0.log" Feb 02 22:52:35 crc kubenswrapper[4789]: I0202 22:52:35.150070 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-6fxbj_ae39df38-39b3-4d32-a1d7-d521b31b2840/kube-rbac-proxy/0.log" Feb 02 22:52:35 crc kubenswrapper[4789]: I0202 22:52:35.736913 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-6fxbj_ae39df38-39b3-4d32-a1d7-d521b31b2840/speaker/0.log" Feb 02 22:52:35 crc kubenswrapper[4789]: I0202 22:52:35.814072 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4nlg4_93640dbd-7a68-4fe1-8feb-8fa519bfc5d0/frr/0.log" Feb 02 22:52:43 crc kubenswrapper[4789]: I0202 22:52:43.419488 4789 scope.go:117] "RemoveContainer" containerID="16b35fede0307e2768edeae9bc17688f5a8ff6427f2ae9305826087a6effbd74" Feb 02 22:52:43 crc kubenswrapper[4789]: E0202 22:52:43.420471 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c8vcn_openshift-machine-config-operator(bdf018b4-1451-4d37-be6e-05802b67c73e)\"" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" Feb 02 22:52:48 crc kubenswrapper[4789]: I0202 22:52:48.669121 4789 scope.go:117] "RemoveContainer" containerID="d48fd47acca6a4fa1436167c3080fc2a2d77dc1ee242ae48714ae2b4cc0e04d0" Feb 02 22:52:48 crc kubenswrapper[4789]: I0202 22:52:48.717993 4789 scope.go:117] "RemoveContainer" containerID="9c5e1970b7c142ebcc7f4e9e9dde7ae7fc93e4afcce825534defc74fc73b0361" Feb 02 22:52:48 crc kubenswrapper[4789]: I0202 22:52:48.742964 4789 scope.go:117] "RemoveContainer" containerID="b10f7a8e05776b8c5a044e0120a0ee30102dd615192c882814be8ec81d3d49e1" Feb 02 22:52:50 crc kubenswrapper[4789]: I0202 22:52:50.398200 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdhf2_925ff462-a22d-4c9d-8ed4-70c08866fe64/util/0.log" Feb 02 22:52:50 crc kubenswrapper[4789]: I0202 22:52:50.561631 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdhf2_925ff462-a22d-4c9d-8ed4-70c08866fe64/util/0.log" Feb 02 22:52:50 crc kubenswrapper[4789]: I0202 22:52:50.574030 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdhf2_925ff462-a22d-4c9d-8ed4-70c08866fe64/pull/0.log" Feb 02 22:52:50 crc kubenswrapper[4789]: I0202 22:52:50.632503 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdhf2_925ff462-a22d-4c9d-8ed4-70c08866fe64/pull/0.log" Feb 02 22:52:50 crc kubenswrapper[4789]: I0202 22:52:50.754089 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdhf2_925ff462-a22d-4c9d-8ed4-70c08866fe64/util/0.log" Feb 02 22:52:50 crc kubenswrapper[4789]: I0202 22:52:50.786275 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdhf2_925ff462-a22d-4c9d-8ed4-70c08866fe64/extract/0.log" Feb 02 22:52:50 crc kubenswrapper[4789]: I0202 22:52:50.793126 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdhf2_925ff462-a22d-4c9d-8ed4-70c08866fe64/pull/0.log" Feb 02 22:52:50 crc kubenswrapper[4789]: I0202 22:52:50.984165 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138c54z_645a95dc-67cd-4eb7-9273-70ff5fea3a01/util/0.log" Feb 02 22:52:51 crc kubenswrapper[4789]: I0202 22:52:51.143166 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138c54z_645a95dc-67cd-4eb7-9273-70ff5fea3a01/pull/0.log" Feb 02 22:52:51 crc kubenswrapper[4789]: I0202 22:52:51.148418 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138c54z_645a95dc-67cd-4eb7-9273-70ff5fea3a01/util/0.log" Feb 02 22:52:51 crc kubenswrapper[4789]: I0202 22:52:51.151364 4789 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138c54z_645a95dc-67cd-4eb7-9273-70ff5fea3a01/pull/0.log" Feb 02 22:52:51 crc kubenswrapper[4789]: I0202 22:52:51.342949 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138c54z_645a95dc-67cd-4eb7-9273-70ff5fea3a01/pull/0.log" Feb 02 22:52:51 crc kubenswrapper[4789]: I0202 22:52:51.351390 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138c54z_645a95dc-67cd-4eb7-9273-70ff5fea3a01/extract/0.log" Feb 02 22:52:51 crc kubenswrapper[4789]: I0202 22:52:51.363120 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138c54z_645a95dc-67cd-4eb7-9273-70ff5fea3a01/util/0.log" Feb 02 22:52:51 crc kubenswrapper[4789]: I0202 22:52:51.523266 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zfwg5_82257589-0f42-4d43-8843-41285225ccf0/util/0.log" Feb 02 22:52:51 crc kubenswrapper[4789]: I0202 22:52:51.696154 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zfwg5_82257589-0f42-4d43-8843-41285225ccf0/util/0.log" Feb 02 22:52:51 crc kubenswrapper[4789]: I0202 22:52:51.733678 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zfwg5_82257589-0f42-4d43-8843-41285225ccf0/pull/0.log" Feb 02 22:52:51 crc kubenswrapper[4789]: I0202 22:52:51.747316 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zfwg5_82257589-0f42-4d43-8843-41285225ccf0/pull/0.log" Feb 02 22:52:51 crc kubenswrapper[4789]: I0202 22:52:51.866523 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zfwg5_82257589-0f42-4d43-8843-41285225ccf0/util/0.log" Feb 02 22:52:51 crc kubenswrapper[4789]: I0202 22:52:51.897129 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zfwg5_82257589-0f42-4d43-8843-41285225ccf0/pull/0.log" Feb 02 22:52:51 crc kubenswrapper[4789]: I0202 22:52:51.906719 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zfwg5_82257589-0f42-4d43-8843-41285225ccf0/extract/0.log" Feb 02 22:52:52 crc kubenswrapper[4789]: I0202 22:52:52.065239 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tq6p5_8164a9db-3349-43ca-9927-b326f01ab26d/extract-utilities/0.log" Feb 02 22:52:52 crc kubenswrapper[4789]: I0202 22:52:52.179822 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tq6p5_8164a9db-3349-43ca-9927-b326f01ab26d/extract-utilities/0.log" Feb 02 22:52:52 crc kubenswrapper[4789]: I0202 22:52:52.192885 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tq6p5_8164a9db-3349-43ca-9927-b326f01ab26d/extract-content/0.log" Feb 02 22:52:52 crc kubenswrapper[4789]: I0202 
22:52:52.210659 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tq6p5_8164a9db-3349-43ca-9927-b326f01ab26d/extract-content/0.log" Feb 02 22:52:52 crc kubenswrapper[4789]: I0202 22:52:52.377803 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tq6p5_8164a9db-3349-43ca-9927-b326f01ab26d/extract-utilities/0.log" Feb 02 22:52:52 crc kubenswrapper[4789]: I0202 22:52:52.398839 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tq6p5_8164a9db-3349-43ca-9927-b326f01ab26d/extract-content/0.log" Feb 02 22:52:52 crc kubenswrapper[4789]: I0202 22:52:52.605188 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8x64g_2e759e28-291e-43b4-b856-8c89fd7af5ae/extract-utilities/0.log" Feb 02 22:52:52 crc kubenswrapper[4789]: I0202 22:52:52.765313 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8x64g_2e759e28-291e-43b4-b856-8c89fd7af5ae/extract-content/0.log" Feb 02 22:52:52 crc kubenswrapper[4789]: I0202 22:52:52.823160 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8x64g_2e759e28-291e-43b4-b856-8c89fd7af5ae/extract-utilities/0.log" Feb 02 22:52:52 crc kubenswrapper[4789]: I0202 22:52:52.936313 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8x64g_2e759e28-291e-43b4-b856-8c89fd7af5ae/extract-content/0.log" Feb 02 22:52:53 crc kubenswrapper[4789]: I0202 22:52:53.040899 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tq6p5_8164a9db-3349-43ca-9927-b326f01ab26d/registry-server/0.log" Feb 02 22:52:53 crc kubenswrapper[4789]: I0202 22:52:53.047475 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8x64g_2e759e28-291e-43b4-b856-8c89fd7af5ae/extract-content/0.log" Feb 02 22:52:53 crc kubenswrapper[4789]: I0202 22:52:53.051733 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8x64g_2e759e28-291e-43b4-b856-8c89fd7af5ae/extract-utilities/0.log" Feb 02 22:52:53 crc kubenswrapper[4789]: I0202 22:52:53.239650 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-jppg9_baf8f0ee-a9ae-4e65-ad37-b9cea71e0a91/marketplace-operator/0.log" Feb 02 22:52:53 crc kubenswrapper[4789]: I0202 22:52:53.387176 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8x64g_2e759e28-291e-43b4-b856-8c89fd7af5ae/registry-server/0.log" Feb 02 22:52:53 crc kubenswrapper[4789]: I0202 22:52:53.462981 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-54dlc_eaae7980-489d-4b4d-ae1f-02949a4f8e12/extract-utilities/0.log" Feb 02 22:52:53 crc kubenswrapper[4789]: I0202 22:52:53.599199 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-54dlc_eaae7980-489d-4b4d-ae1f-02949a4f8e12/extract-utilities/0.log" Feb 02 22:52:53 crc kubenswrapper[4789]: I0202 22:52:53.633176 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-54dlc_eaae7980-489d-4b4d-ae1f-02949a4f8e12/extract-content/0.log" Feb 02 22:52:53 crc kubenswrapper[4789]: I0202 
22:52:53.639252 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-54dlc_eaae7980-489d-4b4d-ae1f-02949a4f8e12/extract-content/0.log" Feb 02 22:52:53 crc kubenswrapper[4789]: I0202 22:52:53.799224 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-54dlc_eaae7980-489d-4b4d-ae1f-02949a4f8e12/extract-utilities/0.log" Feb 02 22:52:53 crc kubenswrapper[4789]: I0202 22:52:53.837609 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-54dlc_eaae7980-489d-4b4d-ae1f-02949a4f8e12/extract-content/0.log" Feb 02 22:52:53 crc kubenswrapper[4789]: I0202 22:52:53.999274 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hh84g_40611cb2-5a59-49f8-905f-ce117f332665/extract-utilities/0.log" Feb 02 22:52:54 crc kubenswrapper[4789]: I0202 22:52:54.027374 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-54dlc_eaae7980-489d-4b4d-ae1f-02949a4f8e12/registry-server/0.log" Feb 02 22:52:54 crc kubenswrapper[4789]: I0202 22:52:54.151784 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hh84g_40611cb2-5a59-49f8-905f-ce117f332665/extract-utilities/0.log" Feb 02 22:52:54 crc kubenswrapper[4789]: I0202 22:52:54.178050 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hh84g_40611cb2-5a59-49f8-905f-ce117f332665/extract-content/0.log" Feb 02 22:52:54 crc kubenswrapper[4789]: I0202 22:52:54.190033 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hh84g_40611cb2-5a59-49f8-905f-ce117f332665/extract-content/0.log" Feb 02 22:52:54 crc kubenswrapper[4789]: I0202 22:52:54.367335 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hh84g_40611cb2-5a59-49f8-905f-ce117f332665/extract-utilities/0.log" Feb 02 22:52:54 crc kubenswrapper[4789]: I0202 22:52:54.372983 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hh84g_40611cb2-5a59-49f8-905f-ce117f332665/extract-content/0.log" Feb 02 22:52:55 crc kubenswrapper[4789]: I0202 22:52:55.073866 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hh84g_40611cb2-5a59-49f8-905f-ce117f332665/registry-server/0.log" Feb 02 22:52:55 crc kubenswrapper[4789]: I0202 22:52:55.420264 4789 scope.go:117] "RemoveContainer" containerID="16b35fede0307e2768edeae9bc17688f5a8ff6427f2ae9305826087a6effbd74" Feb 02 22:52:55 crc kubenswrapper[4789]: I0202 22:52:55.861171 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" event={"ID":"bdf018b4-1451-4d37-be6e-05802b67c73e","Type":"ContainerStarted","Data":"bf3b364c33c8733778c6c21ccccb5be0c702c8000d59499366aaef19c768aa32"} Feb 02 22:53:16 crc kubenswrapper[4789]: E0202 22:53:16.457443 4789 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.189:60012->38.102.83.189:36729: write tcp 38.102.83.189:60012->38.102.83.189:36729: write: connection reset by peer Feb 02 22:53:30 crc kubenswrapper[4789]: I0202 22:53:30.288191 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j4922"] Feb 02 22:53:30 crc kubenswrapper[4789]: E0202 22:53:30.289328 4789 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="549c829f-4be0-412c-b018-deab3d4cf2dd" containerName="extract-content" Feb 02 22:53:30 crc kubenswrapper[4789]: I0202 22:53:30.289342 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="549c829f-4be0-412c-b018-deab3d4cf2dd" containerName="extract-content" Feb 02 22:53:30 crc kubenswrapper[4789]: E0202 22:53:30.289366 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="549c829f-4be0-412c-b018-deab3d4cf2dd" containerName="registry-server" Feb 02 22:53:30 crc kubenswrapper[4789]: I0202 22:53:30.289371 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="549c829f-4be0-412c-b018-deab3d4cf2dd" containerName="registry-server" Feb 02 22:53:30 crc kubenswrapper[4789]: E0202 22:53:30.289381 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="549c829f-4be0-412c-b018-deab3d4cf2dd" containerName="extract-utilities" Feb 02 22:53:30 crc kubenswrapper[4789]: I0202 22:53:30.289387 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="549c829f-4be0-412c-b018-deab3d4cf2dd" containerName="extract-utilities" Feb 02 22:53:30 crc kubenswrapper[4789]: I0202 22:53:30.289542 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="549c829f-4be0-412c-b018-deab3d4cf2dd" containerName="registry-server" Feb 02 22:53:30 crc kubenswrapper[4789]: I0202 22:53:30.290564 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j4922" Feb 02 22:53:30 crc kubenswrapper[4789]: I0202 22:53:30.305603 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4922"] Feb 02 22:53:30 crc kubenswrapper[4789]: I0202 22:53:30.343858 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12378aa9-5001-43a7-81e4-2a625279d27f-utilities\") pod \"redhat-marketplace-j4922\" (UID: \"12378aa9-5001-43a7-81e4-2a625279d27f\") " pod="openshift-marketplace/redhat-marketplace-j4922" Feb 02 22:53:30 crc kubenswrapper[4789]: I0202 22:53:30.344097 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12378aa9-5001-43a7-81e4-2a625279d27f-catalog-content\") pod \"redhat-marketplace-j4922\" (UID: \"12378aa9-5001-43a7-81e4-2a625279d27f\") " pod="openshift-marketplace/redhat-marketplace-j4922" Feb 02 22:53:30 crc kubenswrapper[4789]: I0202 22:53:30.344283 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zslj\" (UniqueName: \"kubernetes.io/projected/12378aa9-5001-43a7-81e4-2a625279d27f-kube-api-access-6zslj\") pod \"redhat-marketplace-j4922\" (UID: \"12378aa9-5001-43a7-81e4-2a625279d27f\") " pod="openshift-marketplace/redhat-marketplace-j4922" Feb 02 22:53:30 crc kubenswrapper[4789]: I0202 22:53:30.445877 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zslj\" (UniqueName: \"kubernetes.io/projected/12378aa9-5001-43a7-81e4-2a625279d27f-kube-api-access-6zslj\") pod \"redhat-marketplace-j4922\" (UID: \"12378aa9-5001-43a7-81e4-2a625279d27f\") " pod="openshift-marketplace/redhat-marketplace-j4922" Feb 02 22:53:30 crc kubenswrapper[4789]: I0202 22:53:30.446005 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/12378aa9-5001-43a7-81e4-2a625279d27f-utilities\") pod \"redhat-marketplace-j4922\" (UID: \"12378aa9-5001-43a7-81e4-2a625279d27f\") " pod="openshift-marketplace/redhat-marketplace-j4922" Feb 02 22:53:30 crc kubenswrapper[4789]: I0202 22:53:30.446152 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12378aa9-5001-43a7-81e4-2a625279d27f-catalog-content\") pod \"redhat-marketplace-j4922\" (UID: \"12378aa9-5001-43a7-81e4-2a625279d27f\") " pod="openshift-marketplace/redhat-marketplace-j4922" Feb 02 22:53:30 crc kubenswrapper[4789]: I0202 22:53:30.446846 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12378aa9-5001-43a7-81e4-2a625279d27f-utilities\") pod \"redhat-marketplace-j4922\" (UID: \"12378aa9-5001-43a7-81e4-2a625279d27f\") " pod="openshift-marketplace/redhat-marketplace-j4922" Feb 02 22:53:30 crc kubenswrapper[4789]: I0202 22:53:30.446888 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12378aa9-5001-43a7-81e4-2a625279d27f-catalog-content\") pod \"redhat-marketplace-j4922\" (UID: \"12378aa9-5001-43a7-81e4-2a625279d27f\") " pod="openshift-marketplace/redhat-marketplace-j4922" Feb 02 22:53:30 crc kubenswrapper[4789]: I0202 22:53:30.479114 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zslj\" (UniqueName: \"kubernetes.io/projected/12378aa9-5001-43a7-81e4-2a625279d27f-kube-api-access-6zslj\") pod \"redhat-marketplace-j4922\" (UID: \"12378aa9-5001-43a7-81e4-2a625279d27f\") " pod="openshift-marketplace/redhat-marketplace-j4922" Feb 02 22:53:30 crc kubenswrapper[4789]: I0202 22:53:30.605904 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j4922" Feb 02 22:53:31 crc kubenswrapper[4789]: I0202 22:53:31.055782 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4922"] Feb 02 22:53:31 crc kubenswrapper[4789]: I0202 22:53:31.149785 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4922" event={"ID":"12378aa9-5001-43a7-81e4-2a625279d27f","Type":"ContainerStarted","Data":"a992237eedf4bb83ef0b8582996b68daf01a359856d65263e599787fc3004d6e"} Feb 02 22:53:32 crc kubenswrapper[4789]: I0202 22:53:32.162175 4789 generic.go:334] "Generic (PLEG): container finished" podID="12378aa9-5001-43a7-81e4-2a625279d27f" containerID="81750bae8ec1ad0319e6a847581d0a01a2add1ab77e33c1edf52e2f87febf9e3" exitCode=0 Feb 02 22:53:32 crc kubenswrapper[4789]: I0202 22:53:32.162253 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4922" event={"ID":"12378aa9-5001-43a7-81e4-2a625279d27f","Type":"ContainerDied","Data":"81750bae8ec1ad0319e6a847581d0a01a2add1ab77e33c1edf52e2f87febf9e3"} Feb 02 22:53:32 crc kubenswrapper[4789]: I0202 22:53:32.164978 4789 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 22:53:33 crc kubenswrapper[4789]: I0202 22:53:33.177146 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4922" event={"ID":"12378aa9-5001-43a7-81e4-2a625279d27f","Type":"ContainerStarted","Data":"1ebcb4cb14654cbd09ca6a7353aa5c764c1d303b7eb4cac98583e9f05d2dc6e4"} Feb 02 22:53:34 crc kubenswrapper[4789]: I0202 22:53:34.190934 4789 generic.go:334] "Generic (PLEG): container finished" podID="12378aa9-5001-43a7-81e4-2a625279d27f" containerID="1ebcb4cb14654cbd09ca6a7353aa5c764c1d303b7eb4cac98583e9f05d2dc6e4" exitCode=0 Feb 02 22:53:34 crc kubenswrapper[4789]: I0202 22:53:34.190985 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4922" event={"ID":"12378aa9-5001-43a7-81e4-2a625279d27f","Type":"ContainerDied","Data":"1ebcb4cb14654cbd09ca6a7353aa5c764c1d303b7eb4cac98583e9f05d2dc6e4"} Feb 02 22:53:34 crc kubenswrapper[4789]: I0202 22:53:34.191220 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4922" event={"ID":"12378aa9-5001-43a7-81e4-2a625279d27f","Type":"ContainerStarted","Data":"a0c9f7faaf0be98ebd1531e98df6058f987a03660b438ad5b3d526a9af5dca0f"} Feb 02 22:53:34 crc kubenswrapper[4789]: I0202 22:53:34.226975 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j4922" podStartSLOduration=2.818079553 podStartE2EDuration="4.226959315s" podCreationTimestamp="2026-02-02 22:53:30 +0000 UTC" firstStartedPulling="2026-02-02 22:53:32.164502319 +0000 UTC m=+5632.459527368" lastFinishedPulling="2026-02-02 22:53:33.573382071 +0000 UTC m=+5633.868407130" observedRunningTime="2026-02-02 22:53:34.216604854 +0000 UTC m=+5634.511629913" watchObservedRunningTime="2026-02-02 22:53:34.226959315 +0000 UTC m=+5634.521984334" Feb 02 22:53:40 crc kubenswrapper[4789]: I0202 22:53:40.606159 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j4922" Feb 02 22:53:40 crc kubenswrapper[4789]: I0202 22:53:40.606969 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j4922" Feb 02 
22:53:40 crc kubenswrapper[4789]: I0202 22:53:40.709152 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j4922" Feb 02 22:53:41 crc kubenswrapper[4789]: I0202 22:53:41.342233 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j4922" Feb 02 22:53:44 crc kubenswrapper[4789]: I0202 22:53:44.227747 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4922"] Feb 02 22:53:44 crc kubenswrapper[4789]: I0202 22:53:44.228859 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j4922" podUID="12378aa9-5001-43a7-81e4-2a625279d27f" containerName="registry-server" containerID="cri-o://a0c9f7faaf0be98ebd1531e98df6058f987a03660b438ad5b3d526a9af5dca0f" gracePeriod=2 Feb 02 22:53:44 crc kubenswrapper[4789]: I0202 22:53:44.736854 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j4922" Feb 02 22:53:44 crc kubenswrapper[4789]: I0202 22:53:44.811958 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12378aa9-5001-43a7-81e4-2a625279d27f-utilities\") pod \"12378aa9-5001-43a7-81e4-2a625279d27f\" (UID: \"12378aa9-5001-43a7-81e4-2a625279d27f\") " Feb 02 22:53:44 crc kubenswrapper[4789]: I0202 22:53:44.812110 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zslj\" (UniqueName: \"kubernetes.io/projected/12378aa9-5001-43a7-81e4-2a625279d27f-kube-api-access-6zslj\") pod \"12378aa9-5001-43a7-81e4-2a625279d27f\" (UID: \"12378aa9-5001-43a7-81e4-2a625279d27f\") " Feb 02 22:53:44 crc kubenswrapper[4789]: I0202 22:53:44.813030 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12378aa9-5001-43a7-81e4-2a625279d27f-catalog-content\") pod \"12378aa9-5001-43a7-81e4-2a625279d27f\" (UID: \"12378aa9-5001-43a7-81e4-2a625279d27f\") " Feb 02 22:53:44 crc kubenswrapper[4789]: I0202 22:53:44.813529 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12378aa9-5001-43a7-81e4-2a625279d27f-utilities" (OuterVolumeSpecName: "utilities") pod "12378aa9-5001-43a7-81e4-2a625279d27f" (UID: "12378aa9-5001-43a7-81e4-2a625279d27f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 22:53:44 crc kubenswrapper[4789]: I0202 22:53:44.826915 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12378aa9-5001-43a7-81e4-2a625279d27f-kube-api-access-6zslj" (OuterVolumeSpecName: "kube-api-access-6zslj") pod "12378aa9-5001-43a7-81e4-2a625279d27f" (UID: "12378aa9-5001-43a7-81e4-2a625279d27f"). InnerVolumeSpecName "kube-api-access-6zslj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 22:53:44 crc kubenswrapper[4789]: I0202 22:53:44.852983 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12378aa9-5001-43a7-81e4-2a625279d27f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "12378aa9-5001-43a7-81e4-2a625279d27f" (UID: "12378aa9-5001-43a7-81e4-2a625279d27f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 22:53:44 crc kubenswrapper[4789]: I0202 22:53:44.915460 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12378aa9-5001-43a7-81e4-2a625279d27f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 22:53:44 crc kubenswrapper[4789]: I0202 22:53:44.915499 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12378aa9-5001-43a7-81e4-2a625279d27f-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 22:53:44 crc kubenswrapper[4789]: I0202 22:53:44.915514 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zslj\" (UniqueName: \"kubernetes.io/projected/12378aa9-5001-43a7-81e4-2a625279d27f-kube-api-access-6zslj\") on node \"crc\" DevicePath \"\"" Feb 02 22:53:45 crc kubenswrapper[4789]: I0202 22:53:45.310961 4789 generic.go:334] "Generic (PLEG): container finished" podID="12378aa9-5001-43a7-81e4-2a625279d27f" containerID="a0c9f7faaf0be98ebd1531e98df6058f987a03660b438ad5b3d526a9af5dca0f" exitCode=0 Feb 02 22:53:45 crc kubenswrapper[4789]: I0202 22:53:45.311033 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4922" event={"ID":"12378aa9-5001-43a7-81e4-2a625279d27f","Type":"ContainerDied","Data":"a0c9f7faaf0be98ebd1531e98df6058f987a03660b438ad5b3d526a9af5dca0f"} Feb 02 22:53:45 crc kubenswrapper[4789]: I0202 22:53:45.311132 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4922" event={"ID":"12378aa9-5001-43a7-81e4-2a625279d27f","Type":"ContainerDied","Data":"a992237eedf4bb83ef0b8582996b68daf01a359856d65263e599787fc3004d6e"} Feb 02 22:53:45 crc kubenswrapper[4789]: I0202 22:53:45.311175 4789 scope.go:117] "RemoveContainer" containerID="a0c9f7faaf0be98ebd1531e98df6058f987a03660b438ad5b3d526a9af5dca0f" Feb 02 22:53:45 crc kubenswrapper[4789]: I0202 22:53:45.313870 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j4922" Feb 02 22:53:45 crc kubenswrapper[4789]: I0202 22:53:45.343341 4789 scope.go:117] "RemoveContainer" containerID="1ebcb4cb14654cbd09ca6a7353aa5c764c1d303b7eb4cac98583e9f05d2dc6e4" Feb 02 22:53:45 crc kubenswrapper[4789]: I0202 22:53:45.387774 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4922"] Feb 02 22:53:45 crc kubenswrapper[4789]: I0202 22:53:45.400415 4789 scope.go:117] "RemoveContainer" containerID="81750bae8ec1ad0319e6a847581d0a01a2add1ab77e33c1edf52e2f87febf9e3" Feb 02 22:53:45 crc kubenswrapper[4789]: I0202 22:53:45.411665 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4922"] Feb 02 22:53:45 crc kubenswrapper[4789]: I0202 22:53:45.440762 4789 scope.go:117] "RemoveContainer" containerID="a0c9f7faaf0be98ebd1531e98df6058f987a03660b438ad5b3d526a9af5dca0f" Feb 02 22:53:45 crc kubenswrapper[4789]: E0202 22:53:45.441467 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0c9f7faaf0be98ebd1531e98df6058f987a03660b438ad5b3d526a9af5dca0f\": container with ID starting with a0c9f7faaf0be98ebd1531e98df6058f987a03660b438ad5b3d526a9af5dca0f not found: ID does not exist" containerID="a0c9f7faaf0be98ebd1531e98df6058f987a03660b438ad5b3d526a9af5dca0f" Feb 02 22:53:45 crc kubenswrapper[4789]: I0202 22:53:45.441505 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0c9f7faaf0be98ebd1531e98df6058f987a03660b438ad5b3d526a9af5dca0f"} err="failed to get container status \"a0c9f7faaf0be98ebd1531e98df6058f987a03660b438ad5b3d526a9af5dca0f\": rpc error: code = NotFound desc = could not find container \"a0c9f7faaf0be98ebd1531e98df6058f987a03660b438ad5b3d526a9af5dca0f\": container with ID starting with a0c9f7faaf0be98ebd1531e98df6058f987a03660b438ad5b3d526a9af5dca0f not found: ID does not exist" Feb 02 22:53:45 crc kubenswrapper[4789]: I0202 22:53:45.441529 4789 scope.go:117] "RemoveContainer" containerID="1ebcb4cb14654cbd09ca6a7353aa5c764c1d303b7eb4cac98583e9f05d2dc6e4" Feb 02 22:53:45 crc kubenswrapper[4789]: E0202 22:53:45.442186 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ebcb4cb14654cbd09ca6a7353aa5c764c1d303b7eb4cac98583e9f05d2dc6e4\": container with ID starting with 1ebcb4cb14654cbd09ca6a7353aa5c764c1d303b7eb4cac98583e9f05d2dc6e4 not found: ID does not exist" containerID="1ebcb4cb14654cbd09ca6a7353aa5c764c1d303b7eb4cac98583e9f05d2dc6e4" Feb 02 22:53:45 crc kubenswrapper[4789]: I0202 22:53:45.442235 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ebcb4cb14654cbd09ca6a7353aa5c764c1d303b7eb4cac98583e9f05d2dc6e4"} err="failed to get container status \"1ebcb4cb14654cbd09ca6a7353aa5c764c1d303b7eb4cac98583e9f05d2dc6e4\": rpc error: code = NotFound desc = could not find container \"1ebcb4cb14654cbd09ca6a7353aa5c764c1d303b7eb4cac98583e9f05d2dc6e4\": container with ID starting with 1ebcb4cb14654cbd09ca6a7353aa5c764c1d303b7eb4cac98583e9f05d2dc6e4 not found: ID does not exist" Feb 02 22:53:45 crc kubenswrapper[4789]: I0202 22:53:45.442267 4789 scope.go:117] "RemoveContainer" containerID="81750bae8ec1ad0319e6a847581d0a01a2add1ab77e33c1edf52e2f87febf9e3" Feb 02 22:53:45 crc kubenswrapper[4789]: E0202 22:53:45.442722 4789 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"81750bae8ec1ad0319e6a847581d0a01a2add1ab77e33c1edf52e2f87febf9e3\": container with ID starting with 81750bae8ec1ad0319e6a847581d0a01a2add1ab77e33c1edf52e2f87febf9e3 not found: ID does not exist" containerID="81750bae8ec1ad0319e6a847581d0a01a2add1ab77e33c1edf52e2f87febf9e3" Feb 02 22:53:45 crc kubenswrapper[4789]: I0202 22:53:45.442788 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81750bae8ec1ad0319e6a847581d0a01a2add1ab77e33c1edf52e2f87febf9e3"} err="failed to get container status \"81750bae8ec1ad0319e6a847581d0a01a2add1ab77e33c1edf52e2f87febf9e3\": rpc error: code = NotFound desc = could not find container \"81750bae8ec1ad0319e6a847581d0a01a2add1ab77e33c1edf52e2f87febf9e3\": container with ID starting with 81750bae8ec1ad0319e6a847581d0a01a2add1ab77e33c1edf52e2f87febf9e3 not found: ID does not exist" Feb 02 22:53:46 crc kubenswrapper[4789]: I0202 22:53:46.438803 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12378aa9-5001-43a7-81e4-2a625279d27f" path="/var/lib/kubelet/pods/12378aa9-5001-43a7-81e4-2a625279d27f/volumes" Feb 02 22:54:13 crc kubenswrapper[4789]: I0202 22:54:13.283448 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9jcj4"] Feb 02 22:54:13 crc kubenswrapper[4789]: E0202 22:54:13.284510 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12378aa9-5001-43a7-81e4-2a625279d27f" containerName="extract-utilities" Feb 02 22:54:13 crc kubenswrapper[4789]: I0202 22:54:13.284533 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="12378aa9-5001-43a7-81e4-2a625279d27f" containerName="extract-utilities" Feb 02 22:54:13 crc kubenswrapper[4789]: E0202 22:54:13.284559 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12378aa9-5001-43a7-81e4-2a625279d27f" containerName="extract-content" Feb 02 22:54:13 crc kubenswrapper[4789]: I0202 22:54:13.284572 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="12378aa9-5001-43a7-81e4-2a625279d27f" containerName="extract-content" Feb 02 22:54:13 crc kubenswrapper[4789]: E0202 22:54:13.284635 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12378aa9-5001-43a7-81e4-2a625279d27f" containerName="registry-server" Feb 02 22:54:13 crc kubenswrapper[4789]: I0202 22:54:13.284649 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="12378aa9-5001-43a7-81e4-2a625279d27f" containerName="registry-server" Feb 02 22:54:13 crc kubenswrapper[4789]: I0202 22:54:13.284949 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="12378aa9-5001-43a7-81e4-2a625279d27f" containerName="registry-server" Feb 02 22:54:13 crc kubenswrapper[4789]: I0202 22:54:13.287062 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9jcj4" Feb 02 22:54:13 crc kubenswrapper[4789]: I0202 22:54:13.311934 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9jcj4"] Feb 02 22:54:13 crc kubenswrapper[4789]: I0202 22:54:13.352927 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e3bcb3f-4281-4e36-b361-9ed7fe3105dc-catalog-content\") pod \"community-operators-9jcj4\" (UID: \"7e3bcb3f-4281-4e36-b361-9ed7fe3105dc\") " pod="openshift-marketplace/community-operators-9jcj4" Feb 02 22:54:13 crc kubenswrapper[4789]: I0202 22:54:13.353095 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e3bcb3f-4281-4e36-b361-9ed7fe3105dc-utilities\") pod \"community-operators-9jcj4\" (UID: \"7e3bcb3f-4281-4e36-b361-9ed7fe3105dc\") " pod="openshift-marketplace/community-operators-9jcj4" Feb 02 22:54:13 crc kubenswrapper[4789]: I0202 22:54:13.353256 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqrl6\" (UniqueName: \"kubernetes.io/projected/7e3bcb3f-4281-4e36-b361-9ed7fe3105dc-kube-api-access-qqrl6\") pod \"community-operators-9jcj4\" (UID: \"7e3bcb3f-4281-4e36-b361-9ed7fe3105dc\") " pod="openshift-marketplace/community-operators-9jcj4" Feb 02 22:54:13 crc kubenswrapper[4789]: I0202 22:54:13.454697 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e3bcb3f-4281-4e36-b361-9ed7fe3105dc-catalog-content\") pod \"community-operators-9jcj4\" (UID: \"7e3bcb3f-4281-4e36-b361-9ed7fe3105dc\") " pod="openshift-marketplace/community-operators-9jcj4" Feb 02 22:54:13 crc kubenswrapper[4789]: I0202 22:54:13.454890 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e3bcb3f-4281-4e36-b361-9ed7fe3105dc-utilities\") pod \"community-operators-9jcj4\" (UID: \"7e3bcb3f-4281-4e36-b361-9ed7fe3105dc\") " pod="openshift-marketplace/community-operators-9jcj4" Feb 02 22:54:13 crc kubenswrapper[4789]: I0202 22:54:13.454977 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqrl6\" (UniqueName: \"kubernetes.io/projected/7e3bcb3f-4281-4e36-b361-9ed7fe3105dc-kube-api-access-qqrl6\") pod \"community-operators-9jcj4\" (UID: \"7e3bcb3f-4281-4e36-b361-9ed7fe3105dc\") " pod="openshift-marketplace/community-operators-9jcj4" Feb 02 22:54:13 crc kubenswrapper[4789]: I0202 22:54:13.455225 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e3bcb3f-4281-4e36-b361-9ed7fe3105dc-catalog-content\") pod \"community-operators-9jcj4\" (UID: \"7e3bcb3f-4281-4e36-b361-9ed7fe3105dc\") " pod="openshift-marketplace/community-operators-9jcj4" Feb 02 22:54:13 crc kubenswrapper[4789]: I0202 22:54:13.455288 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e3bcb3f-4281-4e36-b361-9ed7fe3105dc-utilities\") pod \"community-operators-9jcj4\" (UID: \"7e3bcb3f-4281-4e36-b361-9ed7fe3105dc\") " pod="openshift-marketplace/community-operators-9jcj4" Feb 02 22:54:13 crc kubenswrapper[4789]: I0202 22:54:13.487683 4789 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qqrl6\" (UniqueName: \"kubernetes.io/projected/7e3bcb3f-4281-4e36-b361-9ed7fe3105dc-kube-api-access-qqrl6\") pod \"community-operators-9jcj4\" (UID: \"7e3bcb3f-4281-4e36-b361-9ed7fe3105dc\") " pod="openshift-marketplace/community-operators-9jcj4" Feb 02 22:54:13 crc kubenswrapper[4789]: I0202 22:54:13.651878 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9jcj4" Feb 02 22:54:14 crc kubenswrapper[4789]: I0202 22:54:14.097101 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9jcj4"] Feb 02 22:54:14 crc kubenswrapper[4789]: I0202 22:54:14.653309 4789 generic.go:334] "Generic (PLEG): container finished" podID="7e3bcb3f-4281-4e36-b361-9ed7fe3105dc" containerID="3ca5390afce8c621fd331bfb5b0612fe321ab157255778c181c554607296f90b" exitCode=0 Feb 02 22:54:14 crc kubenswrapper[4789]: I0202 22:54:14.653363 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jcj4" event={"ID":"7e3bcb3f-4281-4e36-b361-9ed7fe3105dc","Type":"ContainerDied","Data":"3ca5390afce8c621fd331bfb5b0612fe321ab157255778c181c554607296f90b"} Feb 02 22:54:14 crc kubenswrapper[4789]: I0202 22:54:14.653399 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jcj4" event={"ID":"7e3bcb3f-4281-4e36-b361-9ed7fe3105dc","Type":"ContainerStarted","Data":"b2500202e86244b6bdabd2622313b14b88988b8aacb56ad8c81fae56e34d1654"} Feb 02 22:54:15 crc kubenswrapper[4789]: I0202 22:54:15.664972 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jcj4" event={"ID":"7e3bcb3f-4281-4e36-b361-9ed7fe3105dc","Type":"ContainerStarted","Data":"23cee1609cdd4d70c1e581a3742cb386b139d0f6dbae240b8f55dfb34f67cd06"} Feb 02 22:54:16 crc kubenswrapper[4789]: I0202 22:54:16.696444 4789 generic.go:334] "Generic (PLEG): container finished" podID="7e3bcb3f-4281-4e36-b361-9ed7fe3105dc" containerID="23cee1609cdd4d70c1e581a3742cb386b139d0f6dbae240b8f55dfb34f67cd06" exitCode=0 Feb 02 22:54:16 crc kubenswrapper[4789]: I0202 22:54:16.696550 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jcj4" event={"ID":"7e3bcb3f-4281-4e36-b361-9ed7fe3105dc","Type":"ContainerDied","Data":"23cee1609cdd4d70c1e581a3742cb386b139d0f6dbae240b8f55dfb34f67cd06"} Feb 02 22:54:17 crc kubenswrapper[4789]: I0202 22:54:17.717487 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jcj4" event={"ID":"7e3bcb3f-4281-4e36-b361-9ed7fe3105dc","Type":"ContainerStarted","Data":"952e942f4bbc95453ac8b6deb57cb7550b1aeb7371eda5fdc163a277a4df58f5"} Feb 02 22:54:23 crc kubenswrapper[4789]: I0202 22:54:23.652864 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9jcj4" Feb 02 22:54:23 crc kubenswrapper[4789]: I0202 22:54:23.653755 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9jcj4" Feb 02 22:54:23 crc kubenswrapper[4789]: I0202 22:54:23.724867 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9jcj4" Feb 02 22:54:23 crc kubenswrapper[4789]: I0202 22:54:23.765156 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-9jcj4" podStartSLOduration=8.281823393 podStartE2EDuration="10.765122366s" podCreationTimestamp="2026-02-02 22:54:13 +0000 UTC" firstStartedPulling="2026-02-02 22:54:14.656849906 +0000 UTC m=+5674.951874935" lastFinishedPulling="2026-02-02 22:54:17.140148879 +0000 UTC m=+5677.435173908" observedRunningTime="2026-02-02 22:54:17.737979426 +0000 UTC m=+5678.033004445" watchObservedRunningTime="2026-02-02 22:54:23.765122366 +0000 UTC m=+5684.060147415" Feb 02 22:54:23 crc kubenswrapper[4789]: I0202 22:54:23.841061 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9jcj4" Feb 02 22:54:27 crc kubenswrapper[4789]: I0202 22:54:27.384574 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9jcj4"] Feb 02 22:54:27 crc kubenswrapper[4789]: I0202 22:54:27.385903 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9jcj4" podUID="7e3bcb3f-4281-4e36-b361-9ed7fe3105dc" containerName="registry-server" containerID="cri-o://952e942f4bbc95453ac8b6deb57cb7550b1aeb7371eda5fdc163a277a4df58f5" gracePeriod=2 Feb 02 22:54:27 crc kubenswrapper[4789]: I0202 22:54:27.822975 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9jcj4" Feb 02 22:54:27 crc kubenswrapper[4789]: I0202 22:54:27.839420 4789 generic.go:334] "Generic (PLEG): container finished" podID="7e3bcb3f-4281-4e36-b361-9ed7fe3105dc" containerID="952e942f4bbc95453ac8b6deb57cb7550b1aeb7371eda5fdc163a277a4df58f5" exitCode=0 Feb 02 22:54:27 crc kubenswrapper[4789]: I0202 22:54:27.839487 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jcj4" event={"ID":"7e3bcb3f-4281-4e36-b361-9ed7fe3105dc","Type":"ContainerDied","Data":"952e942f4bbc95453ac8b6deb57cb7550b1aeb7371eda5fdc163a277a4df58f5"} Feb 02 22:54:27 crc kubenswrapper[4789]: I0202 22:54:27.839518 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jcj4" event={"ID":"7e3bcb3f-4281-4e36-b361-9ed7fe3105dc","Type":"ContainerDied","Data":"b2500202e86244b6bdabd2622313b14b88988b8aacb56ad8c81fae56e34d1654"} Feb 02 22:54:27 crc kubenswrapper[4789]: I0202 22:54:27.839540 4789 scope.go:117] "RemoveContainer" containerID="952e942f4bbc95453ac8b6deb57cb7550b1aeb7371eda5fdc163a277a4df58f5" Feb 02 22:54:27 crc kubenswrapper[4789]: I0202 22:54:27.840051 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9jcj4" Feb 02 22:54:27 crc kubenswrapper[4789]: I0202 22:54:27.870970 4789 scope.go:117] "RemoveContainer" containerID="23cee1609cdd4d70c1e581a3742cb386b139d0f6dbae240b8f55dfb34f67cd06" Feb 02 22:54:27 crc kubenswrapper[4789]: I0202 22:54:27.896642 4789 scope.go:117] "RemoveContainer" containerID="3ca5390afce8c621fd331bfb5b0612fe321ab157255778c181c554607296f90b" Feb 02 22:54:27 crc kubenswrapper[4789]: I0202 22:54:27.922191 4789 scope.go:117] "RemoveContainer" containerID="952e942f4bbc95453ac8b6deb57cb7550b1aeb7371eda5fdc163a277a4df58f5" Feb 02 22:54:27 crc kubenswrapper[4789]: E0202 22:54:27.922805 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"952e942f4bbc95453ac8b6deb57cb7550b1aeb7371eda5fdc163a277a4df58f5\": container with ID starting with 952e942f4bbc95453ac8b6deb57cb7550b1aeb7371eda5fdc163a277a4df58f5 not found: ID does not exist" containerID="952e942f4bbc95453ac8b6deb57cb7550b1aeb7371eda5fdc163a277a4df58f5" Feb 02 22:54:27 crc kubenswrapper[4789]: I0202 22:54:27.922863 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"952e942f4bbc95453ac8b6deb57cb7550b1aeb7371eda5fdc163a277a4df58f5"} err="failed to get container status \"952e942f4bbc95453ac8b6deb57cb7550b1aeb7371eda5fdc163a277a4df58f5\": rpc error: code = NotFound desc = could not find container \"952e942f4bbc95453ac8b6deb57cb7550b1aeb7371eda5fdc163a277a4df58f5\": container with ID starting with 952e942f4bbc95453ac8b6deb57cb7550b1aeb7371eda5fdc163a277a4df58f5 not found: ID does not exist" Feb 02 22:54:27 crc kubenswrapper[4789]: I0202 22:54:27.922894 4789 scope.go:117] "RemoveContainer" containerID="23cee1609cdd4d70c1e581a3742cb386b139d0f6dbae240b8f55dfb34f67cd06" Feb 02 22:54:27 crc kubenswrapper[4789]: E0202 22:54:27.923254 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23cee1609cdd4d70c1e581a3742cb386b139d0f6dbae240b8f55dfb34f67cd06\": container with ID starting with 23cee1609cdd4d70c1e581a3742cb386b139d0f6dbae240b8f55dfb34f67cd06 not found: ID does not exist" containerID="23cee1609cdd4d70c1e581a3742cb386b139d0f6dbae240b8f55dfb34f67cd06" Feb 02 22:54:27 crc kubenswrapper[4789]: I0202 22:54:27.923302 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23cee1609cdd4d70c1e581a3742cb386b139d0f6dbae240b8f55dfb34f67cd06"} err="failed to get container status \"23cee1609cdd4d70c1e581a3742cb386b139d0f6dbae240b8f55dfb34f67cd06\": rpc error: code = NotFound desc = could not find container \"23cee1609cdd4d70c1e581a3742cb386b139d0f6dbae240b8f55dfb34f67cd06\": container with ID starting with 23cee1609cdd4d70c1e581a3742cb386b139d0f6dbae240b8f55dfb34f67cd06 not found: ID does not exist" Feb 02 22:54:27 crc kubenswrapper[4789]: I0202 22:54:27.923341 4789 scope.go:117] "RemoveContainer" containerID="3ca5390afce8c621fd331bfb5b0612fe321ab157255778c181c554607296f90b" Feb 02 22:54:27 crc kubenswrapper[4789]: E0202 22:54:27.923743 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ca5390afce8c621fd331bfb5b0612fe321ab157255778c181c554607296f90b\": container with ID starting with 3ca5390afce8c621fd331bfb5b0612fe321ab157255778c181c554607296f90b not found: ID does not exist" containerID="3ca5390afce8c621fd331bfb5b0612fe321ab157255778c181c554607296f90b" 
Feb 02 22:54:27 crc kubenswrapper[4789]: I0202 22:54:27.923775 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ca5390afce8c621fd331bfb5b0612fe321ab157255778c181c554607296f90b"} err="failed to get container status \"3ca5390afce8c621fd331bfb5b0612fe321ab157255778c181c554607296f90b\": rpc error: code = NotFound desc = could not find container \"3ca5390afce8c621fd331bfb5b0612fe321ab157255778c181c554607296f90b\": container with ID starting with 3ca5390afce8c621fd331bfb5b0612fe321ab157255778c181c554607296f90b not found: ID does not exist" Feb 02 22:54:27 crc kubenswrapper[4789]: I0202 22:54:27.933614 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e3bcb3f-4281-4e36-b361-9ed7fe3105dc-utilities\") pod \"7e3bcb3f-4281-4e36-b361-9ed7fe3105dc\" (UID: \"7e3bcb3f-4281-4e36-b361-9ed7fe3105dc\") " Feb 02 22:54:27 crc kubenswrapper[4789]: I0202 22:54:27.933757 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqrl6\" (UniqueName: \"kubernetes.io/projected/7e3bcb3f-4281-4e36-b361-9ed7fe3105dc-kube-api-access-qqrl6\") pod \"7e3bcb3f-4281-4e36-b361-9ed7fe3105dc\" (UID: \"7e3bcb3f-4281-4e36-b361-9ed7fe3105dc\") " Feb 02 22:54:27 crc kubenswrapper[4789]: I0202 22:54:27.933988 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e3bcb3f-4281-4e36-b361-9ed7fe3105dc-catalog-content\") pod \"7e3bcb3f-4281-4e36-b361-9ed7fe3105dc\" (UID: \"7e3bcb3f-4281-4e36-b361-9ed7fe3105dc\") " Feb 02 22:54:27 crc kubenswrapper[4789]: I0202 22:54:27.935400 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e3bcb3f-4281-4e36-b361-9ed7fe3105dc-utilities" (OuterVolumeSpecName: "utilities") pod "7e3bcb3f-4281-4e36-b361-9ed7fe3105dc" (UID: "7e3bcb3f-4281-4e36-b361-9ed7fe3105dc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 22:54:27 crc kubenswrapper[4789]: I0202 22:54:27.939902 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e3bcb3f-4281-4e36-b361-9ed7fe3105dc-kube-api-access-qqrl6" (OuterVolumeSpecName: "kube-api-access-qqrl6") pod "7e3bcb3f-4281-4e36-b361-9ed7fe3105dc" (UID: "7e3bcb3f-4281-4e36-b361-9ed7fe3105dc"). InnerVolumeSpecName "kube-api-access-qqrl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 22:54:27 crc kubenswrapper[4789]: I0202 22:54:27.986264 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e3bcb3f-4281-4e36-b361-9ed7fe3105dc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7e3bcb3f-4281-4e36-b361-9ed7fe3105dc" (UID: "7e3bcb3f-4281-4e36-b361-9ed7fe3105dc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 22:54:28 crc kubenswrapper[4789]: I0202 22:54:28.037100 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e3bcb3f-4281-4e36-b361-9ed7fe3105dc-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 22:54:28 crc kubenswrapper[4789]: I0202 22:54:28.037153 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqrl6\" (UniqueName: \"kubernetes.io/projected/7e3bcb3f-4281-4e36-b361-9ed7fe3105dc-kube-api-access-qqrl6\") on node \"crc\" DevicePath \"\"" Feb 02 22:54:28 crc kubenswrapper[4789]: I0202 22:54:28.037175 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e3bcb3f-4281-4e36-b361-9ed7fe3105dc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 22:54:28 crc kubenswrapper[4789]: I0202 22:54:28.174086 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9jcj4"] Feb 02 22:54:28 crc kubenswrapper[4789]: I0202 22:54:28.181736 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9jcj4"] Feb 02 22:54:28 crc kubenswrapper[4789]: I0202 22:54:28.430021 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e3bcb3f-4281-4e36-b361-9ed7fe3105dc" path="/var/lib/kubelet/pods/7e3bcb3f-4281-4e36-b361-9ed7fe3105dc/volumes" Feb 02 22:54:51 crc kubenswrapper[4789]: I0202 22:54:51.125304 4789 generic.go:334] "Generic (PLEG): container finished" podID="e03574ac-341d-45ed-b979-12a6b34b7695" containerID="e5ac45ded34208685604b81214cf00ee100ff5905343616910ccf72f5f37ed1d" exitCode=0 Feb 02 22:54:51 crc kubenswrapper[4789]: I0202 22:54:51.125427 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sssmb/must-gather-nkf86" event={"ID":"e03574ac-341d-45ed-b979-12a6b34b7695","Type":"ContainerDied","Data":"e5ac45ded34208685604b81214cf00ee100ff5905343616910ccf72f5f37ed1d"} Feb 02 22:54:51 crc kubenswrapper[4789]: I0202 22:54:51.127113 4789 scope.go:117] "RemoveContainer" containerID="e5ac45ded34208685604b81214cf00ee100ff5905343616910ccf72f5f37ed1d" Feb 02 22:54:51 crc kubenswrapper[4789]: I0202 22:54:51.218516 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-sssmb_must-gather-nkf86_e03574ac-341d-45ed-b979-12a6b34b7695/gather/0.log" Feb 02 22:54:59 crc kubenswrapper[4789]: I0202 22:54:59.188740 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-sssmb/must-gather-nkf86"] Feb 02 22:54:59 crc kubenswrapper[4789]: I0202 22:54:59.190256 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-sssmb/must-gather-nkf86" podUID="e03574ac-341d-45ed-b979-12a6b34b7695" containerName="copy" containerID="cri-o://884eb54b44dcd39fcdda827d93fe73ad6d14ff5d9ce7807cef7f3f87ea950f0f" gracePeriod=2 Feb 02 22:54:59 crc kubenswrapper[4789]: I0202 22:54:59.199220 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-sssmb/must-gather-nkf86"] Feb 02 22:54:59 crc kubenswrapper[4789]: I0202 22:54:59.623827 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-sssmb_must-gather-nkf86_e03574ac-341d-45ed-b979-12a6b34b7695/copy/0.log" Feb 02 22:54:59 crc kubenswrapper[4789]: I0202 22:54:59.624448 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sssmb/must-gather-nkf86" Feb 02 22:54:59 crc kubenswrapper[4789]: I0202 22:54:59.791256 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e03574ac-341d-45ed-b979-12a6b34b7695-must-gather-output\") pod \"e03574ac-341d-45ed-b979-12a6b34b7695\" (UID: \"e03574ac-341d-45ed-b979-12a6b34b7695\") " Feb 02 22:54:59 crc kubenswrapper[4789]: I0202 22:54:59.791340 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7dq2\" (UniqueName: \"kubernetes.io/projected/e03574ac-341d-45ed-b979-12a6b34b7695-kube-api-access-m7dq2\") pod \"e03574ac-341d-45ed-b979-12a6b34b7695\" (UID: \"e03574ac-341d-45ed-b979-12a6b34b7695\") " Feb 02 22:54:59 crc kubenswrapper[4789]: I0202 22:54:59.799569 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e03574ac-341d-45ed-b979-12a6b34b7695-kube-api-access-m7dq2" (OuterVolumeSpecName: "kube-api-access-m7dq2") pod "e03574ac-341d-45ed-b979-12a6b34b7695" (UID: "e03574ac-341d-45ed-b979-12a6b34b7695"). InnerVolumeSpecName "kube-api-access-m7dq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 22:54:59 crc kubenswrapper[4789]: I0202 22:54:59.892927 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7dq2\" (UniqueName: \"kubernetes.io/projected/e03574ac-341d-45ed-b979-12a6b34b7695-kube-api-access-m7dq2\") on node \"crc\" DevicePath \"\"" Feb 02 22:54:59 crc kubenswrapper[4789]: I0202 22:54:59.898189 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e03574ac-341d-45ed-b979-12a6b34b7695-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "e03574ac-341d-45ed-b979-12a6b34b7695" (UID: "e03574ac-341d-45ed-b979-12a6b34b7695"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 22:54:59 crc kubenswrapper[4789]: I0202 22:54:59.995013 4789 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e03574ac-341d-45ed-b979-12a6b34b7695-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 02 22:55:00 crc kubenswrapper[4789]: I0202 22:55:00.219930 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-sssmb_must-gather-nkf86_e03574ac-341d-45ed-b979-12a6b34b7695/copy/0.log" Feb 02 22:55:00 crc kubenswrapper[4789]: I0202 22:55:00.220292 4789 generic.go:334] "Generic (PLEG): container finished" podID="e03574ac-341d-45ed-b979-12a6b34b7695" containerID="884eb54b44dcd39fcdda827d93fe73ad6d14ff5d9ce7807cef7f3f87ea950f0f" exitCode=143 Feb 02 22:55:00 crc kubenswrapper[4789]: I0202 22:55:00.220341 4789 scope.go:117] "RemoveContainer" containerID="884eb54b44dcd39fcdda827d93fe73ad6d14ff5d9ce7807cef7f3f87ea950f0f" Feb 02 22:55:00 crc kubenswrapper[4789]: I0202 22:55:00.220464 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sssmb/must-gather-nkf86" Feb 02 22:55:00 crc kubenswrapper[4789]: I0202 22:55:00.243258 4789 scope.go:117] "RemoveContainer" containerID="e5ac45ded34208685604b81214cf00ee100ff5905343616910ccf72f5f37ed1d" Feb 02 22:55:00 crc kubenswrapper[4789]: I0202 22:55:00.289396 4789 scope.go:117] "RemoveContainer" containerID="884eb54b44dcd39fcdda827d93fe73ad6d14ff5d9ce7807cef7f3f87ea950f0f" Feb 02 22:55:00 crc kubenswrapper[4789]: E0202 22:55:00.289904 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"884eb54b44dcd39fcdda827d93fe73ad6d14ff5d9ce7807cef7f3f87ea950f0f\": container with ID starting with 884eb54b44dcd39fcdda827d93fe73ad6d14ff5d9ce7807cef7f3f87ea950f0f not found: ID does not exist" containerID="884eb54b44dcd39fcdda827d93fe73ad6d14ff5d9ce7807cef7f3f87ea950f0f" Feb 02 22:55:00 crc kubenswrapper[4789]: I0202 22:55:00.289932 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"884eb54b44dcd39fcdda827d93fe73ad6d14ff5d9ce7807cef7f3f87ea950f0f"} err="failed to get container status \"884eb54b44dcd39fcdda827d93fe73ad6d14ff5d9ce7807cef7f3f87ea950f0f\": rpc error: code = NotFound desc = could not find container \"884eb54b44dcd39fcdda827d93fe73ad6d14ff5d9ce7807cef7f3f87ea950f0f\": container with ID starting with 884eb54b44dcd39fcdda827d93fe73ad6d14ff5d9ce7807cef7f3f87ea950f0f not found: ID does not exist" Feb 02 22:55:00 crc kubenswrapper[4789]: I0202 22:55:00.289951 4789 scope.go:117] "RemoveContainer" containerID="e5ac45ded34208685604b81214cf00ee100ff5905343616910ccf72f5f37ed1d" Feb 02 22:55:00 crc kubenswrapper[4789]: E0202 22:55:00.290311 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5ac45ded34208685604b81214cf00ee100ff5905343616910ccf72f5f37ed1d\": container with ID starting with e5ac45ded34208685604b81214cf00ee100ff5905343616910ccf72f5f37ed1d not found: ID does not exist" containerID="e5ac45ded34208685604b81214cf00ee100ff5905343616910ccf72f5f37ed1d" Feb 02 22:55:00 crc kubenswrapper[4789]: I0202 22:55:00.290353 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5ac45ded34208685604b81214cf00ee100ff5905343616910ccf72f5f37ed1d"} err="failed to get container status \"e5ac45ded34208685604b81214cf00ee100ff5905343616910ccf72f5f37ed1d\": rpc error: code = NotFound desc = could not find container \"e5ac45ded34208685604b81214cf00ee100ff5905343616910ccf72f5f37ed1d\": container with ID starting with e5ac45ded34208685604b81214cf00ee100ff5905343616910ccf72f5f37ed1d not found: ID does not exist" Feb 02 22:55:00 crc kubenswrapper[4789]: I0202 22:55:00.431626 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e03574ac-341d-45ed-b979-12a6b34b7695" path="/var/lib/kubelet/pods/e03574ac-341d-45ed-b979-12a6b34b7695/volumes" Feb 02 22:55:22 crc kubenswrapper[4789]: I0202 22:55:22.841878 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 22:55:22 crc kubenswrapper[4789]: I0202 22:55:22.842447 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" 
podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 22:55:52 crc kubenswrapper[4789]: I0202 22:55:52.842359 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 22:55:52 crc kubenswrapper[4789]: I0202 22:55:52.842990 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 22:56:16 crc kubenswrapper[4789]: I0202 22:56:16.052324 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-r4kr2"] Feb 02 22:56:16 crc kubenswrapper[4789]: I0202 22:56:16.060823 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-5c92-account-create-update-4shd8"] Feb 02 22:56:16 crc kubenswrapper[4789]: I0202 22:56:16.069967 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-r4kr2"] Feb 02 22:56:16 crc kubenswrapper[4789]: I0202 22:56:16.076927 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-5c92-account-create-update-4shd8"] Feb 02 22:56:16 crc kubenswrapper[4789]: I0202 22:56:16.433190 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a93cc873-6b0b-4eb8-be87-acda79c160af" path="/var/lib/kubelet/pods/a93cc873-6b0b-4eb8-be87-acda79c160af/volumes" Feb 02 22:56:16 crc kubenswrapper[4789]: I0202 22:56:16.434005 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7308363-74f5-4ae7-8695-3017babea57c" path="/var/lib/kubelet/pods/c7308363-74f5-4ae7-8695-3017babea57c/volumes" Feb 02 22:56:22 crc kubenswrapper[4789]: I0202 22:56:22.841405 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 22:56:22 crc kubenswrapper[4789]: I0202 22:56:22.841958 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 22:56:22 crc kubenswrapper[4789]: I0202 22:56:22.842007 4789 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" Feb 02 22:56:22 crc kubenswrapper[4789]: I0202 22:56:22.843325 4789 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bf3b364c33c8733778c6c21ccccb5be0c702c8000d59499366aaef19c768aa32"} pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 22:56:22 crc 
kubenswrapper[4789]: I0202 22:56:22.843392 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" containerID="cri-o://bf3b364c33c8733778c6c21ccccb5be0c702c8000d59499366aaef19c768aa32" gracePeriod=600 Feb 02 22:56:23 crc kubenswrapper[4789]: I0202 22:56:23.042147 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-vhtsg"] Feb 02 22:56:23 crc kubenswrapper[4789]: I0202 22:56:23.054428 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-vhtsg"] Feb 02 22:56:23 crc kubenswrapper[4789]: I0202 22:56:23.059823 4789 generic.go:334] "Generic (PLEG): container finished" podID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerID="bf3b364c33c8733778c6c21ccccb5be0c702c8000d59499366aaef19c768aa32" exitCode=0 Feb 02 22:56:23 crc kubenswrapper[4789]: I0202 22:56:23.059904 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" event={"ID":"bdf018b4-1451-4d37-be6e-05802b67c73e","Type":"ContainerDied","Data":"bf3b364c33c8733778c6c21ccccb5be0c702c8000d59499366aaef19c768aa32"} Feb 02 22:56:23 crc kubenswrapper[4789]: I0202 22:56:23.059964 4789 scope.go:117] "RemoveContainer" containerID="16b35fede0307e2768edeae9bc17688f5a8ff6427f2ae9305826087a6effbd74" Feb 02 22:56:24 crc kubenswrapper[4789]: I0202 22:56:24.074079 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" event={"ID":"bdf018b4-1451-4d37-be6e-05802b67c73e","Type":"ContainerStarted","Data":"ce3839108f1ed8baa5dcd6e692c66150be39c676b36c853a9b62016a0dc99295"} Feb 02 22:56:24 crc kubenswrapper[4789]: I0202 22:56:24.437817 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cd23310-ee19-467d-8534-f7daed0233e8" path="/var/lib/kubelet/pods/4cd23310-ee19-467d-8534-f7daed0233e8/volumes" Feb 02 22:56:37 crc kubenswrapper[4789]: I0202 22:56:37.045328 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-frq2r"] Feb 02 22:56:37 crc kubenswrapper[4789]: I0202 22:56:37.060286 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-frq2r"] Feb 02 22:56:38 crc kubenswrapper[4789]: I0202 22:56:38.439210 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52d3c3df-2d71-4415-ae65-7301f4157711" path="/var/lib/kubelet/pods/52d3c3df-2d71-4415-ae65-7301f4157711/volumes" Feb 02 22:56:48 crc kubenswrapper[4789]: I0202 22:56:48.984367 4789 scope.go:117] "RemoveContainer" containerID="f1dbfbb10046c4e8a93ab32efb973bf756440214ba330725bf9417a5ca10d47c" Feb 02 22:56:49 crc kubenswrapper[4789]: I0202 22:56:49.020465 4789 scope.go:117] "RemoveContainer" containerID="a9a5aa857d486c42197e9374cbbdf039cba49ca448c89b9de1b987c2bda95389" Feb 02 22:56:49 crc kubenswrapper[4789]: I0202 22:56:49.101755 4789 scope.go:117] "RemoveContainer" containerID="548ccb062dedfe43d98fd2e457b5cb68f1291df8673ce70470e91b45e78d3d90" Feb 02 22:56:49 crc kubenswrapper[4789]: I0202 22:56:49.125369 4789 scope.go:117] "RemoveContainer" containerID="ab833ccef2ddf75486ffa4549a048dafcf839387591d4dbc5f2cbab4d2c4d7cd" Feb 02 22:56:49 crc kubenswrapper[4789]: I0202 22:56:49.171285 4789 scope.go:117] "RemoveContainer" containerID="543c6c7d8dc94243f269f4111a2024f567753214c363b23421dee005c5ba5b3c" Feb 02 22:58:09 crc kubenswrapper[4789]: 
I0202 22:58:09.563798 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pl2l9"] Feb 02 22:58:09 crc kubenswrapper[4789]: E0202 22:58:09.564943 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e3bcb3f-4281-4e36-b361-9ed7fe3105dc" containerName="extract-content" Feb 02 22:58:09 crc kubenswrapper[4789]: I0202 22:58:09.564965 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e3bcb3f-4281-4e36-b361-9ed7fe3105dc" containerName="extract-content" Feb 02 22:58:09 crc kubenswrapper[4789]: E0202 22:58:09.564995 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e03574ac-341d-45ed-b979-12a6b34b7695" containerName="gather" Feb 02 22:58:09 crc kubenswrapper[4789]: I0202 22:58:09.565007 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="e03574ac-341d-45ed-b979-12a6b34b7695" containerName="gather" Feb 02 22:58:09 crc kubenswrapper[4789]: E0202 22:58:09.565035 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e3bcb3f-4281-4e36-b361-9ed7fe3105dc" containerName="registry-server" Feb 02 22:58:09 crc kubenswrapper[4789]: I0202 22:58:09.565048 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e3bcb3f-4281-4e36-b361-9ed7fe3105dc" containerName="registry-server" Feb 02 22:58:09 crc kubenswrapper[4789]: E0202 22:58:09.565068 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e3bcb3f-4281-4e36-b361-9ed7fe3105dc" containerName="extract-utilities" Feb 02 22:58:09 crc kubenswrapper[4789]: I0202 22:58:09.565082 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e3bcb3f-4281-4e36-b361-9ed7fe3105dc" containerName="extract-utilities" Feb 02 22:58:09 crc kubenswrapper[4789]: E0202 22:58:09.565106 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e03574ac-341d-45ed-b979-12a6b34b7695" containerName="copy" Feb 02 22:58:09 crc kubenswrapper[4789]: I0202 22:58:09.565117 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="e03574ac-341d-45ed-b979-12a6b34b7695" containerName="copy" Feb 02 22:58:09 crc kubenswrapper[4789]: I0202 22:58:09.565411 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="e03574ac-341d-45ed-b979-12a6b34b7695" containerName="gather" Feb 02 22:58:09 crc kubenswrapper[4789]: I0202 22:58:09.565433 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="e03574ac-341d-45ed-b979-12a6b34b7695" containerName="copy" Feb 02 22:58:09 crc kubenswrapper[4789]: I0202 22:58:09.565458 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e3bcb3f-4281-4e36-b361-9ed7fe3105dc" containerName="registry-server" Feb 02 22:58:09 crc kubenswrapper[4789]: I0202 22:58:09.567716 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pl2l9" Feb 02 22:58:09 crc kubenswrapper[4789]: I0202 22:58:09.576550 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pl2l9"] Feb 02 22:58:09 crc kubenswrapper[4789]: I0202 22:58:09.739031 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9-catalog-content\") pod \"redhat-operators-pl2l9\" (UID: \"ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9\") " pod="openshift-marketplace/redhat-operators-pl2l9" Feb 02 22:58:09 crc kubenswrapper[4789]: I0202 22:58:09.739077 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9-utilities\") pod \"redhat-operators-pl2l9\" (UID: \"ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9\") " pod="openshift-marketplace/redhat-operators-pl2l9" Feb 02 22:58:09 crc kubenswrapper[4789]: I0202 22:58:09.739211 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9c55\" (UniqueName: \"kubernetes.io/projected/ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9-kube-api-access-l9c55\") pod \"redhat-operators-pl2l9\" (UID: \"ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9\") " pod="openshift-marketplace/redhat-operators-pl2l9" Feb 02 22:58:09 crc kubenswrapper[4789]: I0202 22:58:09.840512 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9-utilities\") pod \"redhat-operators-pl2l9\" (UID: \"ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9\") " pod="openshift-marketplace/redhat-operators-pl2l9" Feb 02 22:58:09 crc kubenswrapper[4789]: I0202 22:58:09.840883 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9c55\" (UniqueName: \"kubernetes.io/projected/ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9-kube-api-access-l9c55\") pod \"redhat-operators-pl2l9\" (UID: \"ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9\") " pod="openshift-marketplace/redhat-operators-pl2l9" Feb 02 22:58:09 crc kubenswrapper[4789]: I0202 22:58:09.841050 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9-catalog-content\") pod \"redhat-operators-pl2l9\" (UID: \"ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9\") " pod="openshift-marketplace/redhat-operators-pl2l9" Feb 02 22:58:09 crc kubenswrapper[4789]: I0202 22:58:09.841745 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9-utilities\") pod \"redhat-operators-pl2l9\" (UID: \"ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9\") " pod="openshift-marketplace/redhat-operators-pl2l9" Feb 02 22:58:09 crc kubenswrapper[4789]: I0202 22:58:09.841763 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9-catalog-content\") pod \"redhat-operators-pl2l9\" (UID: \"ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9\") " pod="openshift-marketplace/redhat-operators-pl2l9" Feb 02 22:58:09 crc kubenswrapper[4789]: I0202 22:58:09.874033 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-l9c55\" (UniqueName: \"kubernetes.io/projected/ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9-kube-api-access-l9c55\") pod \"redhat-operators-pl2l9\" (UID: \"ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9\") " pod="openshift-marketplace/redhat-operators-pl2l9" Feb 02 22:58:09 crc kubenswrapper[4789]: I0202 22:58:09.949343 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pl2l9" Feb 02 22:58:10 crc kubenswrapper[4789]: I0202 22:58:10.469831 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pl2l9"] Feb 02 22:58:11 crc kubenswrapper[4789]: I0202 22:58:11.212290 4789 generic.go:334] "Generic (PLEG): container finished" podID="ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9" containerID="8cd6521f977bf678e6226565bc15833bd650c6cab04ad6d98f8cf9ef7c4b8e18" exitCode=0 Feb 02 22:58:11 crc kubenswrapper[4789]: I0202 22:58:11.212381 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pl2l9" event={"ID":"ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9","Type":"ContainerDied","Data":"8cd6521f977bf678e6226565bc15833bd650c6cab04ad6d98f8cf9ef7c4b8e18"} Feb 02 22:58:11 crc kubenswrapper[4789]: I0202 22:58:11.212609 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pl2l9" event={"ID":"ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9","Type":"ContainerStarted","Data":"924b258efc653a570eec4405441665d90315aec80f07d4f5c40561613ab7ef1b"} Feb 02 22:58:12 crc kubenswrapper[4789]: I0202 22:58:12.222887 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pl2l9" event={"ID":"ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9","Type":"ContainerStarted","Data":"a3f30e8bee0b6ddbe5842f6432a686a9190020002a459f7a2b9617fcbb431faf"} Feb 02 22:58:13 crc kubenswrapper[4789]: I0202 22:58:13.237712 4789 generic.go:334] "Generic (PLEG): container finished" podID="ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9" containerID="a3f30e8bee0b6ddbe5842f6432a686a9190020002a459f7a2b9617fcbb431faf" exitCode=0 Feb 02 22:58:13 crc kubenswrapper[4789]: I0202 22:58:13.237850 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pl2l9" event={"ID":"ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9","Type":"ContainerDied","Data":"a3f30e8bee0b6ddbe5842f6432a686a9190020002a459f7a2b9617fcbb431faf"} Feb 02 22:58:14 crc kubenswrapper[4789]: I0202 22:58:14.247803 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pl2l9" event={"ID":"ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9","Type":"ContainerStarted","Data":"45ee7f3de9d182b86d5754cd9d34848506f3588bcf72b33f9b1748433a863490"} Feb 02 22:58:14 crc kubenswrapper[4789]: I0202 22:58:14.265497 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pl2l9" podStartSLOduration=2.764268842 podStartE2EDuration="5.26547595s" podCreationTimestamp="2026-02-02 22:58:09 +0000 UTC" firstStartedPulling="2026-02-02 22:58:11.21426231 +0000 UTC m=+5911.509287329" lastFinishedPulling="2026-02-02 22:58:13.715469378 +0000 UTC m=+5914.010494437" observedRunningTime="2026-02-02 22:58:14.263298479 +0000 UTC m=+5914.558323538" watchObservedRunningTime="2026-02-02 22:58:14.26547595 +0000 UTC m=+5914.560500989" Feb 02 22:58:19 crc kubenswrapper[4789]: I0202 22:58:19.950323 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pl2l9" Feb 
02 22:58:19 crc kubenswrapper[4789]: I0202 22:58:19.950914 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pl2l9" Feb 02 22:58:21 crc kubenswrapper[4789]: I0202 22:58:21.011759 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pl2l9" podUID="ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9" containerName="registry-server" probeResult="failure" output=< Feb 02 22:58:21 crc kubenswrapper[4789]: timeout: failed to connect service ":50051" within 1s Feb 02 22:58:21 crc kubenswrapper[4789]: > Feb 02 22:58:30 crc kubenswrapper[4789]: I0202 22:58:30.027644 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pl2l9" Feb 02 22:58:30 crc kubenswrapper[4789]: I0202 22:58:30.083857 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pl2l9" Feb 02 22:58:30 crc kubenswrapper[4789]: I0202 22:58:30.281010 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pl2l9"] Feb 02 22:58:31 crc kubenswrapper[4789]: I0202 22:58:31.419948 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pl2l9" podUID="ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9" containerName="registry-server" containerID="cri-o://45ee7f3de9d182b86d5754cd9d34848506f3588bcf72b33f9b1748433a863490" gracePeriod=2 Feb 02 22:58:31 crc kubenswrapper[4789]: I0202 22:58:31.852342 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pl2l9" Feb 02 22:58:31 crc kubenswrapper[4789]: I0202 22:58:31.973744 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9-utilities\") pod \"ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9\" (UID: \"ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9\") " Feb 02 22:58:31 crc kubenswrapper[4789]: I0202 22:58:31.974008 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9-catalog-content\") pod \"ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9\" (UID: \"ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9\") " Feb 02 22:58:31 crc kubenswrapper[4789]: I0202 22:58:31.974094 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9c55\" (UniqueName: \"kubernetes.io/projected/ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9-kube-api-access-l9c55\") pod \"ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9\" (UID: \"ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9\") " Feb 02 22:58:31 crc kubenswrapper[4789]: I0202 22:58:31.975295 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9-utilities" (OuterVolumeSpecName: "utilities") pod "ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9" (UID: "ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 22:58:31 crc kubenswrapper[4789]: I0202 22:58:31.982060 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9-kube-api-access-l9c55" (OuterVolumeSpecName: "kube-api-access-l9c55") pod "ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9" (UID: "ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9"). InnerVolumeSpecName "kube-api-access-l9c55". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 22:58:32 crc kubenswrapper[4789]: I0202 22:58:32.076910 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9c55\" (UniqueName: \"kubernetes.io/projected/ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9-kube-api-access-l9c55\") on node \"crc\" DevicePath \"\"" Feb 02 22:58:32 crc kubenswrapper[4789]: I0202 22:58:32.076960 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 22:58:32 crc kubenswrapper[4789]: I0202 22:58:32.112037 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9" (UID: "ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 22:58:32 crc kubenswrapper[4789]: I0202 22:58:32.178653 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 22:58:32 crc kubenswrapper[4789]: I0202 22:58:32.435474 4789 generic.go:334] "Generic (PLEG): container finished" podID="ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9" containerID="45ee7f3de9d182b86d5754cd9d34848506f3588bcf72b33f9b1748433a863490" exitCode=0 Feb 02 22:58:32 crc kubenswrapper[4789]: I0202 22:58:32.435644 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pl2l9" Feb 02 22:58:32 crc kubenswrapper[4789]: I0202 22:58:32.440434 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pl2l9" event={"ID":"ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9","Type":"ContainerDied","Data":"45ee7f3de9d182b86d5754cd9d34848506f3588bcf72b33f9b1748433a863490"} Feb 02 22:58:32 crc kubenswrapper[4789]: I0202 22:58:32.440491 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pl2l9" event={"ID":"ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9","Type":"ContainerDied","Data":"924b258efc653a570eec4405441665d90315aec80f07d4f5c40561613ab7ef1b"} Feb 02 22:58:32 crc kubenswrapper[4789]: I0202 22:58:32.440526 4789 scope.go:117] "RemoveContainer" containerID="45ee7f3de9d182b86d5754cd9d34848506f3588bcf72b33f9b1748433a863490" Feb 02 22:58:32 crc kubenswrapper[4789]: I0202 22:58:32.484999 4789 scope.go:117] "RemoveContainer" containerID="a3f30e8bee0b6ddbe5842f6432a686a9190020002a459f7a2b9617fcbb431faf" Feb 02 22:58:32 crc kubenswrapper[4789]: I0202 22:58:32.502343 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pl2l9"] Feb 02 22:58:32 crc kubenswrapper[4789]: I0202 22:58:32.514823 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pl2l9"] Feb 02 22:58:32 crc kubenswrapper[4789]: I0202 22:58:32.519935 4789 scope.go:117] "RemoveContainer" containerID="8cd6521f977bf678e6226565bc15833bd650c6cab04ad6d98f8cf9ef7c4b8e18" Feb 02 22:58:32 crc kubenswrapper[4789]: I0202 22:58:32.574908 4789 scope.go:117] "RemoveContainer" containerID="45ee7f3de9d182b86d5754cd9d34848506f3588bcf72b33f9b1748433a863490" Feb 02 22:58:32 crc kubenswrapper[4789]: E0202 22:58:32.577184 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45ee7f3de9d182b86d5754cd9d34848506f3588bcf72b33f9b1748433a863490\": container with ID starting with 45ee7f3de9d182b86d5754cd9d34848506f3588bcf72b33f9b1748433a863490 not found: ID does not exist" containerID="45ee7f3de9d182b86d5754cd9d34848506f3588bcf72b33f9b1748433a863490" Feb 02 22:58:32 crc kubenswrapper[4789]: I0202 22:58:32.577260 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45ee7f3de9d182b86d5754cd9d34848506f3588bcf72b33f9b1748433a863490"} err="failed to get container status \"45ee7f3de9d182b86d5754cd9d34848506f3588bcf72b33f9b1748433a863490\": rpc error: code = NotFound desc = could not find container \"45ee7f3de9d182b86d5754cd9d34848506f3588bcf72b33f9b1748433a863490\": container with ID starting with 45ee7f3de9d182b86d5754cd9d34848506f3588bcf72b33f9b1748433a863490 not found: ID does not exist" Feb 02 22:58:32 crc kubenswrapper[4789]: I0202 22:58:32.577308 4789 scope.go:117] "RemoveContainer" containerID="a3f30e8bee0b6ddbe5842f6432a686a9190020002a459f7a2b9617fcbb431faf" Feb 02 22:58:32 crc kubenswrapper[4789]: E0202 22:58:32.577986 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3f30e8bee0b6ddbe5842f6432a686a9190020002a459f7a2b9617fcbb431faf\": container with ID starting with a3f30e8bee0b6ddbe5842f6432a686a9190020002a459f7a2b9617fcbb431faf not found: ID does not exist" containerID="a3f30e8bee0b6ddbe5842f6432a686a9190020002a459f7a2b9617fcbb431faf" Feb 02 22:58:32 crc kubenswrapper[4789]: I0202 22:58:32.578039 4789 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3f30e8bee0b6ddbe5842f6432a686a9190020002a459f7a2b9617fcbb431faf"} err="failed to get container status \"a3f30e8bee0b6ddbe5842f6432a686a9190020002a459f7a2b9617fcbb431faf\": rpc error: code = NotFound desc = could not find container \"a3f30e8bee0b6ddbe5842f6432a686a9190020002a459f7a2b9617fcbb431faf\": container with ID starting with a3f30e8bee0b6ddbe5842f6432a686a9190020002a459f7a2b9617fcbb431faf not found: ID does not exist" Feb 02 22:58:32 crc kubenswrapper[4789]: I0202 22:58:32.578076 4789 scope.go:117] "RemoveContainer" containerID="8cd6521f977bf678e6226565bc15833bd650c6cab04ad6d98f8cf9ef7c4b8e18" Feb 02 22:58:32 crc kubenswrapper[4789]: E0202 22:58:32.578641 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cd6521f977bf678e6226565bc15833bd650c6cab04ad6d98f8cf9ef7c4b8e18\": container with ID starting with 8cd6521f977bf678e6226565bc15833bd650c6cab04ad6d98f8cf9ef7c4b8e18 not found: ID does not exist" containerID="8cd6521f977bf678e6226565bc15833bd650c6cab04ad6d98f8cf9ef7c4b8e18" Feb 02 22:58:32 crc kubenswrapper[4789]: I0202 22:58:32.578706 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cd6521f977bf678e6226565bc15833bd650c6cab04ad6d98f8cf9ef7c4b8e18"} err="failed to get container status \"8cd6521f977bf678e6226565bc15833bd650c6cab04ad6d98f8cf9ef7c4b8e18\": rpc error: code = NotFound desc = could not find container \"8cd6521f977bf678e6226565bc15833bd650c6cab04ad6d98f8cf9ef7c4b8e18\": container with ID starting with 8cd6521f977bf678e6226565bc15833bd650c6cab04ad6d98f8cf9ef7c4b8e18 not found: ID does not exist" Feb 02 22:58:34 crc kubenswrapper[4789]: I0202 22:58:34.443670 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9" path="/var/lib/kubelet/pods/ca2e598c-c3ea-4a1c-adfb-54972fb3a2c9/volumes" Feb 02 22:58:52 crc kubenswrapper[4789]: I0202 22:58:52.841494 4789 patch_prober.go:28] interesting pod/machine-config-daemon-c8vcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 22:58:52 crc kubenswrapper[4789]: I0202 22:58:52.842128 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c8vcn" podUID="bdf018b4-1451-4d37-be6e-05802b67c73e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515140226070024442 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015140226071017360 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015140212015016474 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015140212016015445 5ustar corecore